I will detail the creation of an NFS mount point on a Windows 10 client in Part 2 of this series. For now, let's focus on an Ubuntu server offering NFS storage and an Ubuntu client trying to connect to it.
The Setup
My NFS server is going to be based on Ubuntu 18.04 LTS. You can use your favorite Linux distro, FreeBSD, or any other OS that supports OpenZFS. My reason for using Ubuntu 18.04 is that it is quite popular, which considerably lowers the barrier to entry.
The NFS share is supposed to be available only on my LAN, which has a subnet mask of 255.255.255.0 and 192.168.0.1 as its default gateway. In plain English, this means that all the devices connected to my home network (WiFi, Ethernet, et al) will have IP addresses ranging from 192.168.0.2 through 192.168.0.254.
The NFS server will be configured to allow only devices with the aforementioned IP addresses to have access. This ensures that only devices connected to my LAN can access my files, and that the outside world can't reach them. If you have an 'open WiFi' setup, or if the security on your router's endpoint is dubious, this does not guarantee any security.
I wouldn't recommend running NFS over the public internet without additional security measures.
Lastly, the commands to be run on the NFS server have the prompt server $, and the commands to be run on the client side have the prompt client $.
Creating OpenZFS pool and Dataset
1. Creating a zpool
If you already have a zpool up and running, skip this step. On my NFS server, which is running Ubuntu 18.04 LTS server, I first install OpenZFS.
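On Ubuntu 18.04 the OpenZFS tools ship in the zfsutils-linux package:
server $ sudo apt update
server $ sudo apt install zfsutils-linux #installs OpenZFS on Ubuntu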
Next, we will list all the available block devices with lsblk, to see the new disks (and partitions) waiting to be formatted with ZFS:
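server $ lsblk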
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  89.5M  1 loop /snap/core/6130
loop1    7:1    0  86.9M  1 loop /snap/core/4917
loop2    7:2    0  91.1M  1 loop /snap/core/6259
sda      8:0    0    50G  0 disk
├─sda1   8:1    0     1M  0 part
└─sda2   8:2    0    50G  0 part /
sdb      8:16   0   931G  0 disk
sdc      8:32   0   931G  0 disk
sr0     11:0    1  1024M  0 rom
A typical example is shown above, but your device names might be wildly different. You will have to use your own judgement, and be very careful about it: you don't want to accidentally format your OS disk. For example, the sda2 partition clearly has the root filesystem (/) as its mount point, so it is not wise to touch it. If you are using new disks, chances are they won't have a mount point or any kind of partitioning.
Once you know the names of your devices, we will use the zpool create command to format a couple of these block devices (here, sdb and sdc) into a zpool with a single vdev made up of two mirrored disks.
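Assuming the two spare disks are sdb and sdc, as in the listing above, the mirrored pool (named tank) can be created like this:
server $ sudo zpool create tank mirror sdb sdc
Then inspect the result: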
server $ sudo zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors
Moving forward, you can add disks in sets of two (each pair forming a new vdev) to grow the size of this zpool; the new vdevs will show up as mirror-1, mirror-2, and so on. You don't have to create your zpool the way I did: you can use mirrors with more disks, you can use striping (no redundancy, but better performance), or you can use RAIDZ. You can learn more about it here.
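As a sketch, a hypothetical second pair of disks (say, sdd and sde) could be added as mirror-1 like this:
server $ sudo zpool add tank mirror sdd sde #sdd and sde are placeholders for your new disks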
At the end of the day, what matters is that we have created a zpool named tank, upon which the shared NFS data will live. Let's create a dataset that will be shared. First make sure that the pool, named 'tank', is mounted; the default mount point is '/tank'.
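One way to verify that, assuming the default mount point:
server $ sudo zfs get mounted,mountpoint tank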
server $ sudo zfs create tank/nfsshare #create a new dataset on top of the pool
Setting Permissions
When sharing an NFS directory, the superuser on the client system doesn't get superuser access to the share. While the client-side superuser is capable of doing anything on the client machine, the NFS mount is technically not a part of the client machine, and letting operations by the client-side superuser run as the server-side superuser could result in security issues. By default, NFS therefore maps client-side superuser actions to the nobody user and nogroup group (a mechanism known as root squashing). If you intend to access the mounted files as root, the dataset on our NFS server should be owned accordingly.
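Assuming the dataset is mounted at the default /tank/nfsshare, that means:
server $ sudo chown -R nobody:nogroup /tank/nfsshare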
The NFS server will run any action by the client-side root as the user nobody, so the ownership set above will allow those operations to go through.
If you are using a different (regular) username, it is often convenient to have a user with the same exact username (and, ideally, the same UID) on both sides.
Creating NFS share
Once you have the zpool and dataset created, you should install the NFS server package from your package manager:
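server $ sudo apt install nfs-kernel-server #on Ubuntu/Debian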
Traditionally, an NFS server uses the /etc/exports file to get a list of approved clients and the files they will have access to. However, we will be using ZFS' inbuilt sharenfs feature to achieve the same.
Simply turn on the sharenfs property for the dataset:
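server $ sudo zfs set sharenfs=on tank/nfsshare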
Earlier, I alluded to giving only certain IPs access. You can do so as follows (the @ prefix denotes a network range):
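server $ sudo zfs set sharenfs='rw=@192.168.0.0/24' tank/nfsshare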
The 'rw' stands for read-write permissions, and it is followed by the range of IPs. Make sure that ports 111 and 2049 are open on your firewall. If you are using ufw, you can check that by running:
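server $ sudo ufw status
If the ports are not open yet, you can allow them for the LAN only, for example:
server $ sudo ufw allow from 192.168.0.0/24 to any port 111
server $ sudo ufw allow from 192.168.0.0/24 to any port 2049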
Make a note of your server's IP on the LAN, by using the ifconfig or ip addr command. Let's call it server.ip.
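For example:
server $ ip addr show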
Client-Side Mounting
Once the share is created, you can mount it on your client machine by running the command:
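The client needs the NFS client utilities first (on Ubuntu, the nfs-common package), and then the share can be mounted with mount -t nfs:
client $ sudo apt install nfs-common
client $ sudo mount -t nfs server.ip:/tank/nfsshare /mnt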
This will mount the NFS share at /mnt, but you could just as easily have picked any other mount point of your choice.
Conclusion
File sharing is probably the most important aspect of system administration. It improves your understanding of the storage stack, networking, user permissions and privileges. You will quickly realize the importance of the Principle of Least Privilege: that is to say, give a user only the barest access it needs to do its job.
You will also learn about interoperability between different operating systems: Windows users can access NFS files, as can Mac and BSD users. You can't restrict yourself to one OS when dealing with a network of machines that all have their own conventions and vernacular. So go ahead and experiment with your NFS share. I hope you learned something.