Linux, Unix and Technology for the mere mortal

Understanding ZFS datasets in Oracle Solaris non-global zones

When working with Solaris zones, it’s inevitable that you will need to pass some kind of additional storage into the zone from the Global zone. One of my favourite ways to do this is to delegate a ZFS dataset to the zone. The benefit here is that you can run standard zfs commands to manipulate the datasets from inside the zone. This is great if you have a zone-level administrator who doesn’t have access to the Global zone but is familiar with administering ZFS.

Another way to make the storage visible inside the zone is to pass it in as a filesystem from the zonecfg utility on the Global zone. This is beneficial if your zone administrator is not familiar with ZFS, or where there is no need to manipulate the datasets once they’ve been made visible.

In very specific circumstances, passing a filesystem into the zone while the zone is in a running state will be successful, but when it comes time to reboot the Global zone or the zone for a maintenance slot, you may find that the system hasn’t booted correctly, or that the zones have become unavailable or are unable to boot.

Delegation

In this example I have configured a zone named zone.devzero.co.za and an additional zpool called storage. I will create a new dataset called storage/zone.devzero.co.za/ds1 and delegate it to the zone.

Note: The only downside to delegating a ZFS dataset to a zone is that the zone actually needs a reboot; zoneadm -z zonename apply simply does not make the storage available.

root@solaris:~# zfs create -p storage/zone.devzero.co.za/ds1

root@solaris:~# zfs list -r storage
NAME                             USED  AVAIL  REFER  MOUNTPOINT
storage                          164K  9.78G    32K  /storage
storage/zone.devzero.co.za        63K  9.78G    32K  /storage/zone.devzero.co.za
storage/zone.devzero.co.za/ds1    31K  9.78G    31K  /storage/zone.devzero.co.za/ds1

root@solaris:~# zonecfg -z zone.devzero.co.za
zonecfg:zone.devzero.co.za> add dataset
zonecfg:zone.devzero.co.za:dataset> set name=storage/zone.devzero.co.za/ds1
zonecfg:zone.devzero.co.za:dataset> set alias=ds1
zonecfg:zone.devzero.co.za:dataset> end
zonecfg:zone.devzero.co.za> verify
zonecfg:zone.devzero.co.za> commit
zonecfg:zone.devzero.co.za> exit
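
With the configuration committed, the zone still needs to be restarted before the delegated dataset shows up. A minimal sketch from the Global zone (assuming it’s acceptable to restart the zone at this point):

root@solaris:~# zoneadm -z zone.devzero.co.za reboot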

Once the zone has been rebooted, only the delegated dataset will be visible inside the zone. Note that the dataset’s parent, storage/zone.devzero.co.za, is not visible to the zone.

root@zone:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
ds1    9.94G   163K  9.94G   0%  1.00x  ONLINE  -
rpool  79.5G  7.52G  72.0G   9%  1.00x  ONLINE  -

root@zone:~# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
ds1                  31K  9.78G    31K  /ds1
rpool               759M  70.6G    31K  /rpool
rpool/ROOT          758M  70.6G    31K  legacy
rpool/ROOT/solaris  669M  70.6G   612M  /

It’s now possible to go ahead and create additional datasets from inside the zone and mount them where needed.

Note: From the Global zone the mountpoint of the dataset is relative to the zonepath.

root@zone:~# zfs create -o mountpoint=/child ds1/child
root@zone:~# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
ds1         62K  9.78G    31K  /ds1
ds1/child   31K  9.78G    31K  /child


----Global Zone----

root@solaris:~# zfs list -r storage
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
storage                               214K  9.78G    32K  /storage
storage/zone.devzero.co.za             93K  9.78G    31K  /storage/zone.devzero.co.za            <--- note the parent is mounted at /storage and not /zones
storage/zone.devzero.co.za/ds1         62K  9.78G    31K  /zones/zone.devzero.co.za/root/ds1
storage/zone.devzero.co.za/ds1/child   31K  9.78G    31K  /zones/zone.devzero.co.za/root/child

ZFS filesystem pass through

In this example let’s assume that no ZFS administration will be done inside the zone and that the mountpoint will be statically controlled from the Global Zone.

I’ll reuse the dataset I created earlier and pass it in as a filesystem.

To do this I’ll change the mountpoint property of the storage/zone.devzero.co.za/ds1 dataset to legacy and then pass it in using zonecfg.

root@solaris:~# zfs set mountpoint=legacy storage/zone.devzero.co.za/ds1
root@solaris:~# zfs list -r storage
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
storage                               224K  9.78G    32K  /storage
storage/zone.devzero.co.za             93K  9.78G    31K  /storage/zone.devzero.co.za
storage/zone.devzero.co.za/ds1         62K  9.78G    31K  legacy                                 <--- Our target
storage/zone.devzero.co.za/ds1/child   31K  9.78G    31K  /child

root@solaris:~# zonecfg -z zone.devzero.co.za
zonecfg:zone.devzero.co.za> add fs
zonecfg:zone.devzero.co.za:fs> set special=storage/zone.devzero.co.za/ds1
zonecfg:zone.devzero.co.za:fs> set dir=/new/ds1
zonecfg:zone.devzero.co.za:fs> set type=zfs

zonecfg:zone.devzero.co.za:fs> info
fs 0:
 dir: /new/ds1
 special: storage/zone.devzero.co.za/ds1
 raw not specified
 type: zfs
 options: []

zonecfg:zone.devzero.co.za:fs> end
zonecfg:zone.devzero.co.za> verify
zonecfg:zone.devzero.co.za> commit
zonecfg:zone.devzero.co.za> exit


Now I’ll apply the zone configuration and the new filesystem will be mounted inside the zone and made visible immediately.

root@solaris:~# zoneadm -z zone.devzero.co.za apply
zone 'zone.devzero.co.za': Checking: Mounting fs dir=/new/ds1
zone 'zone.devzero.co.za': Applying the changes

root@solaris:~# zlogin zone.devzero.co.za
[Connected to zone 'zone.devzero.co.za' pts/2]

root@zone:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  79.5G  7.59G  71.9G   9%  1.00x  ONLINE  -

^--- Note how ds1 is no longer available as a pool inside the zone
v--- However /new/ds1 is visible, mounted from a "peculiar" new device named /new/ds1 rather than the dataset name

root@zone:~# df -h
Filesystem            Size  Used  Available  Capacity  Mounted on
rpool/ROOT/solaris-3   71G  699M        71G        1%  /
/dev                    0K    0K         0K        0%  /dev
rpool/ROOT/solaris-3/var
=== Snip ===
/new/ds1              9.8G   31K       9.8G        1%  /new/ds1
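
To confirm what was actually mounted at /new/ds1, the zone’s mount table can be checked; a quick sketch (the exact field layout in /etc/mnttab will vary):

root@zone:~# grep /new/ds1 /etc/mnttab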


Problematic filesystem configuration

Up to this point, all of the above configuration is clean and any reboots of either the zone or the Global Zone should go off without a hitch.
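
Before a maintenance reboot it’s worth checking for the problematic configuration described next: datasets that are not delegated (zoned is off) but whose mountpoint sits underneath a zonepath. A minimal sketch from the Global zone (assuming the pool is called storage and the zones live under /zones):

root@solaris:~# zfs list -o name,zoned,mountpoint -r storage | grep /zones/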

When a zone is in an installed or attached state, the entire zone’s root filesystem is browsable from the Global zone. It may seem like a logical idea to simply provision the storage and mount it directly in the path of the running zone. This will in fact work, and the zone will see the storage as if it were being passed through from the zone configuration as in our previous example.

root@solaris:~# zfs create -o mountpoint=/zones/zone.devzero.co.za/root/ds2 storage/zone.devzero.co.za/ds2

root@solaris:~# zlogin zone.devzero.co.za
[Connected to zone 'zone.devzero.co.za' pts/2]
root@zone:~# df -h
Filesystem            Size  Used  Available  Capacity  Mounted on
rpool/ROOT/solaris-3   39G  699M        38G        2%  /
/dev                    0K    0K         0K        0%  /dev
/new/ds1              9.8G   31K       9.8G        1%  /new/ds1   <---- File system presented from zonecfg
rpool/ROOT/solaris-3/var
=== snip ===
/ds2                  9.8G   31K       9.8G        1%  /ds2       <----- File system mounted directly into the zonepath by ZFS

root@zone:~# exit
logout
[Connection to zone 'zone.devzero.co.za' pts/2 closed]


However, once the Global Zone has been rebooted and the zone is booted, an error occurs stating that the directory is not empty.

root@solaris:~# reboot
reboot: Halting 1 zone.
Connection to 192.168.56.102 closed by remote host.

ssh dvnt@192.168.56.102
dvnt@solaris:~$ sudo su -
root@solaris:~# zoneadm list -cv
  ID NAME                 STATUS     PATH                        BRAND    IP
   0 global               running    /                           solaris  shared
   - zone.devzero.co.za   installed  /zones/zone.devzero.co.za   solaris  excl

root@solaris:~# zoneadm -z zone.devzero.co.za boot
zone 'zone.devzero.co.za': cannot mount 'zone.devzero.co.za/zone.devzero.co.za/rpool/ROOT/solaris-3' on '/zones/zone.devzero.co.za/root/': directory is not empty
zone 'zone.devzero.co.za': ERROR: Error: Command <zfs mount -o nodevices,mountpoint=/zones/zone.devzero.co.za/root/ zone.devzero.co.za/zone.devzero.co.za/rpool/ROOT/solaris-3> exited with status 1
zone 'zone.devzero.co.za': ERROR: ZFS temporary mount of zone.devzero.co.za/zone.devzero.co.za/rpool/ROOT/solaris-3 on /zones/zone.devzero.co.za/root/ failed.
zone 'zone.devzero.co.za': ERROR: Unable to mount zone root dataset.
zoneadm: zone zone.devzero.co.za: call to zoneadmd(1M) failed: zoneadmd(1M) returned an error 9 (zone state change failed)
root@solaris:~#

Once the incorrect mountpoint is removed from the dataset, ZFS leaves behind an empty directory where the dataset was mounted. This empty directory needs to be removed before the zone can be booted again.

root@solaris:~# zfs set mountpoint=none storage/zone.devzero.co.za/ds2
root@solaris:~# rmdir /zones/zone.devzero.co.za/root/ds2/

root@solaris:~# zoneadm -z zone.devzero.co.za boot
root@solaris:~# zlogin zone.devzero.co.za
[Connected to zone 'zone.devzero.co.za' pts/2]

root@zone:~#
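
If ds2 does need to be visible inside the zone, the same approaches shown earlier apply: delegate the dataset with add dataset, or set its mountpoint to legacy and pass it in with add fs. A minimal sketch of the delegation route (as before, the zone needs another reboot before ds2 appears):

root@solaris:~# zonecfg -z zone.devzero.co.za
zonecfg:zone.devzero.co.za> add dataset
zonecfg:zone.devzero.co.za:dataset> set name=storage/zone.devzero.co.za/ds2
zonecfg:zone.devzero.co.za:dataset> set alias=ds2
zonecfg:zone.devzero.co.za:dataset> end
zonecfg:zone.devzero.co.za> commit
zonecfg:zone.devzero.co.za> exit
root@solaris:~# zoneadm -z zone.devzero.co.za reboot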

 
