# Upgrading from r151002 to r151004

## Note Current BE Name

Make a note of the name of your current boot environment (BE) in case you need to roll back.

```
$ /usr/sbin/beadm list
```

It may look something like this:

```
BE              Active Mountpoint Space Policy Created
omnios          NR     /          1.37G static 2012-09-26 18:26
omnios-backup-1 -      -          211K  static 2012-10-01 14:16
omnios-backup-2 -      -          110K  static 2012-10-18 19:55
omnios-backup-3 -      -          205K  static 2012-10-18 20:02
```

The current BE is the one marked 'NR' (active now, active on reboot).

## Update pkg(1)

A newer version of the pkg client is necessary to perform the upgrade.

```
# /usr/bin/pkg install pkg
```
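
To confirm that the newer client was installed, you can check the package with pkg info (a quick sanity check; the exact version string will differ):

```
# /usr/bin/pkg info pkg
```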

## Pre-update Fixes

Update two packages in the current BE that could otherwise break the upgrade process. These are metadata changes only; there are no on-disk content changes. If you have non-global zones, these packages must be updated within each zone as well (one way to do that from the global zone is sketched after the command below).

```
# /usr/bin/pkg install entire@11,5.11-0.151002:20121030T141303Z runtime/perl-5142/[email protected],5.11-0.151002:20121030T141343Z
```
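
For non-global zones, one way to apply the same updates inside each zone is to run the install through zlogin(1); a sketch only, where the zone name is a placeholder and the package FMRIs are the same ones shown above:

```
# /usr/sbin/zlogin <zonename> /usr/bin/pkg install <same-packages-as-above>
```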

## Perform the Upgrade

If you have non-global native (ipkg) zones, they must be shut down and detached at this time. Once the global zone is updated and rebooted, the zones will be upgraded as they are re-attached to the system. This is not necessary for s10-branded zones or KVM guests.

After shutting down the zones gracefully (`zlogin <zonename>; shutdown -i5 -g0 -y`):

```
# /usr/sbin/zoneadm -z <zonename> detach
```
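
The graceful shutdown itself can also be issued non-interactively from the global zone rather than from an interactive zlogin session; for example (zone name is a placeholder):

```
# /usr/sbin/zlogin <zonename> /usr/sbin/shutdown -i5 -g0 -y
```

Give the zone a moment to reach the installed state (zoneadm list -cv will show it) before detaching it.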

It would also be a good idea to take a ZFS snapshot of the zone root in case it's needed for rollback (such as if there are issues with the zone upgrade).

```
# /usr/sbin/zfs snapshot -r <zoneroot>@r151002
```

where <zoneroot> is the name of the ZFS dataset whose mountpoint corresponds to the value of zonepath in the zone's configuration. There are child datasets under this one, so we use the -r option to snapshot them all recursively.
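
If you're not sure which dataset that is, you can look it up by the zone's path; for example (the path here is illustrative):

```
# /usr/sbin/zfs list -o name,mountpoint /zones/myzone
```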

Update the global zone. The --be-name argument is optional, but it's nice to use a name that's more meaningful than “omnios-N”.

```
# /usr/bin/pkg update --be-name=omnios-r151004
```

This will create a new BE and install r151004 packages into it. When it is complete, reboot your system. The new BE will now be the default option in GRUB. If your hardware supports it, you may get into the new BE via fast reboot, in which case you won't see a GRUB menu.
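
For example, to reboot:

```
# /usr/sbin/reboot
```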

Once booted into your new r151004 BE, if you don't have non-global zones, you're done.

If you have non-global native (ipkg) zones, attach each one with the -u option, which will upgrade the zone's core packages to match the global zone.

```
# /usr/sbin/zoneadm -z <zonename> attach -u
```

Assuming the attach succeeds, the zone may be booted as usual:

```
# /usr/sbin/zoneadm -z <zonename> boot
```
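
To verify that each zone has come back up, you can list the zone states, for example:

```
# /usr/sbin/zoneadm list -cv
```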

## Rolling Back

### ZFS Upgrade

OmniOS r151004 includes a new zpool version that supports feature flags. Post-upgrade you will see a message like this in zpool status:

```
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
```

If you want to be able to roll back to your previous environment, DO NOT upgrade your zpools yet. The upgrade is one-way and irreversible. Any zpool that is upgraded will be unusable on r151002.
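
Once you are satisfied with r151004 and no longer need the rollback option, the pools can be upgraded; for example, to upgrade all pools at once:

```
# /usr/sbin/zpool upgrade -a
```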

If, for some reason, you need to roll back to r151002, simply activate the previous BE and reboot.

```
# /usr/sbin/beadm activate <be-name>
# /usr/sbin/reboot
```

where <be-name> is the name of the previous BE that you noted at the beginning.

If you have native zones but had not yet attempted to attach them, they remain unchanged from the previous BE and may be re-attached once you have booted into the previous BE. Simply omit the -u from the attach example above. Note that even if you tried to attach and the attach failed completely (perhaps due to a packaging issue), the zone data remains unchanged and you can still safely re-attach it in the previous BE.

If you are rolling back because a problem with the zone attach left the zone partially updated, you will need to roll back each of the zone root datasets that you snapshotted after detaching. There is no recursive rollback, so each dataset has to be rolled back one at a time:

```
# /usr/sbin/zfs rollback <zoneroot>@r151002
# /usr/sbin/zfs rollback <zoneroot>/ROOT@r151002
# /usr/sbin/zfs rollback <zoneroot>/ROOT/zbe@r151002
```
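
Once the datasets are rolled back, the zone can be re-attached (without -u, as noted above) and booted:

```
# /usr/sbin/zoneadm -z <zonename> attach
# /usr/sbin/zoneadm -z <zonename> boot
```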