Solaris 10 Branded Zone Kernel Patching Procedure
This blog explains the steps to create a new boot environment in an Oracle Solaris 10 branded zone on Oracle Solaris 11.1 (with SRU 6.4 or higher), and then patch, activate, and boot into the new boot environment.
By default, a solaris10 branded zone has only one boot environment. If additional boot environments are needed, they can be created, activated, and destroyed using ZFS snapshots, clones, and dataset properties.
Beginning with Solaris 11.1 SRU 6.4, the ZFS dataset user property com.oracle.zones.solaris10:activebe exists to support multiple boot environments for solaris10 branded zones. To activate a boot environment, set the com.oracle.zones.solaris10:activebe property on the zone's ROOT dataset.
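For example, to see which boot environment is currently active, you can query this property from inside the zone (a quick sanity check, not part of the original procedure; the dataset name rpool/ROOT matches the example below):
# zfs get -H -o value com.oracle.zones.solaris10:activebe rpool/ROOT
zbe-0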
An upgrade installation of a Solaris 10 branded zone boot environment is currently not supported.
Example: From inside a solaris10 branded zone, create a new boot environment, patch it, activate it, and boot to it:
============================================
#> uname -a
SunOS Sol-10 5.10 Generic_Virtual sun4v sparc sun4v
#> zfs list -r -t fs rpool/ROOT
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT                   28.4G  54.7G    31K  legacy
rpool/ROOT/zbe-0             28.4G  7.61G  3.69G  /
rpool/ROOT/zbe-0/opt         20.0G  7.61G  1.51G  /opt
rpool/ROOT/zbe-0/opt/oracle  18.4G  7.61G  18.4G  /opt/oracle
rpool/ROOT/zbe-0/var         4.66G  5.34G  4.51G  /var
#>
============================================
1. Create a new Boot Environment:
# zfs snapshot -r rpool/ROOT/zbe-0@snap
# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/zbe-1
# zfs promote rpool/ROOT/zbe-1
If the boot environment has /var as a separate dataset, also do the following:
# zfs clone -o mountpoint=/var -o canmount=noauto rpool/ROOT/zbe-0/var@snap rpool/ROOT/zbe-1/var
# zfs promote rpool/ROOT/zbe-1/var
If the boot environment has /opt or any other separate dataset, create clones for those datasets as well:
# zfs clone -o mountpoint=/opt -o canmount=noauto rpool/ROOT/zbe-0/opt@snap rpool/ROOT/zbe-1/opt
# zfs promote rpool/ROOT/zbe-1/opt
# zfs clone -o mountpoint=/opt/oracle -o canmount=noauto rpool/ROOT/zbe-0/opt/oracle@snap rpool/ROOT/zbe-1/opt/oracle
# zfs promote rpool/ROOT/zbe-1/opt/oracle
NOTE: The message "The directory is not empty" can be ignored.
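Before mounting, you can verify that the zbe-1 clones now exist alongside zbe-0 (an optional sanity check, not part of the original procedure):
# zfs list -r rpool/ROOT
The listing should now show rpool/ROOT/zbe-1 and its child datasets in addition to the zbe-0 entries shown earlier.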
2. Mount the new boot environment
# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-1
# zfs mount -o mountpoint=/mnt/opt rpool/ROOT/zbe-1/opt
# zfs mount -o mountpoint=/mnt/opt/oracle rpool/ROOT/zbe-1/opt/oracle
# zfs mount -o mountpoint=/mnt/var rpool/ROOT/zbe-1/var
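To confirm that all datasets of the alternate BE are mounted under /mnt before patching (an optional check, not part of the original procedure):
# zfs mount | grep zbe-1
# df -h /mnt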
3. Patch the boot environment.
Apply the prerequisite patches to the current boot environment (BE) - see the patchset README for the passcode:
# cd /nfs-share/sol10KJP/10_Recommended
# ./installpatchset --apply-prereq --s10patchset
Note: In order to install a patchset to a new boot environment, the current boot environment has to be at a minimum/prerequisite patch level. This ensures that the patch utilities themselves (currently patch 119254-92, dated Jun 10 2015) and associated tools work properly. That is achieved by running ./installpatchset --apply-prereq --<passcode> against the current boot environment. There is no need to apply the prerequisite patches to the alternate BE.
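If you want to confirm that the patch utilities on the current BE are already at the required level (an optional check; 119254 is the patch utilities patch mentioned in the note above):
# showrev -p | grep 119254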
Apply Recommended Patchset to the alternate BE - see README for the passcode
# ./installpatchset -R /mnt --s10patchset
It is strongly recommended to wait for the pkgserv exit timeout (approximately 300 seconds after the last pkgadd/patchadd command has finished). If pkgserv does not time out within those 300 seconds (5 minutes), check with the ps command whether any pkg/patch processes are still active:
# ps -ef | egrep -i "pkg|patch"
If there are still pkg/patch processes active, check their activity and wait for them to terminate, then wait another 5 minutes for the pkgserv exit timeout.
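A narrower check for the pkgserv daemon itself, as an alternative to the ps pipeline above (not part of the original procedure):
# pgrep -fl pkgserv
No output means pkgserv has already exited and it is safe to proceed.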
4. Unmount the alternate boot environment.
If the boot environment has /var or any other separate dataset, unmount those first, in reverse order of mounting (deepest datasets first):
# zfs unmount rpool/ROOT/zbe-1/var
# zfs unmount rpool/ROOT/zbe-1/opt/oracle
# zfs unmount rpool/ROOT/zbe-1/opt
# zfs unmount rpool/ROOT/zbe-1
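Verify that nothing from the alternate BE is still mounted (an optional check, not part of the original procedure); the command should return no output:
# zfs mount | grep zbe-1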
5. Activate the new boot environment.
# zfs get -s local all
# zfs set com.oracle.zones.solaris10:activebe=zbe-1 rpool/ROOT
6. Verify the new boot environment has been activated
# zfs get -s local all
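For a more targeted verification, query the property directly; it should now report zbe-1 (a focused variant of the check above, not part of the original procedure):
# zfs get -H -o value com.oracle.zones.solaris10:activebe rpool/ROOT
zbe-1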
7. Follow the reboot procedure for your branded zone.
===================================================
Example: If you need to set the zone's active boot environment from the global zone
1. Shut down the non-global zone
# zlogin s10_zone shutdown
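Verify that the zone has stopped before changing the property (an optional check, not part of the original procedure); the zone should no longer be listed as running:
# zoneadm list -cv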
2. Determine the zonepath dataset.
# zonecfg -z s10_zone info zonepath
zonepath: /zones/s10_zone
# dataset=$(zfs list -Ho name /zones/s10_zone)
3. Set the 'activebe' property in ZFS
# zfs set com.oracle.zones.solaris10:activebe=zbe-1 "$dataset/rpool/ROOT"
4. Boot the non-global zone
# zoneadm -z s10_zone boot
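From the global zone you can confirm the setting took effect (an optional check, not part of the original procedure; $dataset is the variable set in step 2):
# zfs get -H -o value com.oracle.zones.solaris10:activebe "$dataset/rpool/ROOT"
zbe-1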
===================================================
Comments:

Q: After patching the Solaris 10 branded zone to Update 11, /etc/release on the server still shows u10:

Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Assembled 23 August 2011

What is the issue here?

A: This is expected - when you patch a server, the OS update level in /etc/release never changes. That file reflects the release the system was originally installed from, not its current patch level.

Q: Regarding these commands:

# cd /nfs-share/sol10KJP/10_Recommended
# ./installpatchset --apply-prereq --s10patchset

Is /nfs-share/sol10KJP/10_Recommended the directory where the patches and all the prerequisites are kept?

A: Yes - in this example it is an NFS-shared directory containing the extracted Solaris 10 Recommended Patchset, which includes the installpatchset tool, the prerequisite patches, and the patches applied to the alternate BE.