bash-3.00# uname -a
SunOS x4200 5.10 Generic_142910-17 i86pc i386 i86pc
bash-3.00# echo | format
       0. c1t0d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>  -----> UFS
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
       1. c1t1d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>  -----> ZFS
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
bash-3.00# zpool create -f rpool c1t1d0s0
bash-3.00# zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool  67.5G  95.5K  67.5G     0%  ONLINE  -
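A side note on the zpool create step: a ZFS root pool used by Live Upgrade must live on a disk slice carrying an SMI (VTOC) label, not a whole disk or an EFI label, which is why the slice c1t1d0s0 is given rather than the bare disk c1t1d0. That labeling step is not shown in the log; one common way to prepare the second disk is to copy the VTOC from the existing boot disk (a sketch only, device names taken from the format listing above):

```shell
# Assumed preparation (not captured in the log): give the target disk an
# SMI label and a full-size slice 0 by copying the boot disk's VTOC.
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
```

On x86 the disk may additionally need a Solaris fdisk partition (fdisk -B) before the VTOC copy; check the label with prtvtoc before creating the pool.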
bash-3.00# lucreate -c ufs_BE -n zfs_BE -p rpool
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufs_BE>.
Creating initial configuration for primary boot environment <ufs_BE>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufs_BE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfs_BE>.
Source boot environment is <ufs_BE>.
Creating file systems on boot environment <zfs_BE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfs_BE>.
Populating file systems on boot environment <zfs_BE>.
Analyzing zones.
Mounting ABE <zfs_BE>.
Cloning mountpoint directories.
Generating file list.
Copying data from PBE <ufs_BE> to ABE <zfs_BE>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfs_BE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <ufs_BE>.
Making boot environment <zfs_BE> bootable.
Updating bootenv.rc on ABE <zfs_BE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfs_BE> in GRUB menu
Population of boot environment <zfs_BE> successful.
Creation of boot environment <zfs_BE> successful.
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs_BE                     yes      yes    yes       no     -
zfs_BE                     yes      no     no        yes    -
bash-3.00# lufslist -n ufs_BE
boot environment name: ufs_BE
This boot environment is currently active.
Filesystem              fstype   device size  Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1       swap        542868480 -                   -
/dev/dsk/c1t0d0s0       ufs       72826629120 /                   -
bash-3.00# lufslist -n zfs_BE
boot environment name: zfs_BE
This boot environment will be active on next system boot.
Filesystem              fstype   device size  Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap       543162368 -                   -
rpool/ROOT/zfs_BE       zfs       11916598272 /                   -
rpool                   zfs       14692786176 /rpool              -
bash-3.00#
bash-3.00# luactivate zfs_BE
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufs_BE>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfs_BE>.
Setting failsafe console to <ttya>.
Generating boot-sign for ABE <zfs_BE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfs_BE>
Generating partition and slice information for ABE <zfs_BE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c1t0d0s0 /mnt
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfs_BE> successful.
bash-3.00#
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/zfs_BE       66G    11G    53G    18%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    6.6G   376K   6.6G    1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
                        64G    11G    53G    18%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    6.6G    36K   6.6G    1%    /tmp
swap                    6.6G    32K   6.6G    1%    /var/run
rpool                   66G    33K    53G     1%    /rpool
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs_BE                     yes      no     no        yes    -
zfs_BE                     yes      yes    yes       no     -
bash-3.00# luactivate ufs_BE
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <zfs_BE>
Setting failsafe console to <ttya>.
Generating boot-sign for ABE <ufs_BE>
Generating partition and slice information for ABE <ufs_BE>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
zpool import rpool
zfs inherit -r mountpoint rpool/ROOT/zfs_BE
zfs set mountpoint=<mountpointName> rpool/ROOT/zfs_BE
zfs mount rpool/ROOT/zfs_BE
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
<mountpointName>/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. umount /mnt
6. zfs set mountpoint=/ rpool/ROOT/zfs_BE
7. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <ufs_BE> successful.
bash-3.00#
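The -k key file passed to luupgrade below must exist before the command runs; on Solaris 10 9/10 and later it is used to suppress Oracle auto-registration during the upgrade. Its creation is not captured in the log; a minimal sketch (the path matches the luupgrade command below, and the contents are the documented autoreg keyword):

```shell
# Assumed prep step (not shown in the log): create the auto-registration
# key file that luupgrade's -k option reads. "autoreg=disable" tells the
# upgrade not to attempt Oracle auto-registration.
echo "autoreg=disable" > /var/tmp/no-autoreg
```

The "Cannot write the indicated output key file (autoreg_key)" message that appears later in the output is generally reported as harmless when registration is disabled this way.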
bash-3.00# luupgrade -u -n zfs_BE -s /mnt -k /var/tmp/no-autoreg
System has findroot enabled GRUB
No entry for BE <zfs_BE> in GRUB menu
Copying failsafe kernel from media.
64995 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Cannot write the indicated output key file (autoreg_key).
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfs_BE>.
Checking for GRUB menu on ABE <zfs_BE>.
Saving GRUB menu on ABE <zfs_BE>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <zfs_BE>.
Performing the operating system upgrade of the BE <zfs_BE>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <zfs_BE>.
Updating package information on boot environment <zfs_BE>.
Package information successfully updated on boot environment <zfs_BE>.
Adding operating system patches to the BE <zfs_BE>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <zfs_BE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <zfs_BE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <zfs_BE>. Before you activate boot
environment <zfs_BE>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <zfs_BE> is complete.
Creating miniroot device
Configuring failsafe for system.
Failsafe configuration is complete.
Installing failsafe
Failsafe install is complete.
bash-3.00#
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs_BE                     yes      yes    yes       no     -
zfs_BE                     yes      no     no        yes    -
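The lustatus output above still shows ufs_BE active, yet the bash-3.2 prompt and the Generic_147148-26 kernel that follow indicate the upgraded zfs_BE was booted afterwards. The intervening commands were not captured in the log; presumably they were the re-activation of the upgraded BE followed by an init-based reboot, as the earlier luactivate notice insists:

```shell
# Presumed final steps (not captured above): activate the upgraded BE and
# reboot with init 6. luactivate warns that reboot/halt/uadmin must not be
# used here, or the system will not boot into the target BE.
luactivate zfs_BE
init 6
```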
bash-3.2# uname -a
SunOS x4200-sin06-b 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2# cat /etc/release
Oracle Solaris 10 1/13 s10x_u11wos_24a X86
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Assembled 17 January 2013
bash-3.2#