Tuesday 24 March 2015

How to recover the OS using ZFS snapshot in Solaris 10

Many of us are familiar with Solaris OS recovery on a UFS root filesystem. Here we will look at how to recover Solaris 10 on a ZFS root filesystem. The assumption is that we periodically keep a snapshot of the root filesystem in an NAS location using the "zfs send" feature; this can be achieved easily with a small "zfs send" command.

Note: This procedure is applicable only if you are using ZFS for the root filesystem and sending the rpool snapshot periodically to an NAS location.
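For reference, the periodic backup this procedure assumes could look something like the sketch below. The snapshot date, hostname, and the "dataset" name (the root boot-environment dataset) are placeholders chosen to match the restore commands later in this post, and /backup stands for the mounted NFS share:

# zfs snapshot -r rpool@20120711
# zfs send rpool@20120711 > /backup/hostname.rpool.20120711.zfs
# zfs send -R rpool/ROOT/dataset@20120711 > /backup/hostname.dataset.20120711.zfs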

First we will see how to recover SPARC-based machines. Boot the system from the network if you already have a JumpStart server, or boot it from the OS DVD.

SPARC Based Systems:
==============
1) Boot system into single user mode:
ok boot net -s
or
ok boot cdrom -s
2) Mount the NFS share with the snapshots:
# mount -F nfs nfs-server:/path_to_directory /mnt
3) Recreate the root pool:
# zpool create -f -o failmode=continue -R /a -o cachefile=/etc/zfs/zpool.cache rpool mirror c1t0d0s0 c1t1d0s0
# zpool set autoreplace=on rpool
4) Restore the snapshots:
# cd /mnt
# zfs receive -Fd rpool < hostname.rpool.20120711.zfs
# zfs receive -Fd rpool < hostname.dataset.20120711.zfs
5) Verify snapshot was restored:
# zfs list
6) Create swap and dump volumes:
# zfs create -V 8g rpool/dump
# zfs set refreservation=none rpool/dump
# zfs set checksum=off rpool/dump
# zfs create -V 8g rpool/swap
7) Set pool bootfs property:
# zpool set bootfs=rpool/ROOT/dataset rpool
# zfs set canmount=noauto rpool/ROOT/dataset
# zfs set mountpoint=/ rpool/ROOT/dataset
# zfs set mountpoint=/rpool rpool
8) Install ZFS bootblk:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
9) Reboot the system:
# umount /mnt
# init 6
Upon bootup, the system may appear to hang after printing the hostname.  The system
is creating the rpool/dump device, which can take several minutes.
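Once the system is back up, the dump and swap configuration can be double-checked with plain status commands (these only display the current settings):
# dumpadm
# swap -l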
10) Delete snapshot:
# zfs list |grep @
# zfs destroy -r rpool@XXXXXXXXX

X86 Based systems
============
1.Boot system into single user mode:
This can be done by selecting failsafe mode from the GRUB menu, or by booting from one of the NICs if you have a JumpStart server.

2.Mount the NFS share with the snapshots:
You may need to configure interface networking before mounting (a sketch follows the mount command below).
 # mount -F nfs nfs-server:/path_to_directory /mnt
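If the failsafe environment comes up without networking, the interface can be configured by hand first; a sketch with an example interface name and addresses (adjust to your NIC and subnet):
 # ifconfig e1000g0 plumb
 # ifconfig e1000g0 192.168.1.10 netmask 255.255.255.0 up
 # route add default 192.168.1.1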

3.Recreate the root pool:
# zpool create -f -o failmode=continue -R /a -o cachefile=/etc/zfs/zpool.cache rpool  mirror c1t0d0s0 c1t1d0s0
 # zpool set autoreplace=on rpool
4.Restore the snapshots:
 # cd /mnt
 # zfs receive -Fd rpool < hostname.rpool.20120711.zfs
 # zfs receive -Fd rpool < hostname.dataset.20120711.zfs
5.Verify snapshot was restored:
# zfs list
6.Create swap and dump volumes:
 # zfs create -V 8g rpool/dump
 # zfs set refreservation=none rpool/dump
 # zfs set checksum=off rpool/dump
 # zfs create -V 8g rpool/swap
7.Set pool bootfs property:
 # zpool set bootfs=rpool/ROOT/dataset rpool
 # zfs set canmount=noauto rpool/ROOT/dataset
 # zfs set mountpoint=/ rpool/ROOT/dataset
 # zfs set mountpoint=/rpool rpool
8.Install ZFS bootblk:
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
9.Reboot the system:
 # umount /mnt
 # init 6
Upon bootup, the system may appear to hang after printing the hostname.  The system is creating the rpool/dump device, which can take several minutes.

10.Delete snapshot:
# zfs list |grep @
# zfs destroy -r rpool@20120711

Saturday 21 March 2015

Disk Labeling in Solaris

SMI and EFI/GPT Label
===============

In Solaris, a disk label and partition are required before a filesystem (ZFS or UFS) can be created on a physical raw disk. Other disk management software, such as Oracle ASM or Solaris Volume Manager, likewise requires a disk label and partition before the physical disk can be used.

Solaris currently supports two types of disk label on SPARC: the SMI label and the EFI/GPT label.

The Virtual Table of Contents (VTOC) label is also known as the Sun Microsystems Inc. (SMI) disk label. The SMI label's significant limitation is that it does not support disks larger than 2TB.

A disk with a traditional VTOC/SMI label has 8 partitions (0-7), as in the example below.

Example A (Solaris format/partition utility) 

partition> p
Current partition table (default):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 -    90      128.37MB    (91/0/0)      262899
  1       swap    wu      91 -   181      128.37MB    (91/0/0)      262899
  2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6        usr    wm     182 - 24619       33.67GB    (24438/0/0) 70601382
  7 unassigned    wm       0                0         (0/0/0)            0

partition> 

The GUID Partition Table (GPT) is an industry-standard definition of a disk partition table; GPT is part of the Unified Extensible Firmware Interface (UEFI) standard. In Solaris we call this the EFI/GPT label. An EFI/GPT label is required to support disks greater than 2TB in size, and Solaris ZFS uses EFI/GPT labels by default. The EFI label standard can support up to 128 partitions.

Solaris currently supports 7 partitions for disks with EFI/GPT labels.

Example of an EFI label with the default partitioning. Slice 0 contains all the usable sectors of the disk and slice 8 contains the alternate sectors.

Example  (Solaris format/partition utility) 

partition> print
Current partition table (default):
Total disk sectors available: 285673405 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34      136.22GB          285673438    
  1 unassigned    wm                 0           0               0    
  2 unassigned    wm                 0           0               0    
  3 unassigned    wm                 0           0               0    
  4 unassigned    wm                 0           0               0    
  5 unassigned    wm                 0           0               0    
  6 unassigned    wm                 0           0               0    
  8   reserved    wm         285673439        8.00MB          285689822    

partition> 

Another example of a disk with an EFI label and a customized 7-partition layout (0-6); slice 8 contains the alternate sectors.

Example  (Solaris format/partition utility) 

partition> print
Current partition table (original):
Total disk sectors available: 285673405 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34      100.00GB          209715233    
  1        usr    wm         209715234        1.00GB          211812385    
  2        usr    wm         211812386        1.00GB          213909537    
  3        usr    wm         213909538        1.00GB          216006689    
  4        usr    wm         216006690        1.00GB          218103841    
  5        usr    wm         218103842        1.00GB          220200993    
  6        usr    wm         220200994        1.00GB          222298145    
  8   reserved    wm         285673439        8.00MB          285689822    

partition> 


The quickest way to determine from Solaris whether a disk is EFI/GPT labeled is to check its partitions with the Solaris format utility: an EFI/GPT-labeled disk will have partition 8, while an SMI-labeled disk will have partition 7.
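For example, using the format utility interactively (the disk name here is the one used in the relabeling example below), a partition printout showing a slice 8 tagged "reserved" indicates an EFI/GPT label, while a slice 7 indicates an SMI label:

# format c13t4d0
format> partition
partition> print
partition> quit
format> quit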

The issue with EFI-labeled disks is that they could not be configured as a bootable device on SPARC systems. This had not been a major limitation, because the largest bootable internal disk qualified on a SPARC T5 is 600GB in capacity.
System Firmware 8.4+ (SPARC T4) and 9.1+ (SPARC T5) remove this EFI/GPT booting limitation.


Disk Relabeling

The label on the physical disk may be changed by doing the following. 
    
WARNING: Deleting the disk label and relabeling the disk will destroy ALL DATA on the disk.

STEP 1: Delete the disk label with "dd if=/dev/zero of=/dev/rdsk/cXtXdXsX bs=512 count=100".

Example 

t5-8-sin06-a:/dev/dsk# dd if=/dev/zero of=/dev/rdsk/c13t4d0s0 bs=512 count=100 
100+0 records in
100+0 records out
root@t5-8-sin06-a:/dev/dsk# 

STEP 2: Relabel the disk with the command "format -e cXtXdX".

Example E

root@t5-8-sin06-a:~# format -e c13t4d0  
selecting c13t4d0
[disk formatted]


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        inquiry    - show disk ID
        scsi       - independent SCSI mode selects
        cache      - enable, disable or query SCSI disk cache
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 1
Ready to label disk, continue? yes

format> 
format> par


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> print
Current partition table (original):
Total disk sectors available: 285673405 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34      136.22GB          285673438    
  1 unassigned    wm                 0           0               0    
  2 unassigned    wm                 0           0               0    
  3 unassigned    wm                 0           0               0    
  4 unassigned    wm                 0           0               0    
  5 unassigned    wm                 0           0               0    
  6 unassigned    wm                 0           0               0    
  8   reserved    wm         285673439        8.00MB          285689822    

partition> 

EFI/GPT boot disk requirement
Booting from an EFI/GPT-labeled disk on SPARC T4 and SPARC T5 requires that the following minimum software requirements are met.

Requirement 1.) Solaris 11.1

Requirement 2.) System Firmware 8.4+ or System Firmware 9.1+ 

System Firmware 9.1+ for SPARC T5

T5-2: Patch 17264122 (or later)
T5-4/T5-8: Patch 17264131 (or later)
T5-1B: Patch 17264114 (or later)
Netra T5-1B: Patch 17264110 (or later)

System Firmware 8.4+ for SPARC T4

SPARC T4-1: Patch 150676-01 (or later)
SPARC T4-2: Patch 150677-01 (or later)
SPARC T4-4: Patch 150678-01 (or later)
SPARC T4-1B: Patch 150679-01 (or later)
Netra SPARC T4-1: Patch 150680-01 (or later)
Netra SPARC T4-2: Patch 150681-01 (or later)
Netra SPARC T4-1B: Patch 150682-01 (or later)

Note: The Solaris 11.1 installer will prompt for the disk to be relabeled with an SMI label if the OBP firmware does not support EFI/GPT boot.
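To check whether a running system already meets these requirements, the Solaris release and the OpenBoot/firmware version can be queried; a quick sketch (the output format varies by platform):

# cat /etc/release
# prtconf -V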

UFS to ZFS conversion-Using Live upgrade


It's time to move from UFS to ZFS. From Solaris 10 onwards, the ZFS filesystem supports the root FS. If you already have the root filesystem on UFS, you can easily convert it using Live Upgrade with minimal downtime; the actual downtime is just a single reboot. Oracle has made ZFS the default filesystem from Solaris 11 onwards, and there are many reasons behind that. ZFS and Live Upgrade are very tightly bound, and together they replace the traditional OS patching and back-out method.

Here we will see that how to convert root UFS to root ZFS with step by step procedure.

Requirement:
We need a physical disk matching the current root hard disk's size. If you don't have a spare disk, you can remove the current mirror disk and use it for the ZFS conversion.

Assumptions: 
New disk: c1t1d0
The new disk should be formatted with an SMI label, with all sectors allocated to slice 0 (s0). An EFI label is not supported for the root pool.
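A sketch of applying an SMI label to the new disk with format -e, using the same label dialog shown in the Disk Labeling post above (the slice 0 sizing itself is then done interactively in the partition menu before labeling):

bash-3.00# format -e c1t1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? yes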
Creating rpool:
First, create a zpool named rpool on the newly configured disk.

bash-3.00# zpool create rpool c1t1d0s0

bash-3.00# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool    72K  7.81G    21K  /rpool
Verify whether any boot environments are already configured before naming the current one:

bash-3.00# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
Creating the new boot environment using rpool:
Now we can create a new boot environment using the newly configured zpool (i.e. rpool).
-c — current boot environment name
-n — new boot environment name
-p — Pool name

bash-3.00# lucreate -c sol_stage1 -n sol_stage2 -p rpool
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment  file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating  file system for </> in zone  on .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Updating compare databases on boot environment .
Making boot environment  bootable.
Updating bootenv.rc on ABE .
File  propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE  in GRUB menu
Population of boot environment  successful.
Creation of boot environment  successful.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    yes       no     -
sol_stage2                 yes      no     no        yes    -

Activating the new boot environment:
Once the lucreate is done, activate the new boot environment so that the system boots from the new BE from the next reboot onwards.
Note: Do not use the "reboot" command; use "init 6".

bash-3.00# luactivate sol_stage2
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE 
A Live Upgrade Sync operation will be performed on startup of boot environment <sol_stage2>.
Generating boot-sign for ABE 
NOTE: File not found in top level dataset for BE
Generating partition and slice information for ABE 
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
     mount -F ufs /dev/dsk/c1t0d0s0 /mnt
3. Run  utility with out any arguments from the Parent boot
environment root slice, as shown below:
     /mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File  propagation successful
File  propagation successful
File  propagation successful
File  propagation successful
Deleting stale GRUB loader from all BEs.
File  deletion successful
File  deletion successful
File  deletion successful
Activation of boot environment  successful.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    no        no     -
sol_stage2                 yes      no     yes       no     -      <------ here you can see "Active On Reboot" is yes
Reboot the server using init 6 to boot from the new boot environment.
bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file in top level dataset for BE  as //boot/grub/menu.lst.prev.
File  propagation successful
File  propagation successful
File  propagation successful
File  propagation successful

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      no     no        yes    -
sol_stage2                 yes      yes    yes       no     -
Now you can see that the server has booted from ZFS.
bash-3.00# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  4.60G  3.21G  34.5K  /rpool
rpool/ROOT             3.59G  3.21G    21K  legacy
rpool/ROOT/sol_stage2  3.59G  3.21G  3.59G  /
rpool/dump              512M  3.21G   512M  -
rpool/swap              528M  3.73G    16K  -
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
If everything goes fine, you can remove the old boot environment using the command below:
bash-3.00# ludelete -f sol_stage1
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file in top level dataset for BE  as //boot/grub/menu.lst.prev.
File  propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment  deleted.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage2                 yes      yes    yes       no     -
Now we can use the deleted old boot environment's disk for rpool mirroring. Its size should be equal to or greater than the existing rpool disk.
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
Copy the partition table to the second disk:
bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2
fmthard:  New volume table of contents now in place.
Initiating the rpool mirroring:
bash-3.00# zpool attach rpool c1t1d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 1.37% done, 0h18m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0  56.9M resilvered

errors: No known data errors
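As the zpool attach output above reminds us, the newly attached disk also needs the GRUB boot blocks installed so that it is bootable; using the same command form shown earlier in this blog (disk name from this example):

bash-3.00# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0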
Once the resilver is done, the system will be running on ZFS with root mirroring.
After migrating to ZFS, you have to use Live Upgrade for OS patching.

Friday 20 March 2015

Solaris 10 Migrating From UFS to ZFS with Live Upgrade


I just tested a migration of root from UFS to ZFS on an X4200. The plan is straightforward using Oracle's Live Upgrade method. I started by creating a pool.


bash-3.00# uname -a
SunOS x4200 5.10 Generic_142910-17 i86pc i386 i86pc


bash-3.2# echo |format

       0. c1t0d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>       ----->UFS
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
       1. c1t1d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>       ----->ZFS
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0

bash-3.00# zpool create -f rpool c1t1d0s0 

bash-3.00# zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool  67.5G  95.5K  67.5G     0%  ONLINE  -

bash-3.00# lucreate -c ufs_BE -n zfs_BE -p rpool
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufs_BE>.
Creating initial configuration for primary boot environment <ufs_BE>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufs_BE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfs_BE>.
Source boot environment is <ufs_BE>.
Creating file systems on boot environment <zfs_BE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfs_BE>.
Populating file systems on boot environment <zfs_BE>.
Analyzing zones.
Mounting ABE <zfs_BE>.
Cloning mountpoint directories.
Generating file list.
Copying data from PBE <ufs_BE> to ABE <zfs_BE>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfs_BE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <ufs_BE>.
Making boot environment <zfs_BE> bootable.
Updating bootenv.rc on ABE <zfs_BE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfs_BE> in GRUB menu
Population of boot environment <zfs_BE> successful.
Creation of boot environment <zfs_BE> successful.

bash-3.00# lustatus 
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufs_BE                     yes      yes    yes       no     -         
zfs_BE                     yes      no     no        yes    - 

bash-3.00# lufslist -n ufs_BE
               boot environment name: ufs_BE
               This boot environment is currently active.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1       swap        542868480 -                   -
/dev/dsk/c1t0d0s0       ufs       72826629120 /                   -
bash-3.00# lufslist -n zfs_BE
               boot environment name: zfs_BE
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap        543162368 -                   -
rpool/ROOT/zfs_BE       zfs       11916598272 /                   -
rpool                   zfs       14692786176 /rpool              -
bash-3.00# 


bash-3.00# luactivate zfs_BE
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufs_BE>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfs_BE>.

Setting failsafe console to <ttya>.
Generating boot-sign for ABE <zfs_BE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfs_BE>
Generating partition and slice information for ABE <zfs_BE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1t0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfs_BE> successful.
bash-3.00# 

bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/zfs_BE       66G    11G    53G    18%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   6.6G   376K   6.6G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
                        64G    11G    53G    18%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   6.6G    36K   6.6G     1%    /tmp
swap                   6.6G    32K   6.6G     1%    /var/run
rpool                   66G    33K    53G     1%    /rpool

bash-3.00# lustatus 
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufs_BE                     yes      no     no        yes    -         
zfs_BE                     yes      yes    yes       no     -  


bash-3.00# luactivate ufs_BE
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <zfs_BE>

Setting failsafe console to <ttya>.
Generating boot-sign for ABE <ufs_BE>
Generating partition and slice information for ABE <ufs_BE>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/zfs_BE
     zfs set mountpoint=<mountpointName> rpool/ROOT/zfs_BE 
     zfs mount rpool/ROOT/zfs_BE

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.
5. umount /mnt
6. zfs set mountpoint=/ rpool/ROOT/zfs_BE 
7. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <ufs_BE> successful.
bash-3.00# 
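The luupgrade below assumes the Solaris 10 1/13 media has already been mounted at /mnt and auto-registration disabled, roughly as follows (the ISO path is an example, and /dev/lofi/1 assumes this is the first lofi device; the same steps appear in the Live Upgrade post below):

bash-3.00# lofiadm -a /path/to/sol-10-u11-ga-x86-dvd.iso
bash-3.00# mount -F hsfs /dev/lofi/1 /mnt
bash-3.00# echo "autoreg=disable" > /var/tmp/no-autoreg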

bash-3.00# luupgrade -u -n zfs_BE -s /mnt -k /var/tmp/no-autoreg

System has findroot enabled GRUB
No entry for BE <zfs_BE> in GRUB menu
Copying failsafe kernel from media.
64995 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Cannot write the indicated output key file (autoreg_key).
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfs_BE>.
Checking for GRUB menu on ABE <zfs_BE>.
Saving GRUB menu on ABE <zfs_BE>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <zfs_BE>.
Performing the operating system upgrade of the BE <zfs_BE>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <zfs_BE>.
Updating package information on boot environment <zfs_BE>.
Package information successfully updated on boot environment <zfs_BE>.
Adding operating system patches to the BE <zfs_BE>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot 
environment <zfs_BE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot 
environment <zfs_BE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment <zfs_BE>. Before you activate boot 
environment <zfs_BE>, determine if any additional system maintenance is 
required or if additional media of the software distribution must be 
installed.
The Solaris upgrade of the boot environment <zfs_BE> is complete.
Creating miniroot device
Configuring failsafe for system.
Failsafe configuration is complete.
Installing failsafe
Failsafe install is complete.
bash-3.00# 


bash-3.00# lustatus 
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufs_BE                     yes      yes    yes       no     -         
zfs_BE                     yes      no     no        yes    -         


bash-3.2# uname -a
SunOS x4200-sin06-b 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2# cat /etc/release 
                    Oracle Solaris 10 1/13 s10x_u11wos_24a X86
  Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                            Assembled 17 January 2013
bash-3.2# 

Wednesday 18 March 2015

Solaris 10 Live Upgrade on a UFS filesystem

The main advantages of Live Upgrade are minimizing downtime and allowing the system administrator to revert to the original OS in case of a patching failure.

  • lucreate to create a new boot environment.
  • luupgrade to patch the new inactive boot environment.
  • luactivate to activate the new boot environment.
bash-3.00# uname -a
SunOS x4200 5.10 Generic_142910-17 i86pc i386 i86pc

bash-3.00# echo |format
       0. c1t0d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0

       1. c1t1d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0

bash-3.00# cat /etc/release
                    Oracle Solaris 10 9/10 s10x_u9wos_14a X86
     Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
                            Assembled 11 August 2010

If your disk is under SVM, remove it from SVM control and apply the patch on one disk. Here is the disk layout of my system:

disk 1 Partition:
------------------

c1t0d0s0    /
c1t0d0s1    swap
c1t0d0s2    backup

disk 2 partition:
--------------------

c1t1d0s0    /rootbackup
c1t1d0s1    swap
c1t1d0s2    backup

The partition on the second disk (/rootbackup) is the same size as the root (/) partition, and it must not appear as in use in /etc/vfstab.
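If /rootbackup is currently mounted, a quick way to free the slice before lucreate is to unmount it and comment out its /etc/vfstab entry (a sketch; the mount point comes from the layout above):

bash-3.00# umount /rootbackup
bash-3.00# vi /etc/vfstab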

This example explains how to upgrade a Solaris 10 9/10 system to the Solaris 10 1/13 release. 

Before upgrading, you must install the Solaris Live Upgrade packages from the release to which you are upgrading. 
New capabilities are added to the upgrade tools, so installing the new packages from the target release is important. 

1.Remove the old Live upgrade packages:

bash-3.00# pkgrm SUNWluu SUNWlur SUNWlucfg

Do not remove SUNWluzone and SUNWzoneu (Ignore the warning)

In this example, you will upgrade from Solaris 10 9/10 to Solaris 10 1/13, so you must get the Solaris Live Upgrade packages from the Solaris 10 1/13 DVD.


2. Install Live Upgrade package:

Insert the Solaris DVD, then run the "liveupgrade20" command from the "Solaris_10/Tools/Installers" directory.

You can run the command without the "-noconsole" and "-nodisplay" options if you want to display the GUI.

bash-3.00# cd /cdrom/sol-10-u11-ga-x86/Solaris_10/Tools/Installers/

Note:
-----------------------------------------------------------------------------------
In Case of ISO image

bash-3.00# lofiadm -a /share/iso/10/u11/x86/sol-10-u11-ga-x86-dvd.iso
bash-3.00# lofiadm
bash-3.00# mount -F hsfs /dev/lofi/1 /mnt
bash-3.00# cd /mnt/Solaris_10/Tools/Installers/
------------------------------------------------------------------------------------
bash-3.00# ./liveupgrade20 -noconsole -nodisplay

3. Verify Live Upgrade packages are installed

bash-3.00# pkginfo -l SUNWluu SUNWlur SUNWlucfg

4. Run the “lucreate” command to create a copy of the active boot environment.

bash-3.00# lucreate -c Sol10u9 -C /dev/dsk/c1t0d0s2 -m /:/dev/dsk/c1t1d0s0:ufs -m -:/dev/dsk/c1t1d0s1:swap -n Sol10u11

-c Names the current active boot environment. Since I'm running Update 9, I've used Sol10u9.
-C The current root disk. Normally lucreate finds it automatically; under SVM it might not be able to, in which case you can specify the current active boot disk with this option.
-m Specifies where to create the alternative boot environment.
-n The name for the alternative boot environment. Since I'm going to install Update 11, I've named it Sol10u11.

The output of the above command is as follows...

Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <Sol10u9>.
Creating initial configuration for primary boot environment <Sol10u9>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c1t0d0s2> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <Sol10u9> PBE Boot Device </dev/dsk/c1t0d0s2>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <Sol10u11>.
Source boot environment is <Sol10u9>.
Creating file systems on boot environment <Sol10u11>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c1t1d0s0>.
Mounting file systems for boot environment <Sol10u11>.
Calculating required sizes of file systems for boot environment <Sol10u11>.
Populating file systems on boot environment <Sol10u11>.
Analyzing zones.
Mounting ABE <Sol10u11>.
Cloning mountpoint directories.
Generating file list.
Copying data from PBE <Sol10u9> to ABE <Sol10u11>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <Sol10u11>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <Sol10u9>.
Making boot environment <Sol10u11> bootable.
Updating bootenv.rc on ABE <Sol10u11>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <Sol10u11> in GRUB menu
Population of boot environment <Sol10u11> successful.
Creation of boot environment <Sol10u11> successful.

bash-3.00# lustatus

Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
Sol10u9                    yes      yes    yes       no     -         
Sol10u11                   yes      no     no        yes    -         

bash-3.00# lufslist -n Sol10u11

               boot environment name: Sol10u11

Filesystem              fstype    device size Mounted on          Mount Options

----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t1d0s1       swap        542868480 -                   -
/dev/dsk/c1t1d0s0       ufs       72826629120 /                   -

bash-3.00# lufslist -n Sol10u9 

               boot environment name: Sol10u9
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options

----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1       swap        542868480 -                   -
/dev/dsk/c1t0d0s0       ufs       72826629120 /  


5. After the new boot environment is created, begin the upgrade procedure:

bash-3.00# luupgrade -u -n Sol10u11 -s /cdrom/cdrom0

Note:
-------------------------------------------------------------------------------
In case of ISO image

 bash-3.00#luupgrade -u -n Sol10u11 -s /mnt   

If you get an error related to auto-registration, please do as follows

bash-3.00# echo "autoreg=disable" > /var/tmp/no-autoreg
bash-3.00# regadm status
Solaris Auto-Registration is currently disabled
bash-3.00# luupgrade -u -n Sol10u11 -s /mnt -k /var/tmp/no-autoreg
-------------------------------------------------------------------------------

The output of the above command is as follows...

System has findroot enabled GRUB
No entry for BE <Sol10u11> in GRUB menu
Copying failsafe kernel from media.
64995 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
###########################################################
 NOTE: To improve products and services, Oracle Solaris communicates
 configuration data to Oracle after rebooting. 

 You can register your version of Oracle Solaris to capture this data
 for your use, or the data is sent anonymously. 

 For information about what configuration data is communicated and how
 to control this facility, see the Release Notes or
 www.oracle.com/goto/solarisautoreg. 

 INFORMATION: After activated and booted into new BE <Sol10u11>,
 Auto Registration happens automatically with the following Information 

autoreg=disable
###########################################################
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <Sol10u11>.
Checking for GRUB menu on ABE <Sol10u11>.
Saving GRUB menu on ABE <Sol10u11>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <Sol10u11>.
Performing the operating system upgrade of the BE <Sol10u11>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE <Sol10u11>.
Updating package information on boot environment <Sol10u11>.
Package information successfully updated on boot environment <Sol10u11>.
Adding operating system patches to the BE <Sol10u11>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot 
environment <Sol10u11> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot 
environment <Sol10u11> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment <Sol10u11>. Before you activate boot 
environment <Sol10u11>, determine if any additional system maintenance is 
required or if additional media of the software distribution must be 
installed.
The Solaris upgrade of the boot environment <Sol10u11> is complete.
Creating miniroot device
Configuring failsafe for system.
Failsafe configuration is complete.
Installing failsafe
Failsafe install is complete.


6. After step 5 is finished, it is time to activate the new environment.

bash-3.00# luactivate Sol10u11

bash-3.00# lustatus

Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
Sol10u9                    yes      yes    no        no     -         
Sol10u11                   yes      no     yes       no     -         



7. Reboot the server with init 6 or shutdown -y -g0 -i6

bash-3.00# init 6    (don't use reboot)

bash-3.00# uname -a
SunOS x4200 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.00# cat /etc/release 
                    Oracle Solaris 10 1/13 s10x_u11wos_24a X86
  Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                            Assembled 17 January 2013

Note:
-------------------------------------------------------------------------------------
init 6 will boot the system from disk 1. It should come up on Solaris 10 U11 and you are good to go. If you have issues booting from the second disk due to installation problems, you can go back to the original boot environment (Sol10u9, on disk 0), then mount the alternative boot environment's /var filesystem and look for the upgrade logs in /var/sadm/system/logs/upgrade_log.
If booting from disk 1 is successful, you can reboot back to Update 9 (disk 0) and apply the patches to Update 11 (disk 1) as described below.
In the same way you can install the 10_Recommended patchset on the alternative boot environment; the Solaris installcluster script has a -B option for specifying an alternative boot environment. Before applying the patchset, we need to install its prerequisite patches on the current active boot environment. You can do that using:
-------------------------------------------------------------------------------------
bash-3.00# ./installcluster --apply-prereq --s10patchset
Setup ....

Recommended OS Patchset Solaris 10 x86 (2015.03.13)

Application of patches started : 2015.03.18 10:32:50

Applying 120901-03 ( 1 of 11) ... skipped
Applying 121334-04 ( 2 of 11) ... skipped
Applying 119255-91 ( 3 of 11) ... success
Applying 119318-01 ( 4 of 11) ... skipped
Applying 121297-01 ( 5 of 11) ... skipped
Applying 138216-01 ( 6 of 11) ... skipped
Applying 147062-02 ( 7 of 11) ... success
Applying 148337-01 ( 8 of 11) ... success
Applying 146055-07 ( 9 of 11) ... success
Applying 142252-02 (10 of 11) ... success
Applying 125556-14 (11 of 11) ... success

Application of patches finished : 2015.03.18 10:33:35

Following patches were applied :
 119255-91     148337-01     146055-07     142252-02     125556-14
 147062-02

Following patches were skipped :
 Patches already applied
 120901-03     121334-04     119318-01     121297-01     138216-01

Installation of prerequisite patches complete.

Install log files written :
  /var/sadm/install_data/s10x_rec_patchset_short_2015.03.18_10.32.50.log
  /var/sadm/install_data/s10x_rec_patchset_verbose_2015.03.18_10.32.50.log
bash-3.00# 

Now install the 10_Recommended patches on the inactive boot environment using the -B option.

bash-3.00# ./installcluster -B Sol10u11 --s10patchset

Setup ....


Recommended OS Patchset Solaris 10 x86 (2015.03.13)

Application of patches started : 2015.03.18 10:54:00

Applying 120901-03 (  1 of 367) ... skipped
Applying 121334-04 (  2 of 367) ... skipped
Applying 119255-91 (  3 of 367) ... success
Applying 119318-01 (  4 of 367) ... skipped
Applying 121297-01 (  5 of 367) ... skipped
Applying 138216-01 (  6 of 367) ... skipped

Applying 147062-02 (  7 of 367) ... success
.
.
.
.

Installation of patch set to alternate boot environment complete.

Please remember to activate boot environment Sol10u11 with luactivate(1M)
before rebooting.

bash-3.00# luactivate Sol10u11
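Then, as emphasized earlier, reboot with init (not reboot) to come up on the patched boot environment:

bash-3.00# init 6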