Thursday, 26 November 2015

Patching ZFS based SPARC system with Live Upgrade


Solaris – Patching with Live Upgrade, ZFS makes it so much easier
Solaris Live Upgrade is a superb tool that lets your operating system create alternate boot environments. Live Upgrade is a simple way to update or patch systems while minimizing downtime and mitigating the risks often associated with patching efforts. An admin can patch the system quickly without any interruption, because the patches are applied to the alternate boot environment, which the system boots from on the next reboot after it has been activated. Live Upgrade creates a copy of the active boot environment, and that copy is given a name; it becomes the alternate BE, or boot environment. Because there are multiple BEs, the true beauty of Live Upgrade shows through: if problems occur with the newly created or patched BE, the original BE can be used as the backup boot image. Reverting to a previous BE is the back-out plan for almost all Live Upgrade installations. Historically with UFS, or even (I dread those days) with SVM, the lucreate command was much more complicated because you had software RAID underneath. ZFS, with its snapshots and pools, makes it so easy it's astounding. At the OBP (boot PROM) level it's mostly the same: at the ok prompt, a boot -L will list the BEs, assuming the correct boot disk is mapped properly.
Live Upgrade and patching
Patching a Solaris 10 ZFS based system is done the same way you would patch any basic Solaris system. You should be able to patch a Solaris 10 ZFS based system with Live Upgrade successfully, and with no outage. The patches are downloaded and unzipped in a temporary location (not /tmp). The assumption is that you have a valid and working rpool with ZFS volumes. Let's look at our existing BEs; the active boot environment is Nov2012.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -

I need a new BE for next month, December. I normally keep two BEs and rotate and lurename them, but for this blog article I will create a new one.

# lucreate -n Dec2012
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <Dec2012>.
Source boot environment is <Nov2012>.
Creating file systems on boot environment <Dec2012>.
Populating file systems on boot environment <Dec2012>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@Dec2012>.
Creating clone for <rpool/ROOT/s10s_u10wos_17b@Dec2012> on <rpool/ROOT/Dec2012>.
Mounting ABE <Dec2012>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <Dec2012>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <Nov2012>.
Making boot environment <Dec2012> bootable.
Population of boot environment <Dec2012> successful.
Creation of boot environment <Dec2012> successful.
Pretty slick. The lucreate, in conjunction with ZFS, created the rpool/ROOT/s10s_u10wos_17b@Dec2012 snapshot, which was then cloned to rpool/ROOT/Dec2012. The rpool/ROOT/Dec2012 clone is what you will see at the OBP when you do a boot -L. Let's look at our BE status:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     no        yes    -
Let's patch the new Dec2012 BE. The assumption here is that we have downloaded the latest Recommended patch cluster from the Sun or Oracle site (depending on where your allegiance lies). We patch the BE while the system is running and doing whatever the system is supposed to do. Let's say it's a DNS/NTP/Jumpstart server? Don't know. Could be anything. I've downloaded the patch cluster and unzipped it in /var/tmp.
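If you want to follow along, the staging looks something like this (the cluster zip file name is just a placeholder for whatever you downloaded):

# cd /var/tmp
# unzip 10_Recommended.zip

The unpacked cluster contains the patches and a patch_order file, which the luupgrade command below makes use of.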
# uname -a
SunOS tweetybird 5.10 Generic_147440-12 sun4v sparc sun4v
# cd /var/tmp
# luupgrade -n Dec2012 -s /var/tmp/10_Recommended/patches -t `cat patch_order`
Validating the contents of the media </var/tmp/10_Recommended/patches>.
The media contains 364 software patches that can be added.
Mounting the BE <Dec2012>.
Adding patches to the BE <Dec2012>.
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
...
Unmounting the BE <Dec2012>
The patch add to the BE <Dec2012> completed.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    yes       no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     no        yes    -

# luactivate Dec2012

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Nov2012                    yes      yes    no        no     -
Oct2012                    yes      no     no        yes    -
Dec2012                    yes      no     yes       no     -
Let's reboot and make sure the proper BE comes up. You must use either init or shutdown; do not use halt or fastboot.
# init 6
After the server reboots, Dec2012 should come up automatically with the newly applied patch bundle, so Dec2012 is the new active BE. Let's check the kernel patch level:
# uname -a
SunOS tweetybird 5.10 Generic_147440-26 sun4v sparc sun4v
Looks good. With ZFS, Live Upgrade is so simple now. Heck, Live Upgrade works wonders when you have a UFS based root volume and you dearly want to migrate over to a ZFS root volume. You will need a ZFS-capable kernel to start. Create a pool called rpool using slices (not the whole disk), lucreate into that rpool, activate the new BE, reboot, and you are booting off a new ZFS based Solaris system. There are a few tricks to creating the proper type of rpool; maybe another blog entry on that. But Live Upgrade is a great tool for migrating UFS systems to ZFS, again with a slick back-out option.
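A minimal sketch of that UFS-to-ZFS migration, assuming a spare SMI-labelled slice c0t1d0s0 for the new pool (the device and BE names here are examples only):

# zpool create rpool c0t1d0s0    (root pools must live on a slice, not a whole disk)
# lucreate -n zfsBE -p rpool     (copies the running UFS BE into the ZFS pool)
# luactivate zfsBE
# init 6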
Disaster – You need an easy backout plan
Thankfully, having multiple BEs means you can back out simply by choosing one of the previously installed BEs. If the system boots up without trouble but applications are failing, simply luactivate the original BE and reboot. If the system fails to boot (yikes, this is rare), then from the boot PROM, list the BEs and choose the one to boot from.
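For the first case, the back-out really is just two commands, using the BE names from this example:

# luactivate Nov2012
# init 6

For the second case, the boot PROM session looks something like this: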
ok boot -L
...
Boot device: /pci@1f,0/pci@1/scsi@8/disk@0,0  File and args: -L
zfs-file-system
Loading: /platform/sun4v/bootlst

1 Nov2012
2 Oct2012
3 Dec2012
Select environment to boot: [ 1 - 3 ]: 1

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/Nov2012
and off you go. In special cases, when you have to back out and booting from the original BE also fails, you will need to boot in failsafe mode, import the root pool and mount the BE root dataset. Instructions are as follows:
ok boot -F failsafe
Now import the root pool and mount the BE root dataset at /mnt.
# zpool import rootpool
# zfs inherit -r mountpoint rootpool/ROOT/Dec2012
# zfs set mountpoint=/mnt rootpool/ROOT/Dec2012
# zfs mount rootpool/ROOT/Dec2012
Here we activate the previous (known good) BE:
# /mnt/sbin/luactivate
If this works, you are golden; now reboot with init 6.
# init 6
Please Note: Live Upgrade and LDoms Require an Extra Step
A quick note about Live Upgrade, ZFS and LDoms: preserving the Logical Domains constraints database file when using the Oracle Solaris 10 Live Upgrade feature requires some hand-holding. This is a special situation. If you are using Live Upgrade on a control domain, you need to append the following line to the bottom of the /etc/lu/synclist file:
# echo "/var/opt/SUNWldm/ldom-db.xml     OVERWRITE" >> /etc/lu/synclist
This line is important, as it forces the database to be copied automatically from the active boot environment to the new boot environment when you switch boot environments. Otherwise, as you may well guess, you lose your LDom configuration.
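A quick sanity check that the entry made it in:

# grep ldom-db /etc/lu/synclist
/var/opt/SUNWldm/ldom-db.xml     OVERWRITE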

Thursday, 22 October 2015

Adding or Changing Swap Space in an Oracle Solaris ZFS Root Environment


A swap volume cannot be removed if it is in use. You can confirm whether the current swap volume is in use by comparing the values in the blocks column with those in the free column of the swap -l output.

If the blocks in the two columns are equal, the swap area is not busy. 

# swap -l
swapfile                 dev   swaplo              blocks      free
/dev/zvol/dsk/rpool/swap 256,1      16 1058800 1058800

If the current swap area is not in use, you can resize the current swap volume.

# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE    SOURCE
rpool/swap  volsize   517M     -

# zfs set volsize=2g rpool/swap

# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE    SOURCE
rpool/swap  volsize   2G       -


If the current swap area is in use, you can add another swap volume.


# zfs create -V 2G rpool/swap2

Activate the second swap volume.

# swap -a /dev/zvol/dsk/rpool/swap2

# swap -l
swapfile                  dev  swaplo   blocks   free
/dev/zvol/dsk/rpool/swap  256,1      16 1058800 1058800
/dev/zvol/dsk/rpool/swap2 256,3      16 4194288 4194288
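
To have the second swap volume come back after a reboot, add a matching entry to /etc/vfstab (a sketch; adjust the zvol path to your own pool and volume names):

/dev/zvol/dsk/rpool/swap2   -    -    swap    -    no    -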

Friday, 10 April 2015

Administering boot environments in Solaris 11 with beadm



          Administering boot environments in Solaris 11 is almost the same as Solaris 10's Live Upgrade. In Solaris 10 we use the lu commands: lucreate, luactivate, lumount, luumount and lustatus. In Solaris 11, all of these tasks are carried out with the beadm command.

         Here we will perform a few simple operations to understand beadm in Solaris 11. The first step is to create a new boot environment and add one sample package to it. Then we activate the new BE and verify whether the sample package is installed. After that we bring the system back to the old boot environment by activating the old BE again. A rough mapping of the lu commands to beadm is shown below.
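
For quick reference, the Solaris 10 commands map roughly onto beadm subcommands like this (a loose correspondence, not an exact one-to-one mapping):

lucreate    ->  beadm create
luactivate  ->  beadm activate
lumount     ->  beadm mount
luumount    ->  beadm unmount
lustatus    ->  beadm list
ludelete    ->  beadm destroy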


1. List the current boot environments. In the Active column, NR stands for active Now (N) and active on Reboot (R). You can also see that no snapshots exist on the system.

root@netra-t5440:/# date
Thu Apr  9 06:08:10 PDT 2015

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
solaris NR     /          2.63G static 2015-01-08 11:58 

root@netra-t5440:/# zfs list |grep @
root@netra-t5440:/# 

2. Clone the current BE to a new BE called BE_NEW. Unlike Solaris 10, the snapshots are kept in the background, so you see only the new BE's datasets. The datasets named with BE_NEW belong to the new boot environment.

root@netra-t5440:/# beadm create BE_NEW

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
BE_NEW  -      -          81.0K static 2015-04-09 06:29 
solaris NR     /          2.63G static 2015-01-08 11:58 

root@netra-t5440:/# beadm list -a BE_NEW
BE/Dataset/Snapshot      Active Mountpoint Space Policy Created          
-------------------      ------ ---------- ----- ------ -------          
BE_NEW
   rpool/ROOT/BE_NEW     -      -          80.0K static 2015-04-09 06:29 
   rpool/ROOT/BE_NEW/var -      -          1.0K  static 2015-04-09 06:29 

3. Mount the new boot environment.

root@netra-t5440:/# mkdir /BE_NEW

root@netra-t5440:/# beadm mount BE_NEW /BE_NEW

root@netra-t5440:/# df -h /BE_NEW
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/BE_NEW      134G   2.3G        94G     3%    /BE_NEW

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
BE_NEW  -      /BE_NEW    81.0K static 2015-04-09 06:29 
solaris NR     /          2.63G static 2015-01-08 11:58 

4. Now we will install a new package on BE_NEW.

root@netra-t5440:/# pkg -R /BE_NEW verify -v samba
pkg verify: no packages matching 'samba' installed

root@netra-t5440:/# pkg -R /BE_NEW install -v samba
           Packages to install:        12
            Services to change:         1
     Estimated space available:  93.96 GB
Estimated space to be consumed: 660.50 MB
          Rebuild boot archive:        No

Changed packages:
solaris
  library/desktop/gobject/gobject-introspection
    None -> 0.9.12,5.11-0.175.2.0.0.41.0:20140609T232030Z
  library/desktop/libglade
    None -> 2.6.4,5.11-0.175.2.0.0.35.0:20140317T124538Z
  .
  .
  .
  .
Services:
  restart_fmri:
    svc:/system/manifest-import:default

Editable files to change:
  Install:
    etc/dbus-1/system.d/avahi-dbus.conf
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              12/12     4240/4240  134.9/134.9  3.9M/s

PHASE                                          ITEMS
Installing new actions                     5005/5005
Updating package state database                 Done 
Updating package cache                           0/0 
Updating image state                            Done 
Creating fast lookup database                   Done 
Updating package cache                           2/2 
root@netra-t5440:/# 

Note: You need an IPS publisher configured in order to install new packages.
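
If no publisher is configured yet, it can be added along these lines (the repository URI is only an example; point it at whichever IPS repository you actually use):

root@netra-t5440:/# pkg publisher
root@netra-t5440:/# pkg set-publisher -g http://pkg.oracle.com/solaris/release/ solaris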

5. Verify the package on BE_NEW.

root@netra-t5440:/# pkg -R /BE_NEW verify -v samba
PACKAGE                                                                 STATUS
pkg://solaris/service/network/samba                                         OK

root@netra-t5440:/# pkg -R /BE_NEW list -v samba
FMRI                                                                         IFO
pkg://solaris/service/network/samba@3.6.23,5.11-0.175.2.0.0.42.1:20140623T021406Z i-- 

6. Activate BE_NEW.

root@netra-t5440:/# beadm activate BE_NEW

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space  Policy Created          
--      ------ ---------- -----  ------ -------          
BE_NEW  R      /BE_NEW    3.12G  static 2015-04-09 06:29 
solaris N      /          229.0K static 2015-01-08 11:58 

7. Reboot so that the system boots from BE_NEW.

root@netra-t5440:/# init 6

8. Verify that the system has booted from BE_NEW.

root@netra-t5440:~# beadm list
BE      Active Mountpoint Space  Policy Created          
--      ------ ---------- -----  ------ -------          
BE_NEW  NR     /          3.36G  static 2015-04-09 06:29 
solaris -      -          87.24M static 2015-01-08 11:58 

Note: Look at the Active column to confirm the BE state. N = active now; R = active on reboot.

9. Verify that the installed package is available in the current boot environment.

root@netra-t5440:~# pkg list samba
NAME (PUBLISHER)                                  VERSION                    IFO
service/network/samba                             3.6.23-0.175.2.0.0.42.1    i--

10. You can also verify on the old BE whether the package is available there or not.

root@netra-t5440:~# mkdir /Old_BE
root@netra-t5440:~# beadm mount solaris /Old_BE
root@netra-t5440:~# pkg -R /Old_BE list samba
pkg list: No packages matching 'samba' installed


Rollback operation
============
1. At any time you can roll Solaris 11 back to the old boot environment using the command below.

root@netra-t5440:~# beadm activate solaris

root@netra-t5440:~# beadm list
BE      Active Mountpoint Space   Policy Created          
--      ------ ---------- -----   ------ -------          
BE_NEW  N      /          678.50M static 2015-04-09 06:29 
solaris R      /Old_BE    2.72G   static 2015-01-08 11:58 

N- Active now
R- Active upon Reboot

2. Reboot the server using "init 6".

root@netra-t5440:~# init 6

3. Now you won't see the new package on the system.

root@netra-t5440:~# beadm list
BE      Active Mountpoint Space   Policy Created          
--      ------ ---------- -----   ------ -------          
BE_NEW  -      -          686.82M static 2015-04-09 06:29 
solaris NR     /          2.79G   static 2015-01-08 11:58 

root@netra-t5440:~# pkg list samba
pkg list: No packages matching 'samba' installed
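
Once you are sure the patched environment is no longer needed, it can be removed. This is just a cleanup sketch; destroying a boot environment cannot be undone, so double-check the BE name first.

root@netra-t5440:~# beadm destroy BE_NEW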


Thus you can manage Solaris 11 boot environments for package and patch bundle installation.

Wednesday, 8 April 2015

Understanding Veritas Volume Manager

VxVM:

VxVM is a storage management subsystem that allows you to manage physical disks as logical devices called volumes.
Provides easy-to-use online disk storage management for computing environments and SAN environments.
VxVM volumes can span multiple disks.
Provides tools to improve performance and ensure data availability and integrity.
VxVM and the Operating System:
Operates as a subsystem between OS and data management systems
VxVM depends on OS for the following:
            OS disk devices
            Device handles
            VxVM dynamic multipathing (DMP) Metadevice
VxVM relies on the following daemons:
vxconfigd: Configuration daemon maintains disk and group configurations and communicates configuration changes to the kernel.
vxiod: VxVM I/O daemon provides extended I/O operations.
vxrelocd: The hot-relocation daemon monitors VxVM for events that affect redundancy, and performs hot-relocation to restore redundancy.
VxVM Storage Management:
VxVM uses two types of objects to handle storage management.
Physical objects:
Basic storage device where the data is ultimately stored
Device names – c#t#d#s#
Virtual objects:
When one or more physical disks are brought under the control of VxVM, it creates virtual objects called “volumes”.
Virtual Objects in VxVM:
VM Disks
Disk Groups
Sub disks
Plexes
Volumes

VM Disks:
When a physical disk is placed under VxVM control, a VM disk is assigned to the physical disk.
VM disk typically includes a public region (allocated storage) and a private region where internal configuration information is stored.
VM disk has a unique name (disk media name, can be maximum 31 characters, by default takes disk## format).
Disk Groups:
Is a collection of VM disks that share a common configuration.
The default disk group is “rootdg”.
A disk group name can be a maximum of 31 characters.
Allows you to group disks into logical collections.
Volumes are created within a disk group.
Subdisks:
Is a set of contiguous disk blocks.
A VM disk can be divided into one or more subdisks.
Default name for VM disk is disk## (disk01) and default name for subdisk is disk##-## (disk01-01).
Any VM disk space that is not part of a subdisk is free space and can be used for creating new subdisks.
Plexes:
VxVM uses subdisks to build virtual objects called plexes.
A plex consists of one or more subdisks located on one or more physical disks.
Volumes:
Is a virtual disk device that appears to applications.
Consists of one or more plexes.
Default naming convention for a volume is vol## and default naming convention for plexes in a volume is vol##-##.
A volume name can contain up to 31 characters.
Can consist of up to 32 plexes.
Must have at least one plex associated.
All subdisks within a volume must belong to the same disk group.

Combining Virtual objects in VxVM:
VM disks are grouped into disk groups.
Subdisks are combined to form plexes.
Volumes are composed of one or more plexes.
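
To make the object hierarchy concrete, here is a minimal sketch of bringing a disk under VxVM control and building a simple volume on it (the device, disk group and volume names are examples only):

# /etc/vx/bin/vxdisksetup -i c1t1d0        (initialize the disk for VxVM use)
# vxdg init datadg datadg01=c1t1d0         (create a disk group containing the new VM disk)
# vxassist -g datadg make datavol 1g       (vxassist builds the subdisks, plex and volume for you)
# vxprint -g datadg -ht                    (display the resulting subdisk/plex/volume hierarchy)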

Volume Layouts in VxVM:




Non-layered Volumes:
Sub disk is restricted to mapping directly to a VM disk.
Layered Volumes:
Is constructed by mapping its subdisks to underlying volumes.

Layout Methods:
Concatenation and Spanning
Striping (RAID 0)
Mirroring (RAID 1)
Striping + Mirroring (Mirrored Stripe or RAID 0+1)
Mirroring + Striping (Striped Mirror or RAID 1+0)
RAID 5 (striping with Parity)
Online Relayout:
Online relayout allows you to change storage layouts that have already been created, without disturbing data access.
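
For example, an existing volume could be converted to RAID-5 in place with something along these lines (the names are illustrative):

# vxassist -g datadg relayout datavol layout=raid5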

Dirty Region Logging (DRL):
When DRL is enabled, it speeds recovery of mirrored volumes after a system crash.
DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume.
DRL uses this information to recover only those portions of the volume that need to be recovered.
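
A DRL is normally just an extra log added to the mirrored volume, along these lines (again, the names are illustrative):

# vxassist -g datadg addlog datavol logtype=drl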

Fast Resync:
Performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized).
Hot-Relocation:

Feature that allows a system to react automatically to I/O failures on redundant objects in VxVM and restore redundancy and access to those objects.

Tuesday, 7 April 2015

SVM Root disk mirroring


The concept
***********
Solaris Volume Manager is a software package for creating, modifying and controlling RAID-0 (concatenation and stripe) volumes, RAID-1 (mirror) volumes, RAID 0+1 volumes, RAID 1+0 volumes, RAID-5 volumes, and soft partitions.

Before configuring SVM you should take a backup of the /etc/vfstab and /etc/system files:

# cp /etc/vfstab /etc/vfstab.before_mirror

# cp /etc/system /etc/system.before_mirror

Installing the software (from Solaris 9 onwards it is installed by default with the OE)
**********************************************************
The DiskSuite product is found on the Solaris 8 "2-of-2" CD. Minimally, the drivers (SUNWmdr and SUNWmdx) and command line tools (SUNWmdu) need to be installed. 

The "metatool" GUI is in the optional SUNWmdg package. Reboot after installing the packages.

bash-2.05#pkgadd -d /cdrom/cdrom0/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages \
          SUNWmdr SUNWmdx SUNWmdu SUNWmdg

bash-2.05#shutdown -y -g0 -i6

Naming convention
****************
You can number your metadevices however you wish. I like something that makes a little bit of sense, so I use the following convention:

d0 - mirror metadevice to be mounted instead of c0t0d0s0
d10 - submirror metadevice on first disk, c0t0d0s0
d20 - submirror metadevice on second disk, c0t1d0s0

d4 - mirror metadevice to be mounted instead of c0t0d0s4
d14 - submirror metadevice on first disk, c0t0d0s4
d24 - submirror metadevice on second disk, c0t1d0s4

d5 - mirror metadevice to be mounted instead of c0t0d0s5
d15 - submirror metadevice on first disk, c0t0d0s5
d25 - submirror metadevice on second disk, c0t1d0s5

d6 - mirror metadevice to be mounted instead of c0t0d0s6
d16 - submirror metadevice on first disk, c0t0d0s6
d26 - submirror metadevice on second disk, c0t1d0s6
Etc.

***********************************************************
Make sure both disks are partitioned identically and have filesystems on them.
***********************************************************
bash-2.05# prtvtoc /dev/rdsk/c0t0d0s2 |fmthard -s - /dev/rdsk/c0t1d0s2
fmthard:  New volume table of contents now in place. 

bash-2.05# newfs /dev/rdsk/c0t1d0s0
bash-2.05# newfs /dev/rdsk/c0t1d0s4
bash-2.05# newfs /dev/rdsk/c0t1d0s5
bash-2.05# newfs /dev/rdsk/c0t1d0s6

*****************************************
Creating Meta Database in Both the system Disks
*****************************************
A minimum of two metadatabases must be on each system disk, preferably spread over more than one disk slice.


bash-2.05#
bash-2.05#metadb -afc3 c0t0d0s7  c0t1d0s7 

*******************
Mirroring the root disk
*******************

Create a metadevice out of the original root:

        bash-2.05#metainit -f d10 1 1 c0t0d0s0

Create a metadevice for the root mirror:

  bash-2.05#metainit d20 1 1 c0t1d0s0

Set up a one-way mirror of the root metadevice:

bash-2.05#metainit d0 -m d10

Configure the system to boot the root filesystem from the metadevice, using the "metaroot" command. This will make the necessary changes to /etc/vfstab and /etc/system:

bash-2.05#metaroot d0


Flush any UFS logging of the master filesystem

bash-2.05#lockfs -fa

Reboot the server so that it boots up under SVM control

bash-2.05#shutdown -y -g0 -i6

Attach the second metadevice to the root metadevice to make it a 2-way mirror:

bash-2.05#metattach d0 d20

Copy the bootblock to the mirror disk so that it becomes bootable

bash-2.05#cd /usr/platform/`uname -i`/lib/fs/ufs

bash-2.05#installboot bootblk /dev/rdsk/c0t1d0s0

So that you can boot the machine from disk1 also

******************************
Mirroring the remaining system slices
******************************

***************************************************
Create the sub-mirror metadevice for /var:

bash-2.05# metainit -f d14 1 1 c0t0d0s4
bash-2.05# metainit -f d24 1 1 c0t1d0s4

Create the Main mirror metadevice for /var:

bash-2.05# metainit d4 -m d14
***************************************************
Create the sub-mirror metadevice for /home:

bash-2.05# metainit -f d15 1 1 c0t0d0s5
bash-2.05# metainit -f d25 1 1 c0t1d0s5

Create the Main mirror metadevice for /home:

bash-2.05# metainit d5 -m d15
***************************************************
Create the sub-mirror metadevice /usr mirror:

bash-2.05# metainit -f d16 1 1 c0t0d0s6
bash-2.05# metainit -f d26 1 1 c0t1d0s6

Create the Main mirror metadevice for /usr:

bash-2.05# metainit d6 -m d16
***************************************************

Edit /etc/vfstab so that the new metadevices will be mounted:

/dev/md/dsk/d4 /dev/md/rdsk/d4  /var    ufs     1   no  logging
/dev/md/dsk/d5 /dev/md/rdsk/d5  /home   ufs     1   no  logging
/dev/md/dsk/d6 /dev/md/rdsk/d6  /usr    ufs     1   no  logging

Reboot:

bash-2.05# shutdown -y -g0 -i6

Attach the second submirrors to the mirrors to make 2-way mirrors:

bash-2.05# metattach d4 d24
bash-2.05# metattach d5 d25
bash-2.05# metattach d6 d26

****************************************************
Wait until disk activity stops before doing much else. DiskSuite's progress of syncing the second drive to the first can be monitored using the "metastat" command. Though it is not strictly necessary, it is a good idea to reboot after this, if only to make sure there are no problems and that the box will indeed come back up.
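
One simple way to keep an eye on it (each submirror that is still resyncing reports a percentage complete):

bash-2.05# metastat | grep -i "resync in progress"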

Note:
****************************************************
The following warning messages are harmless and may be safely ignored. They are an artifact of the way drivers are loaded during the boot process when you have a mirrored root or /usr file system:
WARNING: forceload of misc/md_trans failed
WARNING: forceload of misc/md_raid failed
WARNING: forceload of misc/md_hotspares failed

"WARNING: forceload of misc/md_hotspares failed". This messages can be suppressed by creating an empty hot spare pool. The following metainit command does just that:

        bash-2.05# metainit hsp001

Root mirror disk replacement under SVM



1. Make a backup of the following files

# cp /etc/vfstab /etc/vfstab.`date +'%d-%m-%Y'`
# metastat -c > /var/tmp/metastat-c.out
# metastat > /var/tmp/metastat.out
# metadb > /var/tmp/metadb.out
# cp /etc/system /var/tmp/system.`date +'%d-%m-%Y'`
# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/prtvtoc-c0t0d0s2

In this procedure we will assume that /dev/dsk/c0t1d0 disk failed.

2. Check defective drive:

# iostat -En /dev/dsk/c0t1d0 [You will see errors]

c0t1d0 Soft Errors: 0 Hard Errors: 102 Transport Errors: 231
Vendor: SEAGATE Product: ST914602SSUN146G Revision: 0400 Serial No: 070490N5V8
Size: 146.80GB <146800115712 bytes>
Media Error: 20 Device Not Ready: 0 No Device: 82 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

# cfgadm -al

Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown

# metastat -c [It will show that the bad disk's submirrors are in the maintenance state]

d6 m 10GB d16 d26 (maint)
d16 s 10GB c0t0d0s6
d26 s 10GB c0t1d0s6 (maint)
d4 m 40GB d14 d24 (maint)
d14 s 40GB c0t0d0s4
d24 s 40GB c0t1d0s4 (maint)
d3 m 40GB d13 d23 (maint)
d13 s 40GB c0t0d0s3
d23 s 40GB c0t1d0s3 (maint)
d1 m 2.0GB d11 d21 (maint)
d11 s 2.0GB c0t0d0s1
d21 s 2.0GB c0t1d0s1 (maint)
d0 m 3.0GB d10 d20 (maint)
d10 s 3.0GB c0t0d0s0
d20 s 3.0GB c0t1d0s0 (maint)
d5 m 40GB d15 d25 (maint)
d15 s 40GB c0t0d0s5
d25 s 40GB c0t1d0s5 (maint)

3. Remove mirror information from bad disk

# metadb -d /dev/dsk/c0t1d0s7
# metadetach -f d5 d25
# metadetach -f d0 d20
# metadetach -f d1 d21
# metadetach -f d3 d23
# metadetach -f d4 d24
# metadetach -f d6 d26
# metaclear d25
# metaclear d20
# metaclear d21
# metaclear d23
# metaclear d24
# metaclear d26

4. Check the successful mirror reduction:
  
# metastat -c
# metadb

5. Unconfigure disk in Solaris

# cfgadm -c unconfigure c0::dsk/c0t1d0

6. Physically replace the disk online (No need to shutdown)

Note: Some older servers (IDE drives) require downtime, as the HDD is not hot-pluggable.

7. Configure new disk

# cfgadm -c configure c0::dsk/c0t1d0 

8. Verify that disk is visible and there is no error

# echo | format [It will show you c0t1d0 disk]
# iostat -En /dev/dsk/c0t1d0
# cfgadm -al

9. Copy partition table from root disk [in this case we assume it is /dev/dsk/c0t0d0]

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

fmthard:  New volume table of contents now in place.

10. Install boot block
   
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0

11. Create state database replicas on new disk

# metadb -afc3 c0t1d0s7

12. Check that the replicas were created

# metadb [It should show you same number of replicas on both disks and on same slice]

13. Create meta devices on new disk
# metainit -f d20 1 1 c0t1d0s0
# metainit -f d21 1 1 c0t1d0s1
# metainit -f d23 1 1 c0t1d0s3
# metainit -f d24 1 1 c0t1d0s4
# metainit -f d25 1 1 c0t1d0s5
# metainit -f d26 1 1 c0t1d0s6

14. Reattach the submirrors to resynchronize the data onto the new disk
# metattach d0 d20
# metattach d5 d25
# metattach d1 d21
# metattach d3 d23
# metattach d4 d24
# metattach d6 d26

15. Check that the mirrors are resyncing

# metastat -c [It will tell you how much data has been synced on each slice]