Friday 10 April 2015

Administering boot environments in Solaris 11 with beadm



          Administering boot environments in Solaris 11 is very similar to Live Upgrade in Solaris 10. In Solaris 10 we use the lu commands such as lucreate, luactivate, lumount, luumount and lustatus. In Solaris 11, all of these tasks are carried out with the single beadm command.

         Here we will perform a few simple operations to understand beadm in Solaris 11. First we create a new boot environment and install a sample package into it. Then we activate the new BE and verify whether the sample package is installed. Finally we bring the system back to the old boot environment by activating the old BE again.


1. List the current boot environments. In the Active column, NR stands for active Now (N) and active on Reboot (R). You can also see that no snapshots exist on the system yet.

root@netra-t5440:/# date
Thu Apr  9 06:08:10 PDT 2015

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
solaris NR     /          2.63G static 2015-01-08 11:58 

root@netra-t5440:/# zfs list |grep @
root@netra-t5440:/# 
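If you prefer to check for BE snapshots with beadm itself rather than zfs, the -s option lists them; a quick check (it should report no snapshots at this point):

root@netra-t5440:/# beadm list -s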

2. Clone the current BE to a new BE called BE_NEW. Unlike Solaris 10, the underlying snapshots are kept in the background, so you see only the new BE's datasets. The datasets whose names contain BE_NEW belong to the new boot environment.

root@netra-t5440:/# beadm create BE_NEW

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
BE_NEW  -      -          81.0K static 2015-04-09 06:29 
solaris NR     /          2.63G static 2015-01-08 11:58 

root@netra-t5440:/# beadm list -a BE_NEW
BE/Dataset/Snapshot      Active Mountpoint Space Policy Created          
-------------------      ------ ---------- ----- ------ -------          
BE_NEW
   rpool/ROOT/BE_NEW     -      -          80.0K static 2015-04-09 06:29 
   rpool/ROOT/BE_NEW/var -      -          1.0K  static 2015-04-09 06:29 

3. Mount the new boot environment.

root@netra-t5440:/# mkdir /BE_NEW

root@netra-t5440:/# beadm mount BE_NEW /BE_NEW

root@netra-t5440:/# df -h /BE_NEW
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/BE_NEW      134G   2.3G        94G     3%    /BE_NEW

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
BE_NEW  -      /BE_NEW    81.0K static 2015-04-09 06:29 
solaris NR     /          2.63G static 2015-01-08 11:58 
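If you only wanted to inspect the alternate BE, you could unmount it again at this point with the command below (shown for reference only); here we leave it mounted because the next step installs a package into it.

root@netra-t5440:/# beadm unmount BE_NEW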

4. Now we will install a new package on BE_NEW.

root@netra-t5440:/# pkg -R /BE_NEW verify -v samba
pkg verify: no packages matching 'samba' installed

root@netra-t5440:/# pkg -R /BE_NEW install -v samba
           Packages to install:        12
            Services to change:         1
     Estimated space available:  93.96 GB
Estimated space to be consumed: 660.50 MB
          Rebuild boot archive:        No

Changed packages:
solaris
  library/desktop/gobject/gobject-introspection
    None -> 0.9.12,5.11-0.175.2.0.0.41.0:20140609T232030Z
  library/desktop/libglade
    None -> 2.6.4,5.11-0.175.2.0.0.35.0:20140317T124538Z
  .
  .
  .
  .
Services:
  restart_fmri:
    svc:/system/manifest-import:default

Editable files to change:
  Install:
    etc/dbus-1/system.d/avahi-dbus.conf
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              12/12     4240/4240  134.9/134.9  3.9M/s

PHASE                                          ITEMS
Installing new actions                     5005/5005
Updating package state database                 Done 
Updating package cache                           0/0 
Updating image state                            Done 
Creating fast lookup database                   Done 
Updating package cache                           2/2 
root@netra-t5440:/# 

Note: You need a configured IPS publisher (package repository) to install new packages.
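A quick way to check the publisher configured for the alternate image, and to point it at a repository if needed, is sketched below; the repository URI is only an example, substitute your own local repository if you use one:

root@netra-t5440:/# pkg -R /BE_NEW publisher
root@netra-t5440:/# pkg -R /BE_NEW set-publisher -g http://pkg.oracle.com/solaris/release/ solaris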

5. Verify the package on BE_NEW.

root@netra-t5440:/# pkg -R /BE_NEW verify -v samba
PACKAGE                                                                 STATUS
pkg://solaris/service/network/samba                                         OK

root@netra-t5440:/# pkg -R /BE_NEW list -v samba
FMRI                                                                         IFO
pkg://solaris/service/network/samba@3.6.23,5.11-0.175.2.0.0.42.1:20140623T021406Z i-- 

6. Activate BE_NEW.

root@netra-t5440:/# beadm activate BE_NEW

root@netra-t5440:/# beadm list
BE      Active Mountpoint Space  Policy Created          
--      ------ ---------- -----  ------ -------          
BE_NEW  R      /BE_NEW    3.12G  static 2015-04-09 06:29 
solaris N      /          229.0K static 2015-01-08 11:58 

7. Reboot the system so that it boots from BE_NEW.

root@netra-t5440:/# init 6

8. Verify that the system has booted from BE_NEW.

root@netra-t5440:~# beadm list
BE      Active Mountpoint Space  Policy Created          
--      ------ ---------- -----  ------ -------          
BE_NEW  NR     /          3.36G  static 2015-04-09 06:29 
solaris -      -          87.24M static 2015-01-08 11:58 

Note: Look at the Active column to confirm the BE state. N - Active Now; R - Active on Reboot.

9. Verify that the installed package is available in the current boot environment.

root@netra-t5440:~# pkg list samba
NAME (PUBLISHER)                                  VERSION                    IFO
service/network/samba                             3.6.23-0.175.2.0.0.42.1    i--

10. You can also verify on the old BE whether the package is available there or not.

root@netra-t5440:~# mkdir /Old_BE
root@netra-t5440:~# beadm mount solaris /Old_BE
root@netra-t5440:~# pkg -R /Old_BE list samba
pkg list: No packages matching 'samba' installed


Rollback operation
============
1. At any time you can roll Solaris 11 back to the old boot environment using the command below.

root@netra-t5440:~# beadm activate solaris

root@netra-t5440:~# beadm list
BE      Active Mountpoint Space   Policy Created          
--      ------ ---------- -----   ------ -------          
BE_NEW  N      /          678.50M static 2015-04-09 06:29 
solaris R      /Old_BE    2.72G   static 2015-01-08 11:58 

N- Active now
R- Active upon Reboot

2. Reboot the server using "init 6".

root@netra-t5440:~# init 6

3. Now you won't see the new package on the system.

root@netra-t5440:~# beadm list
BE      Active Mountpoint Space   Policy Created          
--      ------ ---------- -----   ------ -------          
BE_NEW  -      -          686.82M static 2015-04-09 06:29 
solaris NR     /          2.79G   static 2015-01-08 11:58 

root@netra-t5440:~# pkg list samba
pkg list: No packages matching 'samba' installed


Thus you can manage Solaris 11 boot environments for package/patch bundle installations.
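Once you are sure you no longer need the test boot environment, it can be removed; beadm destroy asks for confirmation before deleting the BE and its datasets (only run this when you are certain BE_NEW is no longer required):

root@netra-t5440:~# beadm destroy BE_NEW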

Wednesday 8 April 2015

Understanding Veritas Volume Manager

VxVM:

VxVM is a storage management subsystem that allows you to manage physical disks as logical devices called volumes.
Provides easy-to-use online disk storage management for computing environments and SAN environments.
VxVM volumes can span multiple disks.
Provides tools to improve performance and ensure data availability and integrity.
VxVM and the Operating System:
Operates as a subsystem between the operating system and data management systems.
VxVM depends on the OS for the following:
            OS disk devices
            Device handles
            VxVM dynamic multipathing (DMP) Metadevice
VxVM relies on the following daemons:
vxconfigd: Configuration daemon maintains disk and group configurations and communicates configuration changes to the kernel.
vxiod: VxVM I/O daemon provides extended I/O operations.
vxrelocd: The hot-relocation daemon monitors VxVM for events that affect redundancy, and performs hot-relocation to restore redundancy.
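A quick way to confirm that these daemons are running and that VxVM is enabled on a host (a minimal check; the exact output varies by version):

# ps -ef | egrep 'vxconfigd|vxiod|vxrelocd'
# vxdctl mode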
VxVM Storage Management:
VxVM uses two types of objects to handle storage management.
Physical objects:
Basic storage device where the data is ultimately stored
Device names – c#t#d#s#
Virtual objects:
When one or more physical disks are brought under the control of VxVM, it creates virtual objects called “volumes”.
Virtual Objects in VxVM:
VM Disks
Disk Groups
Sub disks
Plexes
Volumes

VM Disks:
When a physical disk is placed under VxVM control, a VM disk is assigned to the physical disk.
VM disk typically includes a public region (allocated storage) and a private region where internal configuration information is stored.
A VM disk has a unique name (the disk media name), which can be a maximum of 31 characters; by default it takes the disk## format.
Disk Groups:
Is a collection of VM disks that share a common configuration.
The default disk group is “rootdg”.
A disk group name can be a maximum of 31 characters.
Allows you to group disks into logical collections.
Volumes are created within a disk group.
Subdisks:
Is a set of contiguous disk blocks.
A VM disk can be divided into one or more subdisks.
Default name for VM disk is disk## (disk01) and default name for subdisk is disk##-## (disk01-01).
Any VM disk space that is not part of a subdisk is free space and can be used for creating new subdisks.
Plexes:
VxVM uses subdisks to build virtual objects called plexes.
A plex consists of one or more subdisks located on one or more physical disks.
Volumes:
Is a virtual disk device that appears to applications like a physical disk.
Consists of one or more plexes.
Default naming convention for a volume is vol## and default naming convention for plexes in a volume is vol##-##.
A volume name can contain up to 31 characters.
Can consist of up to 32 plexes.
Must have at least one plex associated.
All subdisks within a volume must belong to the same disk group.
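To see how these objects relate on a running system, vxprint shows the disk group, VM disk, subdisk, plex and volume records in a single hierarchical listing (a sketch; the disk group name mydg is just an example):

# vxdisk list
# vxprint -g mydg -ht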

Combining Virtual objects in VxVM:
VM disks are grouped into disk groups and divided into subdisks.
Subdisks are combined to form plexes.
Volumes are composed of one or more plexes.
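In practice you rarely build subdisks and plexes by hand; vxassist does that work for you. A minimal bottom-up sketch, assuming an unused disk c1t1d0 and a disk group name mydg of your own choosing:

# /etc/vx/bin/vxdisksetup -i c1t1d0   [initialize the disk for VxVM use]
# vxdg init mydg mydg01=c1t1d0        [create the disk group with one VM disk]
# vxassist -g mydg make vol01 1g      [create a 1 GB volume; vxassist builds the subdisk and plex]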

Volume Layouts in VxVM:




Non-layered Volumes:
Each subdisk is restricted to mapping directly to a VM disk (rather than to another volume).
Layered Volumes:
Is constructed by mapping its subdisks to underlying volumes.

Layout Methods:
Concatenation and Spanning
Striping (RAID 0)
Mirroring (RAID 1)
Striping + Mirroring (Mirrored Stripe or RAID 0+1)
Mirroring + Striping (Striped Mirror or RAID 1+0)
RAID 5 (striping with Parity)
Online Relayout:
Online relayout allows you to change the layout of a volume that has already been created, without disturbing data access.
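For example, an existing concatenated volume could be converted to a three-column stripe online, and the conversion monitored with vxtask (a sketch, reusing the hypothetical mydg/vol01 names from above):

# vxassist -g mydg relayout vol01 layout=stripe ncol=3
# vxtask list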

Dirty Region Logging (DRL):
When DRL is enabled, it speeds recovery of mirrored volumes after a system crash.
DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume.
DRL uses this information to recover only those portions of the volumes that needed to be recovered.
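DRL is stored in a small log attached to the mirrored volume; a minimal sketch of adding one (again using the hypothetical mydg/vol01 names):

# vxassist -g mydg addlog vol01 logtype=drl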

Fast Resync:
Performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized).
Hot-Relocation:

Feature that allows a system to react automatically to I/O failures on redundant objects in VxVM and restore redundancy and access to those objects.

Tuesday 7 April 2015

SVM Root disk mirroring


The concept
***********
Solaris Volume Manager is a software package for creating, modifying and controlling RAID-0 (concatenation and stripe) volumes, RAID-1 (mirror) volumes, RAID 0+1 volumes, RAID 1+0 volumes, RAID-5 volumes, and soft partitions.

Before configuring SVM you should take a backup of the /etc/vfstab and /etc/system files:

# cp /etc/vfstab /etc/vfstab.before_mirror

# cp /etc/system /etc/system.before_mirror

Installing the software (from Solaris 9 onwards it is installed by default with the OE)
**********************************************************
The DiskSuite product is found on the Solaris 8 "2-of-2" CD. Minimally, the drivers (SUNWmdr and SUNWmdx) and command line tools (SUNWmdu) need to be installed. 

The "metatool" GUI is in the optional SUNWmdg package. Reboot after installing the packages.

bash-2.05#pkgadd -d /cdrom/cdrom0/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages \
          SUNWmdr SUNWmdx SUNWmdu SUNWmdg

bash-2.05#shutdown -y -g0 -i6

Naming convention
****************
You can number your metadevices however you wish. I like a scheme that makes a little bit of sense, so I use the following convention:

d0 - mirror metadevice to be mounted instead of c0t0d0s0
d10 - submirror metadevice on first disk, c0t0d0s0
d20 - submirror metadevice on second disk, c0t1d0s0

d4 - mirror metadevice to be mounted instead of c0t0d0s4
d14 - submirror metadevice on first disk, c0t0d0s4
d24 - submirror metadevice on second disk, c0t1d0s4

d5 - mirror metadevice to be mounted instead of c0t0d0s5
d15 - submirror metadevice on first disk, c0t0d0s5
d25 - submirror metadevice on second disk, c0t1d0s5

d6 - mirror metadevice to be mounted instead of c0t0d0s6
d16 - submirror metadevice on first disk, c0t0d0s6
d26 - submirror metadevice on second disk, c0t1d0s6
Etc.

***********************************************************
Make sure both disks are partitioned identically and that the slices on the second disk have filesystems on them.
***********************************************************
bash-2.05# prtvtoc /dev/rdsk/c0t0d0s2 |fmthard -s - /dev/rdsk/c0t1d0s2
fmthard:  New volume table of contents now in place. 

bash-2.05# newfs /dev/rdsk/c0t1d0s0
bash-2.05# newfs /dev/rdsk/c0t1d0s4
bash-2.05# newfs /dev/rdsk/c0t1d0s5
bash-2.05# newfs /dev/rdsk/c0t1d0s6
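Optionally, confirm that the two VTOCs now match before continuing (a simple sanity check; the grep strips prtvtoc's comment header lines so only the slice table is compared):

bash-2.05# prtvtoc /dev/rdsk/c0t0d0s2 | grep -v '^\*' > /tmp/vtoc.c0t0d0
bash-2.05# prtvtoc /dev/rdsk/c0t1d0s2 | grep -v '^\*' > /tmp/vtoc.c0t1d0
bash-2.05# diff /tmp/vtoc.c0t0d0 /tmp/vtoc.c0t1d0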

*****************************************
Creating Meta Databases on Both the System Disks
*****************************************
A minimum of two metadevice state database replicas must be on each system disk, preferably spread over more than one disk slice.


bash-2.05#metadb -afc3 c0t0d0s7  c0t1d0s7 
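The -c3 flag creates three replicas in each of the two slices; you can confirm the layout straight away with metadb (the -i option adds a legend explaining the status flags):

bash-2.05#metadb -i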

*******************
Mirroring the root disk
*******************

Create a metadevice out of the original root:

        bash-2.05#metainit -f d10 1 1 c0t0d0s0

Create a metadevice for the root mirror:

  bash-2.05#metainit d20 1 1 c0t1d0s0

Set up a one-way mirror of the root metadevice:

bash-2.05#metainit d0 -m d10

Configure the system to boot the root filesystem from the metadevice, using the "metaroot" command. This will make the necessary changes to /etc/vfstab and /etc/system:

bash-2.05#metaroot d0
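For reference, after metaroot the root entry in /etc/vfstab should point at the metadevice instead of the raw slice, and /etc/system gains a rootdev line; roughly as below (shown only as a guide, verify against your own files):

/dev/md/dsk/d0  /dev/md/rdsk/d0  /  ufs  1  no  -

rootdev:/pseudo/md@0:0,0,blk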


Flush any pending UFS logging transactions on the filesystems:

bash-2.05#lockfs -fa

Reboot the server so that it will boot up under SVM control:

bash-2.05#shutdown -y -g0 -i6

Attach the second metadevice to the root metadevice to make it a 2-way mirror:

bash-2.05#metattach d0 d20
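The attach starts a resync of d20 from d10; its progress can be watched with metastat while it runs in the background:

bash-2.05#metastat d0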

Install the boot block on the mirror disk so that it becomes bootable:

bash-2.05#cd /usr/platform/`uname -i`/lib/fs/ufs

bash-2.05#installboot bootblk /dev/rdsk/c0t1d0s0

Now you can also boot the machine from the second disk.
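If you want to boot the mirror from the OBP by name, look up the physical device path of the second disk and create an alias for it; a sketch (the device path below is only an example, use the path reported on your own system, replacing "sd" with "disk" in the alias):

bash-2.05# ls -l /dev/dsk/c0t1d0s0   [note the /devices path the link points to]

ok nvalias rootmirror /pci@1f,4000/scsi@3/disk@1,0:a
ok setenv boot-device disk rootmirror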

******************************
Mirroring the remaining system slices
******************************

***************************************************
Create the sub-mirror metadevice for /var:

bash-2.05# metainit -f d14 1 1 c0t0d0s4
bash-2.05# metainit -f d24 1 1 c0t1d0s4

Create the Main mirror metadevice for /var:

bash-2.05# metainit d4 -m d14
***************************************************
Create the sub-mirror metadevice for /home:

bash-2.05# metainit -f d15 1 1 c0t0d0s5
bash-2.05# metainit -f d25 1 1 c0t1d0s5

Create the Main mirror metadevice for /home:

bash-2.05# metainit d5 -m d15
***************************************************
Create the sub-mirror metadevice for /usr:

bash-2.05# metainit -f d16 1 1 c0t0d0s6
bash-2.05# metainit -f d26 1 1 c0t1d0s6

Create the Main mirror metadevice for /usr:

bash-2.05# metainit d6 -m d16
***************************************************

Edit /etc/vfstab so that the new metadevices will be mounted:

/dev/md/dsk/d4 /dev/md/rdsk/d4  /var    ufs     1   no  logging
/dev/md/dsk/d5 /dev/md/rdsk/d5  /home   ufs     1   no  logging
/dev/md/dsk/d6 /dev/md/rdsk/d6  /usr    ufs     1   no  logging

Reboot:

bash-2.05# shutdown -y -g0 -i6

Attach the second submirrors to the mirrors to make 2-way mirrors:

bash-2.05# metattach d4 d24
bash-2.05# metattach d5 d25
bash-2.05# metattach d6 d26

****************************************************
Wait until disk activity stops before doing much else. DiskSuite's progress of syncing the second drive to the first can be monitored using the "metastat" command. Though it is not strictly necessary, it is a good idea to reboot after this, if only to make sure there are no problems and that the box will indeed come back up.

Note:
****************************************************
The following warning messages are harmless and may be safely ignored. They are an artifact of the way drivers are loaded during the boot process when you have a mirrored root or /usr file system:
WARNING: forceload of misc/md_trans failed
WARNING: forceload of misc/md_raid failed
WARNING: forceload of misc/md_hotspares failed

The "WARNING: forceload of misc/md_hotspares failed" message can be suppressed by creating an empty hot spare pool. The following metainit command does just that:

        bash-2.05# metainit hsp001

Root mirror disk replacement under SVM



1. Make a backup of the following files

# cp /etc/vfstab /etc/vfstab.`date +'%d-%m-%Y'`
# metastat -c > /var/tmp/metastat-c.out
# metastat > /var/tmp/metastat.out
# metadb > /var/tmp/metadb.out
# cp /etc/system /var/tmp/system.`date +'%d-%m-%Y'`
# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/prtvtoc-c0t0d0s2

In this procedure we will assume that the /dev/dsk/c0t1d0 disk has failed.

2. Check defective drive:

# iostat -En /dev/dsk/c0t1d0 [You will see errors]

c0t1d0 Soft Errors: 0 Hard Errors: 102 Transport Errors: 231
Vendor: SEAGATE Product: ST914602SSUN146G Revision: 0400 Serial No: 070490N5V8
Size: 146.80GB <146800115712 bytes>
Media Error: 20 Device Not Ready: 0 No Device: 82 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

# cfgadm -al

Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown

# metastat -c [It will show that the bad disk is in the maintenance state]

d6 m 10GB d16 d26 (maint)
d16 s 10GB c0t0d0s6
d26 s 10GB c0t1d0s6 (maint)
d4 m 40GB d14 d24 (maint)
d14 s 40GB c0t0d0s4
d24 s 40GB c0t1d0s4 (maint)
d3 m 40GB d13 d23 (maint)
d13 s 40GB c0t0d0s3
d23 s 40GB c0t1d0s3 (maint)
d1 m 2.0GB d11 d21 (maint)
d11 s 2.0GB c0t0d0s1
d21 s 2.0GB c0t1d0s1 (maint)
d0 m 3.0GB d10 d20 (maint)
d10 s 3.0GB c0t0d0s0
d20 s 3.0GB c0t1d0s0 (maint)
d5 m 40GB d15 d25 (maint)
d15 s 40GB c0t0d0s5
d25 s 40GB c0t1d0s5 (maint)

3. Remove mirror information from bad disk

# metadb -d /dev/dsk/c0t1d0s7
# metadetach -f d5 d25
# metadetach -f d0 d20
# metadetach -f d1 d21
# metadetach -f d3 d23
# metadetach -f d4 d24
# metadetach -f d6 d26
# metaclear d25
# metaclear d20
# metaclear d21
# metaclear d23
# metaclear d24
# metaclear d26

4. Verify that the submirrors were detached and cleared successfully:
  
# metastat -c
# metadb

5. Unconfigure disk in Solaris

# cfgadm -c unconfigure c0::dsk/c0t1d0

6. Physically replace the disk online (No need to shutdown)

Note: Some older servers (with IDE drives) require downtime, as the HDD is not hot-pluggable.

7. Configure new disk

# cfgadm -c configure c0::dsk/c0t1d0 

8. Verify that the disk is visible and there are no errors

# echo | format [It will show you c0t1d0 disk]
# iostat -En /dev/dsk/c0t1d0
# cfgadm -al

9. Copy partition table from root disk [in this case we assume it is /dev/dsk/c0t0d0]

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

fmthard:  New volume table of contents now in place.

10. Install boot block
   
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0

11. Create state database replicas on new disk

# metadb -afc3 c0t1d0s7

12. Check that the replicas were created

# metadb [It should show the same number of replicas on both disks, on the same slice]

13. Create metadevices on the new disk
# metainit -f d20 1 1 c0t1d0s0
# metainit -f d21 1 1 c0t1d0s1
# metainit -f d23 1 1 c0t1d0s3
# metainit -f d24 1 1 c0t1d0s4
# metainit -f d25 1 1 c0t1d0s5
# metainit -f d26 1 1 c0t1d0s6

14. Attach the submirrors to the mirrors to synchronize data onto the new disk
# metattach d0 d20
# metattach d5 d25
# metattach d1 d21
# metattach d3 d23
# metattach d4 d24
# metattach d6 d26

15. Check that the mirrors are resyncing

# metastat -c [It will tell you how much of the data has been resynced on each mirror]
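If you want to wait for the resync to finish from a script, a simple loop that polls metastat until no resync is reported is enough (just a sketch using standard commands):

# while metastat | grep -i "resync in progress" > /dev/null; do sleep 60; done
# metastat -c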