Monday, 25 May 2015

grub> Software RAID devices assemble after grub prompt - CentOS 7

The short story is that I made a few changes to GRUB without taking a backup, which cost me several hours to recover once I was left with only the GRUB prompt. I ran grub2-install, but after a reboot I was back at the same GRUB prompt as before. I then removed /boot/grub2/grub.cfg and re-created the file, which got me nowhere either.

I had installed the server on software RAID1; I refer to the arrays by their descriptive names (md0, md1 and md2). I worked out that my boot partition is the second partition of the first hard disk, and tried loading the kernel and initramfs as below:

grub> set prefix=(hd0,msdos2)/grub2
grub> set root=(hd0,msdos2)
grub> linux16 /vmlinuz-3.10.0-123.el7.x86_64
grub> initrd16 /initramfs-3.10.0-123.el7.x86_64.img
grub> boot

The system would start to boot, but unfortunately dropped into the initramfs rescue shell, with the following information in 'journalctl':

#  journalctl
Not switching root: /sysroot does not seem to be an OS tree.  <<<============= 
/etc/os-release is missing.
Initrd-switch-root.service: main process exited, code=exited, 
Failed to start Switch Root.  <<<<=====================
. . . . .
Triggering OnFailure= dependencies of initrd-switch-root.service.
Starting Emergency Shell. . .
Failed to issue method call: Invalid argument

From this log I began to suspect that the root file system was not being mounted: either the 'md' devices had not been assembled, the device-mapper names had changed, or the initramfs did not recognise the file system or the layers beneath it.
So the plan was to assemble the RAID volumes, mount the root volume, and fix the GRUB device names to match what 'mdadm --detail' reports.

I booted the CentOS server installation DVD in rescue mode and chose to execute a shell from the installer environment, without letting it mount a root file system.

# cat /proc/mdstat

- This shows the auto-assembled meta devices (md125, md126, md127). Stop those devices:
# mdadm -S /dev/md125
# mdadm -S /dev/md126
# mdadm -S /dev/md127

- Assemble the volumes 
# mdadm -Av --run /dev/md0 /dev/sda1 /dev/sdb1
# mdadm -Av --run /dev/md1 /dev/sda2 /dev/sdb2
# mdadm -Av --run /dev/md2 /dev/sda3 /dev/sdb3

- Check the outcome
# cat /proc/mdstat
# mdadm --detail /dev/md0
# mdadm --detail /dev/md1
# mdadm --detail /dev/md2
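The piece of 'mdadm --detail' output that matters for the next steps is the array UUID, since that is what the by-id symlinks and the GRUB device map are keyed on. As a minimal sketch, the UUID line can be pulled out like this; the helper function and the sample text below are my own illustration, not output from the system in this post:

```shell
# Hypothetical helper: extract the array UUID from `mdadm --detail` output.
# The sample below is canned text, not a live query.
extract_md_uuid() {
  awk -F' : ' '/^[[:space:]]*UUID/ { gsub(/[[:space:]]/, "", $2); print $2 }'
}

sample_detail='/dev/md0:
        Version : 1.2
           UUID : 1624a32f:a1794446:f69c8e4c:75b98cd5
         Events : 19'

# On a live system you would pipe in: mdadm --detail /dev/md0
extract_md_uuid <<<"$sample_detail"
# -> 1624a32f:a1794446:f69c8e4c:75b98cd5
```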

- Create the directory /sysroot, mount the root file system on it, and bind-mount the pseudo file systems so the device maps can be fixed from a chroot. (Here md2 held my root file system; use whichever md device 'mdadm --detail' identifies as yours.)
# mkdir /sysroot
# mount /dev/md2 /sysroot
# mount -o bind /dev /sysroot/dev
# mount -o bind /proc /sysroot/proc
# mount -o bind /sys /sysroot/sys
# mount -o bind /dev/pts /sysroot/dev/pts

- chroot into it
# chroot /sysroot /bin/bash

- Correct your device names so that the current names of the 'md' devices and their UUIDs are properly read by GRUB; take a copy of the existing file first, then regenerate it:
# cp -p /boot/grub2/ /boot/grub2/
# for i in /dev/disk/by-id/md-uuid-*; do DEV=$(readlink $i); echo "(${DEV##*/}) $i"; done | sort | tee /boot/grub2/
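The loop above is compact, so here is what each part does, using canned values instead of a live /dev/disk/by-id directory (the UUID is made up for illustration):

```shell
# A by-id symlink name as the loop would see it (made-up UUID):
i="/dev/disk/by-id/md-uuid-1624a32f:a1794446:f69c8e4c:75b98cd5"
DEV="../../md0"   # a typical readlink(1) result for such a by-id symlink

# ${DEV##*/} strips everything up to and including the last '/',
# leaving just the device name GRUB needs:
echo "(${DEV##*/}) $i"
# -> (md0) /dev/disk/by-id/md-uuid-1624a32f:a1794446:f69c8e4c:75b98cd5
```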

- Recreate your initramfs
# dracut -vf /boot/initramfs-3.10.0-123.el7.x86_64.img

- Make sure you install GRUB2 on both drives that are eligible for booting
# grub2-install /dev/sda
# grub2-install /dev/sdb

- Regenerate the GRUB configuration so that the latest changes take effect.
# grub2-mkconfig -o /boot/grub2/grub.cfg

- You can now leave the chroot environment and reboot.

Monday, 27 April 2015

High-Availability (HA) Apache Cluster - CentOS 7

In this article, I will walk through the steps required to build a high-availability Apache cluster on CentOS 7. In CentOS 7 (as in RHEL 7), the cluster stack has moved to Pacemaker/Corosync, with a new command-line tool (pcs) to manage the cluster.

Environment Description:
The cluster consists of two nodes (centos71 & centos72); around 1 GB of iSCSI shared storage is presented from the node iscsitarget.
If you want to know how to share a disk over iSCSI, click here.

The high-availability cluster serves websites, with its document root on a simple failover file system.
In a stable situation, the cluster should look something like this:

There is one owner of the virtual IP, in this case centos71. The owner of the virtual IP also provides the service for the cluster at that moment. A client trying to reach our website via the virtual IP will be served the pages by the web server running on centos71. In this situation, the second node does nothing besides waiting for centos71 to fail so it can take over. This scenario is called active-passive.

If something happens to centos71 (the system crashes, the node is no longer reachable, or the web server stops responding), centos72 will become the owner of the virtual IP and start its web server to provide the same services that were running on centos71.

Configure both cluster nodes with a static IP and hostname, make sure they are in the same subnet, and check that they can reach each other by node name (either add entries in /etc/hosts or configure your DNS server).
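If you go the /etc/hosts route, an entry set like the following on each node is enough; the addresses here are made-up placeholders, so substitute your own subnet:

```
192.168.10.71   centos71
192.168.10.72   centos72
192.168.10.70   iscsitarget
```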

If you have a firewall configured and active on the nodes, make sure you allow the cluster traffic. In this example, I had shut down the firewall.

After your basic setup, install the packages on both nodes:
[root@centos71 ~]# yum install corosync pcs pacemaker
[root@centos72 ~]# yum install corosync pcs pacemaker

To manage the cluster nodes, we will use pcs; this gives us a single interface to manage all cluster nodes. When installing the packages, yum also created a user, hacluster, which can be used together with pcs to configure the cluster nodes. Set its password on both nodes:

[root@centos71 ~]# passwd hacluster
[root@centos72 ~]# passwd hacluster

Next, start the pcsd service on both nodes:
[root@centos71 ~]# systemctl start pcsd
[root@centos72 ~]# systemctl start pcsd

Since we will configure all nodes from one point, we need to authenticate on all nodes before we are allowed to change the configuration. Use the previously configured hacluster user and password to do this.

[root@centos71 ~]# pcs cluster auth centos71 centos72
From here, we can control the cluster by using pcs from centos71; it is no longer required to repeat all commands on both nodes.

Create the cluster and add nodes
[root@centos71 ~]# pcs cluster setup --name webcluster centos71 centos72

The above command creates the cluster node configuration in /etc/corosync/corosync.conf.
After creating the cluster and adding nodes to it, we can start it:
[root@centos71 ~]# pcs cluster start --all
centos72: Starting Cluster...
centos71: Starting Cluster...
[root@centos71 ~]#

Check the status of the cluster after starting it:
[root@centos71 ~]# pcs status cluster
Cluster Status:
 Last updated: Sun Apr 26 18:17:32 2015
 Last change: Sun Apr 26 18:16:26 2015 via cibadmin on centos71
 Stack: corosync
 Current DC: centos71 (1) - partition with quorum
 Version: 1.1.10-29.el7-368c726
 2 Nodes configured
 0 Resources configured
[root@centos71 ~]#

Since this is a simple two-node cluster, we'll just disable STONITH and tell the cluster to ignore loss of quorum:
[root@centos71 ~]# pcs property set stonith-enabled=false
[root@centos71 ~]# pcs property set no-quorum-policy=ignore

Next, create a partition on the 1 GB LUN; this will house the file system used as the DocumentRoot for our Apache installation.
I configured multipath for the device, so install the device-mapper multipath packages on both nodes.
[root@centos71 ~]# mpathconf --enable
[root@centos72 ~]# mpathconf --enable
[root@centos71 ~]# systemctl start multipathd
[root@centos72 ~]# systemctl start multipathd

Create the partition from either one of the nodes, then just run 'partprobe' on the second; by default both nodes will have the same mapper device for the newly added iSCSI disk.
[root@centos71 ~]# lsblk -i
sdb                       8:16   0 1016M  0 disk
`-mpatha                253:6    0 1016M  0 mpath
  `-mpatha1             253:7    0 1015M  0 part
[root@centos71 ~]#

[root@centos72 ~]# lsblk -i
sdb                       8:16   0 1016M  0 disk
`-mpatha                253:5    0 1016M  0 mpath
  `-mpatha1             253:6    0 1015M  0 part
[root@centos72 ~]#

[root@centos71 ~]# fdisk /dev/mapper/mpatha
[root@centos71 ~]# partprobe
[root@centos71 ~]# mkfs.ext4 /dev/mapper/mpatha1
[root@centos71 ~]# mount /dev/mapper/mpatha1 /var/www
[root@centos71 ~]# mkdir /var/www/html;mkdir /var/www/error; 
[root@centos71 ~]# echo "apache test page" >/var/www/html/index.html 
[root@centos71 ~]# umount /dev/mapper/mpatha1

Create the file system cluster resource fs_apache_shared and add it to the group "apachegroup", which is used to keep the resources together as one unit.
[root@centos71 ~]#  pcs resource create fs_apache_shared ocf:heartbeat:Filesystem device=/dev/mapper/mpatha1 fstype=ext4 directory="/var/www" --group=apachegroup
[root@centos71 ~]#

We will add a virtual IP to our cluster. This virtual IP is the address that clients contact to reach the service (the web server in our case). A virtual IP is a resource.
[root@centos71 ~]# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip= cidr_netmask=24 --group=apachegroup

Create a file /etc/httpd/conf.d/serverstatus.conf with the following contents on both nodes:
[root@centos71 ~]# cat /etc/httpd/conf.d/serverstatus.conf
 <Location /server-status>
 SetHandler server-status
 Order deny,allow
 Deny from all
 Allow from
 </Location>
[root@centos71 ~]#

Disable the current Listen statement in the Apache configuration on both nodes, to avoid trying to listen multiple times on the same port:
[root@centos71 ~]#  grep -w "Listen 80" /etc/httpd/conf/httpd.conf
#Listen 80
[root@centos72 ~]#  grep -w "Listen 80" /etc/httpd/conf/httpd.conf
#Listen 80

Now that Apache is ready to be controlled by our cluster, we'll add a resource for the web server:
[root@centos71 ~]# pcs resource create webserver ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" --group=apachegroup

Browse to the virtual IP; it should display the test page. Then move the resource group to the partner node, during which you should not see any downtime of the Apache service.

[root@centos71 ~]# pcs status
Cluster name: webcluster
Last updated: Sun Apr 26 21:20:14 2015
Last change: Sun Apr 26 21:19:09 2015 via cibadmin on centos71
Stack: corosync
Current DC: centos71 (1) - partition with quorum
Version: 1.1.10-29.el7-368c726
2 Nodes configured
3 Resources configured

Online: [ centos71 centos72 ]

Full list of resources:

 Resource Group: apachegroup
     fs_apache_shared   (ocf::heartbeat:Filesystem):    Started centos71
     virtual_ip (ocf::heartbeat:IPaddr2):       Started centos71
     webserver  (ocf::heartbeat:apache):        Started centos71

PCSD Status:
  centos71: Online
  centos72: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
[root@centos71 ~]#

[root@centos71 ~]# pcs resource move webserver
[root@centos71 ~]# pcs status
Cluster name: webcluster
Last updated: Sun Apr 26 21:21:00 2015
Last change: Sun Apr 26 21:20:57 2015 via crm_resource on centos71
Stack: corosync
Current DC: centos71 (1) - partition with quorum
Version: 1.1.10-29.el7-368c726
2 Nodes configured
3 Resources configured

Online: [ centos71 centos72 ]

Full list of resources:

 Resource Group: apachegroup
     fs_apache_shared   (ocf::heartbeat:Filesystem):    Started centos72
     virtual_ip (ocf::heartbeat:IPaddr2):       Started centos72
     webserver  (ocf::heartbeat:apache):        Started centos72

PCSD Status:
  centos71: Online
  centos72: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
[root@centos71 ~]#

You can use 'df -h' to confirm the file system failover, and 'ip addr show' to confirm the IP address failover.
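When scripting such checks, the 'pcs status' output above can also be parsed to find out which node currently hosts a resource. This is a minimal sketch of my own; the helper function is hypothetical (not part of pcs), and the sample text is canned to mirror the output shown above:

```shell
# Hypothetical helper: given `pcs status` text on stdin, print the node a
# named resource is currently started on.
resource_node() {
  awk -v r="$1" '$1 == r && $(NF-1) == "Started" { print $NF }'
}

sample_status='     fs_apache_shared   (ocf::heartbeat:Filesystem):    Started centos72
     virtual_ip (ocf::heartbeat:IPaddr2):       Started centos72
     webserver  (ocf::heartbeat:apache):        Started centos72'

# On a live cluster you would run: pcs status | resource_node webserver
resource_node webserver <<<"$sample_status"
# -> centos72
```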

Saturday, 25 April 2015

Install/Configure iSCSI targets, LUNs, initiators - CentOS

iSCSI is an acronym for Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet. The iSCSI protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts with the illusion of locally attached disks. However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN), due to competition for a fixed amount of bandwidth.

Environment: centos 6.5 - iSCSI target 
             centos 7   - iSCSI initiators

The diagram is self-explanatory for the environment; the article below explains it in three sections.
I have attached one extra HDD to the target server.

Section 1: Install iscsi target 
Section 2: Create LUN using iscsi
Section 3: Install initiators and verify LUN

Section 1 :

- Install the iscsi target package
iscsitarget# yum install scsi-target-utils -y

- Start the iSCSI service and enable autostart at system boot.
iscsitarget# /etc/init.d/tgtd start
iscsitarget# chkconfig tgtd on

- In case the firewall is enabled, add iptables rules to allow initiators to discover the iSCSI targets.
iscsitarget# iptables -A INPUT -i eth0 -p tcp --dport 860 -m state --state NEW,ESTABLISHED -j ACCEPT
iscsitarget# iptables -A INPUT -i eth0 -p tcp --dport 3260 -m state --state NEW,ESTABLISHED -j ACCEPT
iscsitarget# iptables-save
iscsitarget# /etc/init.d/iptables restart

Section 2:
- Check that fdisk shows the newly added disk, then partition the drive and change the partition type to LVM (8e). Save the changes and make sure the kernel is aware of the changes to the partition table (partprobe).

iscsitarget# pvcreate /dev/sdb1
iscsitarget# vgcreate vg_iscsi /dev/sdb1
iscsitarget# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
iscsitarget# pvs;vgs;lvs
  PV         VG       Fmt  Attr PSize    PFree
  /dev/sdb1  vg_iscsi lvm2 a--  1016.00m    0
  VG       #PV #LV #SN Attr   VSize    VFree
  vg_iscsi   1   1   0 wz--n- 1016.00m    0
  LV       VG       Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_iscsi vg_iscsi -wi-ao---- 1016.00m

- We need to define the LUN in the target configuration so it will be available to client machines (initiators).
iscsitarget# cat -n /etc/tgt/targets.conf
    16  default-driver iscsi
    18  <target>
    19          backing-store /dev/vg_iscsi/lv_iscsi
    20  </target>

Briefly, the IQN fields are:
- iqn, the literal iSCSI Qualified Name prefix
- the date (yyyy-mm) that the naming authority took ownership of the domain
- the reversed domain name of the authority
- an optional ":" followed by a storage target name chosen by the naming authority
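The fields above can be sketched in shell; the domain, date and target name here are placeholder values of my own (example.com is not from this setup):

```shell
# Placeholder values, for illustration only:
domain="example.com"
year_month="2015-04"
target_name="storage.lun1"

# Reverse the domain components: example.com -> com.example
reversed=$(echo "$domain" | awk -F. '{ for (i = NF; i >= 1; i--) printf "%s%s", $i, (i > 1 ? "." : "") }')

# Compose the IQN from its four fields:
echo "iqn.${year_month}.${reversed}:${target_name}"
# -> iqn.2015-04.com.example:storage.lun1
```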

- Reload the configuration:
iscsitarget# /etc/init.d/tgtd reload

- Show the target and LUN status:
iscsitarget# tgtadm --mode target --op show
Target 1:  <<=== iSCSI Qualified Name
    System information: 
        Driver: iscsi
        State: ready   <<=== iSCSI is Ready to Use
    I_T nexus information:
        I_T nexus: 1
            Connection: 0
                IP Address:
        I_T nexus: 2
            Connection: 0
                IP Address:
    LUN information:
        LUN: 0  <<==== Default LUN ID reserved for the controller
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1 
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1065 MB, Block size: 512  <<== Lun size defined in the target
            Online: Yes  <<===  Lun is ready to be used 
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vg_iscsi/lv_iscsi
            Backing store flags:
    Account information:
    ACL information:

Section 3:

On the clients, install the initiator package:
iscsiinitiator1# yum install iscsi-initiator-utils.x86_64

iscsiinitiator2# yum install iscsi-initiator-utils.x86_64

- Discover the share from the target server:
iscsiinitiator1# iscsiadm -m discoverydb -t st -p -D,1

iscsiinitiator2# iscsiadm -m discoverydb -t st -p -D,1

- To attach the LUN to the client's local system, authenticate with the target server and log in to the LUN:
iscsiinitiator1# iscsiadm -m node -T -p -l

iscsiinitiator2# iscsiadm -m node -T -p -l

- List the block devices on the initiators (lsblk); you should see the new disk. Now you can create a partition, format it, and mount the file system. Optionally, add an entry to /etc/fstab if you need it mounted after reboots.
iscsiinitiator1# lsblk -i 
sdb                       8:16   0 1016M  0 disk

iscsiinitiator2# lsblk -i 
sdb                       8:16   0 1016M  0 disk

Wednesday, 1 April 2015

Build your RPM

I had a local repository file that I wanted to distribute through a YUM repository, so I had to create an RPM for it, as explained below.

Objective: Demonstrate a simple scenario on how to build RPM. 


Environment: CentOS 6.6 (X86_64) 

Install the required packages

# yum install -y rpm-build rpmdevtools

Create user for RPM build

# useradd -m rpmbld
# passwd rpmbld

Build RPM

Log in with the rpmbld account and, from the home directory, create the directory structure.
rpmdev-setuptree creates the directory rpmbuild with several sub-directories.

~]$ id
uid=501(rpmbld) gid=501(rpmbld) groups=501(rpmbld)
~]$ rpmdev-setuptree 
~]$ echo $?

~]$ cd rpmbuild/
~]$ ls

Create a compressed tarball with the RPM content

Change to the SOURCES directory and create a directory structure named after the RPM name and version, containing the target file system paths; here that is /etc/yum.repos.d. Copy the desired repo file into that structure, then tar and gzip all of it.

~]$ cd rpmbuild/SOURCES/
~/rpmbuild/SOURCES]$ ls
~/rpmbuild/SOURCES]$ mkdir -p localrepo-1/etc/yum.repos.d
~/rpmbuild/SOURCES]$ cp /tmp/centos66.repo localrepo-1/etc/yum.repos.d

~]$ ls

~/rpmbuild/SOURCES]$ tar -zcvf localrepo-1.tar.gz localrepo-1/
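The same SOURCES layout can be reproduced anywhere; this sketch builds an equivalent localrepo-1 tree in a scratch directory with a dummy repo file (the repo contents and the baseurl host are placeholders of my own, not the real centos66.repo from this post):

```shell
# Build the localrepo-1 source tree in a scratch directory:
workdir=$(mktemp -d)
cd "$workdir"

mkdir -p localrepo-1/etc/yum.repos.d
cat > localrepo-1/etc/yum.repos.d/centos66.repo <<'EOF'
[centos66-local]
name=CentOS 6.6 local repository
baseurl=http://yumserver.example.com/centos66
enabled=1
gpgcheck=0
EOF

tar -zcf localrepo-1.tar.gz localrepo-1/

# The tarball carries the path that %prep will unpack into rpmbuild/BUILD:
tar -ztf localrepo-1.tar.gz | grep centos66.repo
# -> localrepo-1/etc/yum.repos.d/centos66.repo
```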

SPEC skeleton

Instructions for the build process are kept in a spec file under the rpmbuild/SPECS directory. 'rpmdev-newspec <filename>' creates a skeleton file in your current directory; edit it as required.

~]$ cd rpmbuild/SPECS/

~/rpmbuild/SPECS]$ rpmdev-newspec localrepo.spec
Skeleton specfile (minimal) has been created to "localrepo.spec".

~/rpmbuild/SPECS]$ ls

:~]$ cat rpmbuild/SPECS/localrepo.spec 
Name:           localrepo
Version:        1
Release:        0
Summary:        Centos repository
Group:          System Environment/Base
License:        GPL
Source0:        localrepo-1.tar.gz
BuildArch:      noarch
BuildRoot:      %{_tmppath}/%{name}-buildroot

%description
Create YUM repository pointing to local centos/redhat repository.

%prep
%setup -q

%install
mkdir -p "$RPM_BUILD_ROOT"
cp -R etc "$RPM_BUILD_ROOT"

%files
/etc/yum.repos.d/centos66.repo

%clean
rm -rf "$RPM_BUILD_ROOT"




You can now use rpmbuild to create the RPM; -bb builds a binary rpm without a source rpm.

:~]$ rpmbuild -v -bb rpmbuild/SPECS/localrepo.spec 
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.RjS32s
+ umask 022
+ cd /home/rpmbld/rpmbuild/BUILD
+ cd /home/rpmbld/rpmbuild/BUILD
+ rm -rf localrepo-1
+ /bin/tar -xf -
+ /usr/bin/gzip -dc /home/rpmbld/rpmbuild/SOURCES/localrepo-1.tar.gz
+ '[' 0 -ne 0 ']'
+ cd localrepo-1
+ /bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.qqYUfq
+ umask 022
+ cd /home/rpmbld/rpmbuild/BUILD
+ cd localrepo-1
+ mkdir -p /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
+ cp -R etc /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
+ /usr/lib/rpm/check-rpaths /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/brp-strip
+ /usr/lib/rpm/brp-strip-static-archive
+ /usr/lib/rpm/brp-strip-comment-note
Processing files: localrepo-1-0.noarch
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
Wrote: /home/rpmbld/rpmbuild/RPMS/noarch/localrepo-1-0.noarch.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.lBmbql
+ umask 022
+ cd /home/rpmbld/rpmbuild/BUILD
+ cd localrepo-1
+ rm -rf /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
+ exit 0

Test your custom-built RPM:

:~]# rpm -qiaf /home/rpmbld/rpmbuild/RPMS/noarch/localrepo-1-0.noarch.rpm
Name        : localrepo                    Relocations: (not relocatable)
Version     : 1                                 Vendor: (none)
Release     : 0                             Build Date: Wednesday 01 April 2015 05:18:03 AM IST
Install Date: (not installed)               Build Host: redhat
Group       : System Environment/Base       Source RPM: localrepo-1-0.src.rpm
Size        : 110                              License: GPL
Signature   : (none)
URL         :
Summary     : Centos repository
Description :
 Create YUM repository pointing to local centos repository.

:~]# rpm -ivh /home/rpmbld/rpmbuild/RPMS/noarch/localrepo-1-0.noarch.rpm
Preparing...                ########################################### [100%]
   1:localrepo              ########################################### [100%]

:~]# ls -l /etc/yum.repos.d/centos66.repo 
-rw-r--r-- 1 root root 110 Apr  1 05:18 /etc/yum.repos.d/centos66.repo

This is how custom RPMs are built; I hope this tutorial helps you with your own custom RPM builds.
Sharing in public, re-share for all.

Monday, 23 March 2015

SLE 11 SP3 to SLE 12 - upgrade methods explained

SLE allows you to update an existing system to the new version, for example going from SLE 11 SP3 to SLE 12. No new installation is needed. Existing data, such as home and data directories and the system configuration, is kept intact. You can update from a local CD or DVD drive or from a central network installation source.

Note: Before updating, copy existing configuration files to a separate medium (such as tape device, removable hard disk, etc.) to back up the data. This primarily applies to files stored in /etc as well as some of the directories and files in /var and /opt. You may also want to write the user data in /home (the HOME directories) to a backup medium.

Environment: SLES 11 SP3
Kernel sles11sp3: 3.0.76-0.11 (before the upgrade)
Kernel sles12sp0: 3.12.28-4 (after the upgrade)

To upgrade your system this way, you need to boot from an installation source, like you would do for a fresh installation. However, when the boot screen appears, you need to select Upgrade (instead of Installation). The installation source to boot from can be one of the following:
- Local installation medium -(like a DVD, or an ISO image on a USB mass storage device)
- Network installation source -You can either boot from the local medium (like a DVD, or an ISO image)and then select the respective network installation type, or boot via PXE.

This post covers three approaches:
- Upgrade using a network installation source, booting from CD-ROM.
- Upgrade using a network installation source, booting via PXE.
- Perform an automated migration.

If you want to start an upgrade from a network installation source, make sure the following requirements are met. I leave the setup to the reader, as it was already explained in a previous post (pxe-installation-on-sles-11):

Network Installation Source - network installation source should be setup.
Network Connection and Network Services - Both the installation server and the target machine have a functioning network connection. The network must provide the following services: a name service, DHCP(optional, but needed for booting via PXE)

Upgrade using a network installation source, booting from CD-ROM:

- Insert DVD 1 of the SUSE Linux Enterprise 12 installation media and boot your machine. A Welcome screen is displayed, followed by the boot screen.
- Select the type of network installation source you want to use (FTP, HTTP, NFS, SMB, or SLP). Since I had configured HTTP, I selected HTTP to serve the installation.

              Fig 1
               Fig 2

Upgrade using a network installation source, booting via PXE:
- Adjust the setup of your DHCP server to provide the address information needed for booting via PXE.
- Set up a TFTP server to hold the boot image needed for booting via PXE.
- Prepare PXE Boot and Wake-on-LAN on the target machine.

              Fig 3

- Once you reach the screen shown in Fig 2, proceed with the upgrade process; the steps are self-explanatory.

Perform an automated migration:

Copy the installation kernel 'linux' and the file 'initrd' from /boot/x86_64/loader/ on your first installation DVD to your system's /boot directory:

#cp -vi DVDROOT/boot/x86_64/loader/linux /boot/linux.upgrade
#cp -vi DVDROOT/boot/x86_64/loader/initrd /boot/initrd.upgrade
where DVDROOT denotes the path where your system mounts the DVD.

Open the GRUB Legacy configuration file /boot/grub/menu.lst and add another section. For other boot loaders, edit the respective configuration file(s). Adjust the device names according to your /boot partition.

title Linux Upgrade Kernel
kernel (hd0,0)/boot/linux.upgrade root=/dev/sda1 upgrade=1   
initrd (hd0,0)/boot/initrd.upgrade

Reboot your machine and select the newly added section from the boot menu (here: Linux Upgrade Kernel).

                 Fig 4

- Once you reach the screen shown in Fig 2, proceed with the upgrade process; the steps are self-explanatory.

- After the upgrade process has finished successfully, remove the installation kernel and initrd files (/boot/linux.upgrade and /boot/initrd.upgrade); they are no longer needed.

Once the upgrade is complete, the system reboots into the new kernel.

                  Fig 5                                             Fig 6

Upgrade completed successfully.