Monday, 18 January 2016

Rescue environment for a paravirtualized VM, Xen virtualization - Red Hat 7 on OVM 3.3.3

Objective: how to enter the rescue environment of a paravirtualized guest from the dom0, bypassing pygrub by booting directly from the installer kernel, in order to rename the root volume group.

Environment: 
            : Oracle VM Server 3.3.3 x86_64 (HVM)
            : Red Hat Enterprise Linux 7.0 x86_64 (guest OS)

Recently I had to rename the volume groups in one of my guest machines, a paravirtualized Red Hat 7 guest. Since the root partition (/) lived on an LV, the rename had to be done from a rescue environment, which meant booting the guest directly from the kernel and initrd used during installation. So I copied vmlinuz and initrd.img from the install media to /OVS/Repositories/redhat7/ on my OVM hypervisor, and told the guest to boot from them in its configuration file:

kernel='/OVS/Repositories/redhat7/vmlinuz'
ramdisk='/OVS/Repositories/redhat7/initrd.img'
extra="rescue method=/mnt" or extra="install=hd:xvdc rescue=1 xencons=tty"
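
For context, here is a minimal sketch of what the complete guest configuration (redhat7.cfg) could look like while rescue-booting; the disk paths, names and sizes are illustrative, only the kernel, ramdisk and extra lines matter for this exercise:

name    = 'redhat7'
memory  = 2048
vcpus   = 2
# Boot directly from the installer kernel/initrd instead of pygrub
kernel  = '/OVS/Repositories/redhat7/vmlinuz'
ramdisk = '/OVS/Repositories/redhat7/initrd.img'
extra   = 'install=hd:xvdc rescue=1 xencons=tty'
# xvda carries the guest system, xvdc carries the RHEL 7 ISO content
disk    = ['file:/OVS/Repositories/redhat7/System.img,xvda,w',
           'file:/OVS/Repositories/redhat7/rhel7.iso,xvdc:cdrom,r']
vif     = ['bridge=xenbr0']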

There are two ways to point the installer at the rescue media:

1. If your ISO image is loop-mounted on some temporary mount point (mount -o loop redhat7.iso /mnt), point the rescue method at that path:

extra="rescue method=/mnt"

2. If the ISO image is presented to the guest OS as a block device and you know the device name, you can point the installer at that disk instead. In my case the block device was xvdc:

extra="install=hd:xvdc rescue=1 xencons=tty"

These are passed as extra arguments to the kernel and tell the Anaconda installer where to find the installation files.


I had previously written a post on how to rename a VG/LV on CentOS 6 (http://goo.gl/M71G5a); the procedure here is largely the same, except that RHEL 7 uses GRUB2.

From the rescue shell I took the LVs offline, renamed the VG, pointed the entries in /etc/fstab at the new name, and regenerated the GRUB configuration.

- Scan all logical volumes with lvscan; if any are still active, deactivate them:


sh-4.2# lvchange -an /dev/<vgname>/swap
sh-4.2# lvchange -an /dev/<vgname>/root

- Change the VG name 
sh-4.2# lvm vgrename <old_vgname> <new_vgname>
Volume group "<old_vgname>" successfully renamed to "<new_vgname>"

- Make sure all of your /etc/fstab entries point to the new volume group, as sketched below.
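
A minimal sketch of that edit, assuming you have chrooted into the installed system first (in the Anaconda rescue environment it is mounted under /mnt/sysimage) and that the old VG name does not collide with anything else in the file:

sh-4.2# chroot /mnt/sysimage
sh-4.2# sed -i 's/<old_vgname>/<new_vgname>/g' /etc/fstab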

- Let GRUB2 know about the new volume group: if /etc/default/grub references the old VG (for example in rd.lvm.lv= entries on GRUB_CMDLINE_LINUX), update those too, then regenerate the GRUB config file:
sh-4.2# grub2-mkconfig -o /boot/grub2/grub.cfg

Once all that is done, remove the rescue entries (kernel, ramdisk, extra) from the guest config file, and boot:

OVM# xm create -c redhat7.cfg


Thank you for reading and re-sharing.

Sunday, 10 January 2016

Security updates and installation using YUM - RHEL 5/6/7

Hello All, 

I came across a situation where I wanted to check, verify and apply security updates on the different releases of RHEL, and I could not find all of the commands in one place. So I have collected them here, hoping that keeping everything together helps, and am sharing it publicly!

Security update handling by RHEL release:

Install the security plugin for yum:
  RHEL 5 : yum install yum-security
  RHEL 6 : yum install yum-plugin-security
  RHEL 7 : no plugin required; the functionality is already part of yum

List all available errata without installing:
  RHEL 5     : yum list-sec
  RHEL 6 / 7 : yum updateinfo list available

List all available security updates without installing:
  RHEL 5     : yum list-security --security
  RHEL 6 / 7 : yum updateinfo list security all
               yum updateinfo list sec

List currently installed security updates:
  RHEL 5     : yum list-sec
  RHEL 6 / 7 : yum updateinfo list security installed

List all security updates with verbose descriptions:
  RHEL 5 : yum list-sec

Apply all security updates from RHN:
  yum -y update --security

Apply updates for a specific CVE reference:
  yum update --cve <CVE>

View available advisories by severity:
  RHEL 6 / 7 : yum updateinfo list

Show detailed information about an advisory before applying it:
  RHEL 6 / 7 : yum updateinfo RHSA-2015:XXXX

Apply only one specific advisory:
  yum update --advisory=RHSA-2015:XXXX

More information: man yum-security
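
Tying these together, a quick worked pass on a RHEL 7 box might look like the following; the advisory ID is a placeholder, substitute a real RHSA from the updateinfo listing:

# yum updateinfo list sec
# yum updateinfo RHSA-2015:XXXX
# yum update --advisory=RHSA-2015:XXXX -y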

First post of the year 2016 - wishing all of you a HAPPY NEW YEAR :)

Thanks

Sunday, 1 November 2015

Reset root password by accessing file system on Guest OS from Physical host - CentOS

Everyone knows how to reset a forgotten root password on Linux (http://goo.gl/6j9u2k), but since in this article I am running the guest OS on a KVM hypervisor, I will demonstrate how to mount the guest's root file system from the physical host and reset the password there.

Details:

Hostname: kvm1
Diskname: vm1
path:     /var/lib/libvirt/images/vm1.img 

- First, shut down your VM; doing this while it is running can cause disk corruption.
#virsh shutdown vm1

- Check that the VM is in the shut off state
#virsh list --all

- Get an unused loop device
#losetup -f
/dev/loop0

- Map VM image to your loop device
#losetup /dev/loop0 /var/lib/libvirt/images/vm1.img

- Print the partition table of the image file now mapped to the loop device, and identify the partition that holds the root file system.
#fdisk -l /dev/loop0

Disk /dev/loop0: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000907df

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1   *        2048     1050623      524288   83  Linux
/dev/loop0p2         1050624     3147775     1048576   82  Linux swap / Solaris
/dev/loop0p3         3147776    20971519     8911872   83  Linux

- In order to mount the VM's partitions, you need to create partition mappings
#kpartx -av /dev/loop0

- Inside the guest, my root file system was on /dev/vda3, which corresponds to /dev/loop0p3 here; kpartx exposes it as /dev/mapper/loop0p3, and that is what gets mounted:
#mount /dev/mapper/loop0p3 /mnt

- Remove the password field for the root user in the guest's /etc/shadow:
#vim /mnt/etc/shadow
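
If you prefer a one-liner over an interactive editor, something like this should also work; a sketch assuming the standard shadow format, it simply empties root's password hash:

#sed -i 's/^root:[^:]*:/root::/' /mnt/etc/shadow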

Note: If SELinux is enabled on the guest, it is essential to trigger an autorelabel, otherwise you will be unable to log in:

#touch /mnt/.autorelabel

- Once done, unmount and remove the mappings:
#umount /mnt
#kpartx -dv /dev/loop0
#losetup -d /dev/loop0

- Start your VM 
#virsh start vm1

- On the console, you can now log in as 'root' without being prompted for a password; set a new one with 'passwd' right away.

Friday, 11 September 2015

zfs cheat sheet - Creation of Storagepools & Filesystems using zpool & zfs #Solaris 11

The ZFS file system fundamentally changes the way file systems are administered, with features and benefits not found in other file systems available today. ZFS is robust, scalable, and easy to administer.

ZFS uses the concept of storage pools to manage physical storage, eliminating volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. File systems are no longer constrained to individual devices, allowing them to share disk space with all file systems in the pool. You no longer need to predetermine the size of a file system, as file systems grow automatically within the disk space allocated to the storage pool. When new storage is added, all file systems within the pool can immediately use the additional disk space without additional work.

zpool commands:

zpool create testpool c0t0d0
    Create a simple pool named testpool with a single disk; the default
    mount point is the pool name (/testpool).
    Optional: -n does a dry run of the pool creation; -f forces creation.

zpool create testpool mirror c0t0d0 c0t0d1
    Create testpool mirroring c0t0d0 with c0t0d1; the default mount point
    is the pool name (/testpool).

zpool create -m /mypool testpool c0t0d0
    Create the pool with a mount point (/mypool) other than the default.

zpool create testpool raidz c2t1d0 c2t2d0 c2t3d0
    Create a RAID-Z testpool.

zpool add testpool raidz c2t4d0 c2t5d0 c2t6d0
    Add RAID-Z disks to testpool.

zpool create testpool raidz1 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    Create a RAIDZ-1 testpool.

zpool create testpool raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    Create a RAIDZ-2 testpool.

zpool add testpool spare c2t6d0
    Add a spare device to testpool.

zpool create testpool mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0
    Disk c2t1d0 mirrored with c2t2d0, and c2t3d0 mirrored with c2t4d0.

zpool remove testpool c2t6d0
    Remove hot spares and cache disks.

zpool detach testpool c2t4d0
    Detach the mirror member from the pool.

zpool clear testpool c2t4d0
    Clear the fault on a specific disk.

zpool replace testpool c3t4d0
    Replace a disk like-for-like, in place.

zpool replace testpool c3t4d0 c3t5d0
    Replace one disk with another disk.

zpool export testpool
    Export the pool from the system.

zpool import testpool
    Import a specific pool.

zpool import -f -D -d /testpool testpool
    Import the destroyed testpool.

zpool import testpool newtestpool
    Import a pool originally named testpool under the new name newtestpool.

zpool import 88746667466648
    Import a pool using its ID.

zpool offline testpool c2t4d0
    Offline the disk in the pool.
    Note: zpool offline -t testpool c2t4d0 offlines it only temporarily.

zpool upgrade -a
    Upgrade all pools.

zpool upgrade testpool
    Upgrade a specific pool.

zpool status -x
    Health status of all pools.

zpool status testpool
    Status of the pool in verbose mode.

zpool get all testpool
    List all the properties of the storage pool.

zpool set autoexpand=on testpool
    Set a property value on the storage pool.
    Note: zpool get all testpool shows every property that can be set.

zpool list
    List all pools.

zpool list -o name,size,altroot
    Show selected properties of the pools.

zpool history
    Display the history of the pool.
    Note: once the pool is destroyed, its history is gone.

zpool iostat 2 2
    Display ZFS I/O statistics.

zpool destroy testpool
    Remove the storage pool.
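
A short worked sequence tying a few of these together; the disk names are the same examples used in the table, so substitute your own devices:

# zpool create testpool mirror c2t1d0 c2t2d0
# zpool add testpool spare c2t6d0
# zpool status testpool
# zpool set autoexpand=on testpool
# zpool list -o name,size,altroot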


zfs commands:

zfs list
    List the ZFS file systems.

zfs list -t filesystem
zfs list -t snapshot
zfs list -t volume
    List only datasets of the given type.

zfs create testpool/filesystem1
    Create a ZFS file system in the testpool storage pool.

zfs create -o mountpoint=/filesystem1 testpool/filesystem1
    Create the file system with a non-default mount point.

zfs rename testpool/filesystem1 testpool/filesystem2
    Rename the ZFS file system.

zfs unmount testpool
    Unmount the storage pool's file system.

zfs mount testpool
    Mount the storage pool's file system.

NFS exports in ZFS:
    zfs share testpool            - share the file system for export
    zfs set share.nfs=on testpool - make the share persistent across reboots
    svcs -a nfs/server            - the NFS server should be online
    cat /etc/dfs/dfstab           - the exported entry appears in this file
    showmount -e                  - confirms the storage pool has been exported

zfs unshare testpool
    Remove the NFS export.

zfs destroy -r testpool
    Recursively destroy the dataset and all datasets beneath it.

zfs set quota=1G testpool/filesystem1
    Set a quota of 1 GB on filesystem1.

zfs set reservation=1G testpool/filesystem1
    Set a reservation of 1 GB on filesystem1.

zfs set mountpoint=legacy testpool/filesystem1
    Disable ZFS auto-mounting and manage the mount through /etc/vfstab.

zfs unmount testpool/filesystem1
    Unmount ZFS filesystem1 in testpool.

zfs mount testpool/filesystem1
    Mount ZFS filesystem1 in testpool.

zfs mount -a
    Mount all the ZFS file systems.

zfs snapshot testpool/filesystem1@friday
    Create a snapshot of filesystem1.

zfs hold keep testpool/filesystem1@friday
    Place a hold on the snapshot; attempts to destroy it with zfs destroy
    will fail while the hold exists.

zfs rename testpool/filesystem1@friday FRIDAY
    Rename the snapshot.
    Note: snapshots must be renamed within the same pool and dataset.

zfs diff testpool/filesystem1@friday testpool/filesystem1@friday1
    Identify the differences between two snapshots.

zfs holds testpool/filesystem1@friday
    Display the list of holds on the snapshot.

zfs rollback -r testpool/filesystem1@friday
    Roll back to the friday snapshot; -r destroys any more recent snapshots.

zfs destroy testpool/filesystem1@thursday
    Destroy the thursday snapshot.

zfs clone testpool/filesystem1@friday testpool/clones/friday
    Create a clone from the snapshot.
    Note: a clone cannot be created in a pool different from the one holding
    the original snapshot.

zfs destroy testpool/clones/friday
    Destroy the clone.
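
And a minimal end-to-end sketch of the dataset life cycle, reusing the example names from the table:

# zfs create testpool/filesystem1
# zfs set quota=1G testpool/filesystem1
# zfs snapshot testpool/filesystem1@friday
# zfs clone testpool/filesystem1@friday testpool/clones/friday
# zfs destroy testpool/clones/friday
# zfs destroy -r testpool/filesystem1

Note that the clone has to go first: the recursive destroy would otherwise refuse to remove the @friday snapshot while a clone still depends on it.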

Thanks,

Sunday, 30 August 2015

Create Local Repository - Solaris 11.2

The Image Packaging System (IPS) is an important concept from Solaris 11 onwards. Here I create a local repository on the local system, using the repository files downloaded from the Oracle website together with Oracle's automated installation script.

Download the files below from Oracle and copy them to your local Solaris server. Here they are as they landed on mine:

root@solnode1:/var/share/pkg# pwd
/var/share/pkg
root@solnode1:/var/share/pkg# ls -l
total 14373953
-rwx------   1 root     root        5594 Aug 29 20:52 install-repo.ksh
drwxr-xr-x   3 pkg5srv  pkg5srv        7 Aug 30 08:42 repositories
-rw-r--r--   1 root     root     1771800121 Aug 29 15:32 sol-11_2-repo-1of4.zip
-rw-r--r--   1 root     root     1889867782 Aug 29 15:35 sol-11_2-repo-2of4.zip
-rw-r--r--   1 root     root     1902167161 Aug 29 16:46 sol-11_2-repo-3of4.zip
-rw-r--r--   1 root     root     1790358735 Aug 29 16:44 sol-11_2-repo-4of4.zip
-rw-r--r--   1 root     root         227 Aug 29 16:14 sol-11_2-repo-md5sums.txt
root@solnode1:/var/share/pkg#

root@solnode1:/var/share/pkg# ./install-repo.ksh -d /var/share/pkg/repositories/ -v -c

The script compares the checksums of the downloaded files, uncompresses them, and initiates the repository creation.

Your current publisher will still be pointing at "pkg.oracle.com" and needs to be changed to the local repository.

root@solnode1:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
root@solnode1:~#

root@solnode1:~# pkg set-publisher -G '*' -M '*' -g file:///var/share/pkg/repositories solaris
root@solnode1:~#

root@solnode1:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F file:///var/share/pkg/repositories/
root@solnode1:~#

To enable clients to access the local repository via HTTP, enable the application/pkg/server Service Management Facility (SMF) service.

root@solnode1:~# svccfg -s application/pkg/server setprop pkg/inst_root=/var/share/pkg/repositories
root@solnode1:~# 

Check that the property took effect:
root@solnode1:~# svcprop -p pkg/inst_root application/pkg/server
/var/share/pkg/repositories
root@solnode1:~#

Reload the pkg.depotd repository service.

root@solnode1:~# svcadm refresh application/pkg/server
root@solnode1:~#
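
One additional note: on a fresh system the depot service may still be disabled; if so, enable it, and you can sanity-check the repository contents with pkgrepo:

root@solnode1:~# svcadm enable application/pkg/server
root@solnode1:~# pkgrepo info -s /var/share/pkg/repositories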

We have successfully created a Solaris 11.2 local repository.

Saturday, 8 August 2015

Contiguous space re-partition - Linux

To recap: the root FS had run out of space, and the data on it could not be removed or compressed to free anything up. Since there was a swap partition right after it, I planned to reclaim space from swap without losing data and extend the root file system.

Host : susenode2
OS   : SuSE 11 / CentOS / Redhat 
Disk : sda

This would have been easy if the volumes were under LVM, but since these disks are not, we re-create the partition entries themselves without losing data.

Current scenario :

Disk sda is divided into 3 partitions, of which sda1 and sda2 are data partitions and sda3 is the swap partition. I will re-create the swap partition and extend the root file system. Note from the output below that the sectors are contiguous between sda2 and sda3, which is why I can delete those two partitions and re-create them.

df & fdisk & swap output :

susenode2:~ # df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda2      ext3    13G   12G  570M  96% /
udev           tmpfs  369M  128K  369M   1% /dev
tmpfs          tmpfs  369M     0  369M   0% /dev/shm
/dev/sda1      ext3   1.1G 1015M   18M  99% /application/logs
susenode2:~ #

susenode2:~ # fdisk -l /dev/sda

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cc8af

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     2265087     1131520   83  Linux
/dev/sda2   *     2265088    31625215    14680064   83  Linux                  <<===  root fs
/dev/sda3        31625216    41943039     5158912   82  Linux swap / Solaris   <<===  swap 
susenode2:~ #

susenode2:~ # swapon -s
Filename                                Type            Size    Used    Priority
/dev/sda3                               partition       5158908 0       -1
susenode2:~ #

Since we are resizing the root file system, we need to do it from a rescue environment. I booted from a SuSE DVD (Knoppix would work just as well), entered the rescue environment, and made the changes to the partition table there.


I deleted partitions sda2 and sda3, then re-created sda2 starting at exactly the same first sector as before, i.e. 2265088, and ending at the new size I wanted for the root file system, about 18G. The start sector must not move, otherwise the file system on sda2 would be lost.

I then created a new sda3 for swap out of the remaining space, as sketched below.

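The interactive fdisk session went roughly like this; this is a reconstruction rather than the original screenshots, the exact prompts vary between fdisk versions, and the sector numbers come from the partition tables shown above and below:

Rescue:~ # fdisk /dev/sda
Command (m for help): d        <- delete the swap partition
Partition number (1-4): 3
Command (m for help): d        <- delete the root partition entry
Partition number (1-4): 2
Command (m for help): n        <- re-create sda2 at the SAME first sector
Partition number (1-4): 2
First sector: 2265088
Last sector: +18G
Command (m for help): n        <- re-create sda3 in the remaining space
Partition number (1-4): 3
First sector: <default>
Last sector: <default>
Command (m for help): t        <- set sda3's type to swap
Partition number (1-4): 3
Hex code: 82
Command (m for help): a        <- toggle the boot flag on sda2
Partition number (1-4): 2
Command (m for help): w        <- write the table and exit

Only the partition table is rewritten here; as long as sda2 starts at the same sector, the file system data on it is untouched.
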
Once the new partitions are defined, change the type of sda3 to swap (hex code 82) and, since sda2 was bootable, make sure the boot flag is toggled back on for it (the t and a commands in the session above).


Check that the disk now has the intended layout: an enlarged sda2 for the root file system and the remainder for swap. Then grow the file system (see the sketch just below) and re-create the swap area:
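
The screenshots also covered growing the file system itself; a minimal sketch of those commands, assuming ext3 on /dev/sda2 (an offline resize, so the file system must be checked first; with no size argument, resize2fs grows the file system to fill the partition):

Rescue:~ # e2fsck -f /dev/sda2
Rescue:~ # resize2fs /dev/sda2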


Rescue:~ # mkswap /dev/sda3
Rescue:~ # reboot

After the reboot, check your root file system: it has grown, and swap has shrunk by the corresponding amount.

Output after increasing the file system:

susenode2:~ # df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda2      ext3    18G   12G  5.3G  69% /
udev           tmpfs  369M  128K  369M   1% /dev
tmpfs          tmpfs  369M     0  369M   0% /dev/shm
/dev/sda1      ext3   1.1G 1015M   18M  99% /application/logs
susenode2:~ #

susenode2:~ # fdisk -l /dev/sda

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cc8af

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     2265087     1131520   83  Linux
/dev/sda2   *     2265088    40013823    18874368   83  Linux
/dev/sda3        40013824    41943039      964608   82  Linux swap / Solaris
susenode2:~ #

susenode2:~ # swapon -s
Filename                                Type            Size    Used    Priority
/dev/sda3                               partition       964604  0       -1
susenode2:~ #

Thanks

Sunday, 26 July 2015

speedtest mini : check your internet speed locally - CentOS 7

speedtest.net is one of the most popular internet speed tests, and it is very helpful for determining your Internet download and upload speeds. Speedtest Mini lets you run the same test against a server on your local network.

Install Apache and PHP, start the httpd service, and make sure the http service is allowed through your firewall (the second firewall rule below makes it persist across reboots).

#yum install -y httpd php php-mysql php-gd php-mcrypt
#systemctl start httpd
#firewall-cmd --add-service=http
#firewall-cmd --permanent --add-service=http

Download "speedtest mini" from speedtest.net from their official site : 
#cd /var/www/html

Register to speedtest.net and download the latest version of mini. 
#unzip mini.zip

Make sure the Apache document root is /var/www/html:
# grep -i "^documentroot" /etc/httpd/conf/httpd.conf
DocumentRoot "/var/www/html"
Speedtest Mini ships an index page per server-side backend; since we installed PHP, make the PHP variant the default index:
#cd /var/www/html/mini
#mv index-php.html index.html

Point your browser to http://<ipaddress>/mini and start testing your speed against the local server.
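
As a quick sanity check from the server's own shell (assuming the default DirectoryIndex, you should get an HTTP 200 back):

#curl -sI http://localhost/mini/ | head -1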

Thanks