Sunday, 17 April 2016

Disaster Recovery using Relax-and-Recover (REAR) - Redhat Linux

Relax-and-Recover (ReaR) is very simple to use. I wanted to write this up anyway, because before I apply OS, application, or security patches (and similar changes) I want a complete backup of the server to fall back on in the worst case, since these systems are production critical.

We first need to install the rear package, which is available from the EPEL repository. Before proceeding, here is the environment:

Environment  : Oracle Linux 6 with the Red Hat kernel (2.6.32-573.el6.x86_64)
rear version : rear-1.18-3.el6.x86_64
DR copy      : NFS storage (nfsserver.testlabs.com)

The rear package is available in the EPEL repository and can be installed from any EPEL mirror; if you do not have the repository configured yet, just create the repo file by hand as below:
hostname# cat > /etc/yum.repos.d/epel.repo

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
# a baseurl or mirrorlist line for an EPEL 6 mirror is also needed; the Fedora metalink is one option
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

Ctrl-D

hostname# yum install rear 

Note: Make sure 'genisoimage' and 'syslinux' are already installed; without them rear will not install.
hostname# yum install genisoimage syslinux

Next, tell rear where the backup should go. This is defined as below:

hostname# cat > /etc/rear/local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://nfsserver.testlabs.com/dr/"

Ctrl-D


The resulting ISO image is what you boot for DR recovery; during recovery it pulls the backup from the NFS share in order to restore files and directories.

hostname# rear -v mkbackup
Relax-and-Recover 1.18 / Git
Using log file: /var/log/rear/rear-hostname.log
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-hostname.iso (74M)
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.NZP1vXar0Vmq5nr/outputfs/hostname/backup.tar.gz'
Archived 14 MiB [avg 3584 KiB/sec]
.
.
.
.

Archived 5644 MiB [avg 8268 KiB/sec]OK
Archived 5644 MiB in 700 seconds [avg 8256 KiB/sec]
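
Once a manual mkbackup run completes like this, you could also schedule it so a fresh ISO and archive land on the NFS share regularly. A sketch of a weekly cron entry (the schedule and log path are only examples; rear normally lives in /usr/sbin on RHEL):

hostname# crontab -e
# run a full ReaR backup every Sunday at 02:00
0 2 * * 0 /usr/sbin/rear mkbackup >> /var/log/rear/cron.log 2>&1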

All your system files are now backed up to the NFS server; you can confirm this by logging in to the storage box.

nfsserver# pwd
/dr/hostname
nfsserver# ls
./                  README              backup.log          rear-hostname.iso
../                 VERSION             backup.tar.gz       rear.log
nfsserver#
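
If you want to be doubly sure the archive is usable, a quick peek inside it from the NFS server does not hurt:

nfsserver# tar -tzf /dr/hostname/backup.tar.gz | head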

When your server is unable to boot, libraries are corrupted, or it is broken for any other reason, you can take the ISO image from the NFS path, boot from it as a CD-ROM, and recover.

I tested this myself and am sharing the result with all readers.

I will deliberately break the server (remove binaries, remove boot files, and so on) and then restore it from the ISO image and the backup on the NFS location.

hostname# rm -rf /boot/*
hostname# ls -l /boot
total 0
hostname# 

hostname# rm -rf /bin/*
hostname# ls
-bash: /bin/ls: No such file or directory
hostname#

When you boot from the ISO image, choose the 'Recover hostname' entry from the boot menu.


After booting, you land in the rescue shell. Before running the restore, check that you can reach your NFS server.
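
A quick sanity check from the rescue shell, assuming the usual network and NFS client tools are included in the rescue image:

RESCUE:~ # ping -c 2 nfsserver.testlabs.com
RESCUE:~ # showmount -e nfsserver.testlabs.com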

RESCUE: rear recover



This starts copying all the data from the NFS share back to the client server. It can take minutes or hours depending on the amount of data. Once it has completed, reboot the client machine and it will be operational again.

hostname: # ls -l /boot/ | wc -l
15
hostname:

Your system is now restored from the backup.

There is really no excuse for not using this tool: it is easy and simple, and keeping one copy of the ISO image will save a lot of time and effort in the worst case. Plan for the best, but prepare for the worst.

Thanks for all who read this post.

Sunday, 20 March 2016

Xen disk hot addition/removal for guests

Xen allows you to hot add (and remove) disks on a guest domU while the system is running; I needed this recently to add a new disk to a guest for extra swap space.

Lets take a look at how to add disks to the guests :

I will attach an image-based disk from the Xen Dom0 so it becomes available to the guest, where it can then be formatted, mounted and used just like any other block device. The xm block-attach command is used to bring it online.

I created a 4 GB image file to use as the swap disk for the guest:
xendom0# dd if=/dev/zero of=testvm-swapdisk.img bs=1M count=4k

xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]
    Domain   - guest domain to which the disk is attached
    BackDev  - backend block device or image file (image files are normally given with a file: prefix and an absolute path, e.g. file:/<dir>/testvm-swapdisk.img)
    FrontDev - device name to assign to the new device in the domU
    Mode     - access mode, read-only (r) or read/write (w)

xendom0# xm block-attach testvm testvm-swapdisk.img /dev/xvdd w

On Guest :

testvm ~]# lsblk -i | tail -1
xvdd                        202:48   0    4G  0 disk
testvm ~]# fdisk /dev/xvdd        (create a single partition and set its type to 82, Linux swap)
testvm ~]#mkswap /dev/xvdd1
testvm ~]#swapon /dev/xvdd1
testvm ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       819196  0       -1
/dev/xvdd1                              partition       4192928 0       -2
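
To make the new swap persistent inside the guest across its own reboots, add an /etc/fstab entry along these lines (a sketch using the device created above):

testvm ~]# echo "/dev/xvdd1  swap  swap  defaults  0 0" >> /etc/fstab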

Removing a disk from the guest:

If a disk is no longer needed by the guest, stop using it inside the guest first (swapoff the swap area or unmount the file system and delete the partitions), then detach the disk from the Xen Dom0:

xendom0#xm block-detach testvm /dev/xvdd

Remember to also add the new disk to the guest's Xen config file so the change is permanent and the disk comes back after the next reboot.
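
For the permanent change, the disk list in the guest's config file would gain an extra entry roughly like this; the image paths and the existing root-disk line are assumptions, adjust them to your layout:

# illustrative disk section in testvm's Xen config file
disk = [ 'file:/path/to/testvm-root.img,xvda,w',
         'file:/path/to/testvm-swapdisk.img,xvdd,w' ]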

Sunday, 28 February 2016

Centralized Log Management using rsyslog CentOS 6/7

I am building a centralized log server that stores the logs from all clients. Make sure the server has enough disk space to hold logs for every client. I will also configure log rotation to save space on the disk.

Environment - CentOS/Redhat 6.6
rsyslog version - 5.8.10

rsyslog is installed by default; in case it is not, install it with yum:
#yum install rsyslog

It helps to read the rsyslog.conf man page before starting. The configuration has three main parts:

1. Modules - rsyslog follows a modular design
2. Global directives - set global properties for rsyslog
3. Rules - define what gets logged and where

The destination log server will receive all the logs (audit, sudo, su, shell history, kernel, etc.) sent by the clients.

Let's configure server first,  

Edit, 
#vim /etc/rsyslog.conf 

# Make sure syslog reception is enabled for both UDP and TCP.
$ModLoad imudp
$UDPServerRun 514

$ModLoad imtcp
$InputTCPServerRun 514

# Create a template so that logs from each client are written under that client's hostname, with one file per program, in the destination path below. Messages matching the facility.priority selectors below are handed to the rsyslog daemon for central logging.

$template TmplAuth,"/scratch/remote-sys-logs/%fromhost%/%PROGRAMNAME%.log"
authpriv.*   ?TmplAuth
*.info;mail.none;authpriv.none;cron.none;local6.*  ?TmplAuth

# Since I also need audit.log, I create a dedicated rule so that it lands in the same per-client destination folder.

$template TmplAudit,"/scratch/remote-sys-logs/%fromhost%/audit.log"
local6.*        ?TmplAudit

# Log all bash terminal commands and store them in the centralized location.

$template TmplCmds,"/scratch/remote-sys-logs/%fromhost%/hist.log"
local0.debug    ?TmplCmds

save and quit the file.

# mkdir /scratch/remote-sys-logs
# service rsyslog restart

Since the logs can get big, I rotate them daily, keeping two rotated files, compressing them, and keeping logs for at most 60 days. Rotated files get the date as their filename extension, and after each rotation rsyslog is sent a HUP so it reopens its log files.

Edit, 
#vim /etc/logrotate.d/remote-sys-logs
/scratch/remote-sys-logs/*/*.log {
    daily
    dateext
    rotate 2
    compress
    create 644 root root
    notifempty
    missingok
    maxage 60
    sharedscripts
    postrotate
     /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
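
Before relying on it, a logrotate dry run is a cheap way to confirm the new rule is picked up and parsed; -d only shows what would be done without rotating anything:

# logrotate -d /etc/logrotate.d/remote-sys-logs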

Client: 

Edit. 
vim /etc/rsyslog.conf

# The imfile module is not loaded by default; load it so rsyslog can turn any plain text file (such as the audit log) into syslog messages.
$ModLoad imfile

# Add the input definition below to ship the audit log to the centralized log server; without it the audit log will never reach the central server.


$InputFileName /var/log/audit/audit.log
$InputFileTag audit:
$InputFileStateFile audit.log
$InputFileSeverity info
$InputFileFacility local6
$InputRunFileMonitor

# Forward all logs to the centralized server (a single @ forwards over UDP; use @@ for TCP).

*.*                     @centrallogserver:514

save and quit
#service rsyslog restart
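
To verify the whole path end to end, you could send a test message from the client and check that it lands under that client's directory on the log server (the client hostname below is a placeholder):

client# logger -p authpriv.info "central logging test"
centrallogserver# grep -r "central logging test" /scratch/remote-sys-logs/<client-hostname>/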

In order to log all the bash commands to syslog, make an entry in the global /etc/bashrc config file.


#export so that all the new bash commands are being logged to the file
export PROMPT_COMMAND='RETRN_VAL=$?;logger -p local0.debug "$(whoami):[$$] $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//" ) [$RETRN_VAL]#"'

Start a new bash shell (or log in again) so that the new PROMPT_COMMAND takes effect and the command history starts flowing to the centralized log server.

Hope this helps anyone who would like to build a central log server. The references below can be checked to adapt this to your requirements.

Thanks for studying and re-sharing !

References :
https://en.wikipedia.org/wiki/Syslog - Syslog facility and priorities explained 
man logrotate.conf
man rsyslog.conf

Monday, 18 January 2016

Rescue environment for a paravirtualized VM (Xen virtualization) - Red Hat 7 on OVM 3.3.3

Objective: boot a guest into the rescue environment from the Dom0, mounting the ISO image and bypassing pygrub, in order to rename the root volume group.

Environment: 
            : Oracle Virtual server 3.3.3 X86_64 (HVM)
            : Redhat 7.0 x86_64 (Redhat OS)

Recently I had to rename the volume group in one of my guest machines, a paravirtualized Red Hat guest. Since the root file system (/) lives on an LV, I had to boot the guest into the rescue environment. To boot the guest directly from the kernel and initrd used for installation, I copied the installer's vmlinuz and initrd.img to /OVS/Repositories/redhat7/ on my OVM hypervisor, and then told the guest to boot from that kernel and initrd into the rescue environment:

kernel='/OVS/Repositories/redhat7/vmlinuz'
ramdisk='/OVS/Repositories/redhat7/initrd.img'
extra="rescue method=/mnt" or extra="install=hd:xvdc rescue=1 xencons=tty"

There are two ways to point the rescue environment at the installation media:

1. If the ISO image is mounted on a temporary mount point (mount -o loop redhat7.iso /mnt), point the rescue method at that path:

extra="rescue method=/mnt"

2. If the ISO image is attached to the guest OS as a block device and you know the device name, point the rescue at that disk. In my case the block device was xvdc:

extra="install=hd:xvdc rescue=1 xencons=tty"

These are passed as extra arguments to the kernel and tell the anaconda installer where to find the installation files.
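
Putting it together, the temporary guest config used for the rescue boot could look roughly like this; the memory size, disk image path and the ISO attachment are assumptions for illustration, only the kernel/ramdisk/extra lines come from the steps above:

# illustrative temporary config (redhat7.cfg) for the rescue boot
name    = 'redhat7'
memory  = 2048
kernel  = '/OVS/Repositories/redhat7/vmlinuz'
ramdisk = '/OVS/Repositories/redhat7/initrd.img'
extra   = 'install=hd:xvdc rescue=1 xencons=tty'
disk    = [ 'file:/OVS/Repositories/redhat7/guest_root.img,xvda,w',
            'file:/OVS/Repositories/redhat7/rhel7.iso,xvdc:cdrom,r' ]
vif     = [ '' ]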


I had written a post in the past on renaming VG/LV on CentOS 6 (http://goo.gl/M71G5a); there is not much difference here, except that Red Hat 7 uses GRUB2.

I took the LVs offline, renamed the VG, updated the entries in /etc/fstab and the GRUB2 configuration, and regenerated the GRUB config.

- Scan the logical volumes with lvscan; if they are still active, deactivate them:


sh-4.2# lvchange -an /dev/<vgname>/swap
sh-4.2# lvchange -an /dev/<vgname>/root

- Change the VG name 
sh-4.2# lvm vgrename <old_vgname> <new_vgname>
Volume group "<old_vgname>" successfully renamed to "<new_vgname>"

- Make sure all your /etc/fstab entries point to the new volume group.

- Let the GRUB2 configuration know about the new volume group name (on Red Hat 7 the kernel command line typically carries rd.lvm.lv=<vgname>/root and rd.lvm.lv=<vgname>/swap entries, usually set in /etc/default/grub), then regenerate the GRUB config file:
sh-4.2# grub2-mkconfig -o /boot/grub2/grub.cfg

Once all that is done, remove the rescue entries (kernel, ramdisk, extra) from the guest config file and boot the guest:

OVM# xm create -c redhat7.cfg


Thank you for reading and re-sharing.

Sunday, 10 January 2016

Security updates and installation using YUM - RHEL 5/6/7

Hello All, 

I came across a situation where I wanted to check, verify and apply security updates on different releases of RHEL, and I could not find everything in one place. So I am putting it all together here and sharing it publicly.
How the security update tooling maps across RHEL releases:

Install the security plugin
    RHEL 5 : yum install yum-security
    RHEL 6 : yum install yum-plugin-security
    RHEL 7 : no plugin required, the functionality is already part of yum

List all available errata without installing
    RHEL 5/6 : yum list-sec
    RHEL 7   : yum updateinfo list available

List all available security updates without installing
    RHEL 5/6 : yum list-security --security
    RHEL 7   : yum updateinfo list security all
               yum updateinfo list sec

List currently installed security updates
    RHEL 5/6 : yum list-sec
    RHEL 7   : yum updateinfo list security installed

List all security updates with verbose descriptions
    yum list-sec

Apply all security updates from RHN
    yum -y update --security

Update based on a CVE reference
    yum update --cve <CVE>

View available advisories by severity
    yum updateinfo list

Show detailed information about an advisory before applying it
    yum updateinfo RHSA-2015:XXXX

Apply only one specific advisory
    yum update --advisory=RHSA-2015:XXXX

More information can be found in man yum-security
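
As a quick worked example, a typical patch cycle on RHEL 7 just strings a few of the commands above together:

# list the pending security errata
yum updateinfo list security all
# read the advisory details before applying it
yum updateinfo RHSA-2015:XXXX
# apply only that one advisory
yum update --advisory=RHSA-2015:XXXX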

First post in year 2016, wishing all of you - HAPPY NEW YEAR :)

Thanks

Sunday, 1 November 2015

Reset root password by accessing file system on Guest OS from Physical host - CentOS

Everyone knows how to reset a forgotten root password on Linux (http://goo.gl/6j9u2k), but in this article, since I am using a guest OS on a KVM hypervisor, I will demonstrate how to mount the guest's root file system from the physical host and reset the password there.

Details:

Hostname: kvm1
Diskname: vm1
path:     /var/lib/libvirt/images/vm1.img 

- First, shut down the VM; doing this while it is running can cause disk corruption.
#virsh shutdown vm1

- Check your VM is in shut off state
#virsh list --all

- Get an unused loop device
#losetup -f
/dev/loop0

- Map VM image to your loop device
#losetup /dev/loop0 /var/lib/libvirt/images/vm1.img

- Print the partition table of the image file that is now mapped to the loop device, and identify the partition holding the root file system.
#fdisk -l /dev/loop0

Disk /dev/loop0: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000907df

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1   *        2048     1050623      524288   83  Linux
/dev/loop0p2         1050624     3147775     1048576   82  Linux swap / Solaris
/dev/loop0p3         3147776    20971519     8911872   83  Linux

- In order to mount the VM's partitions, you need to create partition mappings
#kpartx -av /dev/loop0

- Here the guest's root file system was on /dev/vda3 inside the VM, which corresponds to /dev/loop0p3 on the host; mount that partition:
#mount /dev/mapper/loop0p3 /mnt

- Remove the password field from the root entry in the guest's /etc/shadow:
#vim /mnt/etc/shadow
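
The root entry in the copied shadow file should go from having a password hash in the second field to having that field empty, something like this (the hash and dates here are only illustrative):

before:  root:$6$Qx0gQkIu$Yz1...:16714:0:99999:7:::
after:   root::16714:0:99999:7:::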

Note: If the guest has SELinux enabled, you must trigger an automatic relabel on the next boot, otherwise you will not be able to log in:

#touch /mnt/.autorelabel

- Once done, remove your mappings and start the VM. 
#umount /mnt
#kpartx -dv /dev/loop0
#losetup -d /dev/loop0

- Start your VM 
#virsh start vm1

- From the console you can now log in as 'root' without providing a password; set a new password right away with 'passwd'.

Friday, 11 September 2015

zfs cheat sheet - Creation of Storagepools & Filesystems using zpool & zfs #Solaris 11

The ZFS file system is a file system that fundamentally changes the way file systems are administered, with features and benefits not found in other file systems available today. ZFS is robust, scalable, and easy to administer.

ZFS uses the concept of storage pools to manage physical storage and eliminates volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. File systems are no longer constrained to individual devices, allowing them to share disk space with all file systems in the pool. You no longer need to predetermine the size of a file system, as file systems grow automatically within the disk space allocated to the storage pool. When new storage is added, all file systems within the pool can immediately use the additional disk space without additional work.

zpool commands :

zpool create testpool c0t0d0
    Create a simple pool named testpool with a single disk; the default mount point is the pool name (/testpool).
    Optional: -n do a dry run of the pool creation, -f force creation of the pool
zpool create testpool mirror c0t0d0 c0t0d1
    Create testpool mirroring c0t0d0 with c0t0d1; the default mount point is the pool name (/testpool).
zpool create -m /mypool testpool c0t0d0
    Create the pool with a mount point (/mypool) different from the default.
zpool create testpool raidz c2t1d0 c2t2d0 c2t3d0
    Create a RAID-Z testpool.
zpool add testpool raidz c2t4d0 c2t5d0 c2t6d0
    Add RAID-Z disks to testpool.
zpool create testpool raidz1 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    Create a RAIDZ-1 testpool.
zpool create testpool raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    Create a RAIDZ-2 testpool.
zpool add testpool spare c2t6d0
    Add a spare device to testpool.
zpool create testpool mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0
    Disk c2t1d0 mirrored with c2t2d0, and c2t3d0 mirrored with c2t4d0.
zpool remove testpool c2t6d0
    Remove hot spares and cache disks.
zpool detach testpool c2t4d0
    Detach the mirror disk from the pool.
zpool clear testpool c2t4d0
    Clear the fault on a specific disk.
zpool replace testpool c3t4d0
    Replace a disk with itself (like-for-like replacement).
zpool replace testpool c3t4d0 c3t5d0
    Replace one disk with another disk.
zpool export testpool
    Export the pool from the system.
zpool import testpool
    Import a specific pool.
zpool import -f -D -d /testpool testpool
    Import the destroyed testpool.
zpool import testpool newtestpool
    Import a pool originally named testpool under the new name newtestpool.
zpool import 88746667466648
    Import a pool using its ID.
zpool offline testpool c2t4d0
    Offline a disk in the pool.
    Note: zpool offline -t testpool c2t4d0 offlines the disk only temporarily.
zpool upgrade -a
    Upgrade all pools.
zpool upgrade testpool
    Upgrade a specific pool.
zpool status -x
    Health status of all pools.
zpool status testpool
    Status of the pool in verbose mode.
zpool get all testpool
    List all the properties of the storage pool.
zpool set autoexpand=on testpool
    Set a property value on the storage pool.
    Note: zpool get all testpool shows all the properties that can be set this way.
zpool list
    List all pools.
zpool list -o name,size,altroot
    Show selected properties of the pools.
zpool history
    Display the history of the pool.
    Note: once the pool is destroyed, its history is removed with it.
zpool iostat 2 2
    Display ZFS I/O statistics.
zpool destroy testpool
    Remove the storage pool.


zfs commands :

zfs list
    List the ZFS file systems.
zfs list -t filesystem
zfs list -t snapshot
zfs list -t volume
    List only file systems, snapshots or volumes respectively.
zfs create testpool/filesystem1
    Create a ZFS file system in the testpool storage pool.
zfs create -o mountpoint=/filesystem1 testpool/filesystem1
    Create the file system with a mount point different from the default.
zfs rename testpool/filesystem1 testpool/filesystem2
    Rename the ZFS file system.
zfs unmount testpool
    Unmount the storage pool.
zfs mount testpool
    Mount the storage pool.
NFS exports in ZFS
    zfs share testpool            - share the file system for export
    zfs set share.nfs=on testpool - make the share persistent across reboots
    svcs -a nfs/server            - the NFS server service should be online
    cat /etc/dfs/dfstab           - the exported entry appears in this file
    showmount -e                  - confirms the storage pool has been exported
zfs unshare testpool
    Remove the NFS export.
zfs destroy -r testpool
    Destroy the storage pool and all datasets under it.
zfs set quota=1G testpool/filesystem1
    Set a quota of 1 GB on filesystem1.
zfs set reservation=1G testpool/filesystem1
    Set a reservation of 1 GB on filesystem1.
zfs set mountpoint=legacy testpool/filesystem1
    Disable ZFS automatic mounting and manage the mount through /etc/vfstab instead.
zfs unmount testpool/filesystem1
    Unmount ZFS filesystem1 in testpool.
zfs mount testpool/filesystem1
    Mount ZFS filesystem1 in testpool.
zfs mount -a
    Mount all ZFS file systems.
zfs snapshot testpool/filesystem1@friday
    Create a snapshot of filesystem1.
zfs hold keep testpool/filesystem1@friday
    Place a hold (tag "keep") on the snapshot; attempts to destroy it with zfs destroy will fail.
zfs rename testpool/filesystem1@friday FRIDAY
    Rename the snapshot.
    Note: a snapshot must stay within the same pool and dataset when renamed.
zfs diff testpool/filesystem1@friday testpool/filesystem1@friday1
    Identify the differences between two snapshots.
zfs holds testpool/filesystem1@friday
    Display the list of holds on the snapshot.
zfs rollback -r testpool/filesystem1@friday
    Roll back to the @friday snapshot (-r also destroys any more recent snapshots).
zfs destroy testpool/filesystem1@thursday
    Destroy the @thursday snapshot.
zfs clone testpool/filesystem1@friday testpool/clones/friday
    Create a clone from the snapshot.
    Note: a clone cannot be created in a pool different from the one holding the original snapshot.
zfs destroy testpool/clones/friday
    Destroy the clone.
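
To tie a few of these together, here is a minimal end-to-end example: create a mirrored pool, carve a file system out of it with a quota, and snapshot it (the disk names are illustrative):

zpool create testpool mirror c0t0d0 c0t0d1
zfs create testpool/filesystem1
zfs set quota=1G testpool/filesystem1
zfs snapshot testpool/filesystem1@friday
zfs list -t snapshot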

Thanks,