Sunday, 19 June 2016

Docker Basics & Container Customization - Linux

Objective:
Learn how to customize a Docker container image and use it to instantiate application instances across different Linux servers

Introduction: 
Docker captures a full application environment in a virtual container that can be deployed across different Linux servers. System administrators and software developers are learning that Docker can help them deploy application images on Linux quickly, reliably, and consistently without dependency and portability problems. A Docker container defines an application and its dependencies in a small text file (a Dockerfile) that can be moved to different Linux releases and quickly rebuilt. Dockerized applications are also easy to migrate to other Linux servers, whether they run on bare metal, in a virtual machine, or as Linux instances in the cloud.

I will demonstrate how to create a Docker container on RHEL 7, modify it, and use it to deploy multiple application instances. Docker containers are a lightweight virtualization technology for Linux. They provide isolation from other applications and processes running on the same system but make system calls to the same shared Linux kernel, similar to Linux LXC application containers. Docker containers have their own namespaces, so they are fully isolated from one another: processes running in one container can't see or impact processes running in another. By default, each container gets its own networking stack, private network interfaces, and IP address, and Docker creates a virtual bridge so containers can communicate.



Getting Started 
I am using a Docker installation on Red Hat 7.2; the installation document can be found at https://docs.docker.com/v1.8/installation/rhel/

You can have your own Docker Hub repository to store images that are used to build running containers. I will pull a few images from the Docker Hub repository for the test environment.

[sunlnx@sandbox ~]$ docker pull ubuntu:latest
[sunlnx@sandbox ~]$ docker pull oraclelinux:6
[sunlnx@sandbox ~]$ docker pull oraclelinux:7
[sunlnx@sandbox ~]$ docker pull rhel:latest
[sunlnx@sandbox ~]$ docker pull mysql/mysql-server
[sunlnx@sandbox ~]$ docker pull nginx:latest

To list all the Docker images that were pulled above:
[sunlnx@sandbox ~]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
nginx                latest              0d409d33b27e        2 weeks ago         182.7 MB
ubuntu               latest              2fa927b5cdd3        3 weeks ago         122 MB
oraclelinux          6                   768a3d7b605a        4 weeks ago         222.8 MB
oraclelinux          7                   df602a268e64        5 weeks ago         276.1 MB
rhel                 latest              bf2034427837        6 weeks ago         203.4 MB
mysql/mysql-server   latest              18a962a188ee        11 days ago         366.9 MB
[sunlnx@sandbox ~]$

Container Customization 
I would like to provide multiple, identical web servers across several Linux hosts, and Docker makes it easy to capture a preconfigured web server in a container image. I can then use this prebuilt image and deploy it across one or many other Linux hosts. I will create a "myweb" container and configure it to deliver web content to clients. To customize it, I start an interactive bash shell in a "myweb" container based on the oraclelinux:6 image.

[sunlnx@sandbox ~]$ docker run -it --name myweb oraclelinux:6 /bin/bash
[root@5b62adeb3abb /]#

In a shell on my Linux host, the docker ps command shows information about the running guest container:

[sunlnx@sandbox ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5b62adeb3abb        oraclelinux:6       "/bin/bash"         6 minutes ago       Up 20 seconds                           myweb
[sunlnx@sandbox ~]$

In the myweb container, I install httpd using yum and configure the web server by creating an index.html in /var/www/html:

[root@5b62adeb3abb /]# yum install -y httpd
[root@5b62adeb3abb /]# echo "Web servers main page" > /var/www/html/index.html
[root@5b62adeb3abb /]# exit

Now I want to create a new Docker image that reflects the contents of the guest container I just configured. The following docker commit command captures the modified container into a new image named mywebser/httpd:r1:

[sunlnx@sandbox ~]$ docker commit -m "ol6-httpd" `docker ps -l -q` mywebser/httpd:r1
sha256:79cf91b1a67f4ac6419b038e76c4e2de492f0eda978d2b07203e217290454108
[sunlnx@sandbox ~]$

The commit command takes the container ID of the myweb container (supplied here by docker ps -l -q) and returns the ID number of the new image. Running the docker images command now lists the new image mywebser/httpd:

[sunlnx@sandbox ~]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED              SIZE
mywebser/httpd       r1                  79cf91b1a67f        About a minute ago   766 MB

If I no longer need this container, I can remove it with the docker rm command:

[sunlnx@sandbox ~]$ docker rm myweb

Because Docker containers persist even when they're no longer running, removing unneeded containers is simply a housekeeping step to reduce clutter on my host, and it lets me reuse the name myweb for a new container.

Deploy Docker Image:
I can now deploy any number of web servers using the new Docker image as a template. The following docker run commands run the container image mywebser/httpd:r1, creating the containers myweb1 through myweb5 and executing httpd in each one:

[sunlnx@sandbox ~]$ docker run -d --name myweb1 -p 8080:80 mywebser/httpd:r1 /usr/sbin/httpd -D FOREGROUND
924018f9f7374b3a0ac24d71b6e7b41407dc1492344ef522a4796162fc0e6822
[sunlnx@sandbox ~]$ docker run -d --name myweb2 -p 8081:80 mywebser/httpd:r1 /usr/sbin/httpd -D FOREGROUND
2fc28962e5ab690edfc4e08c529a4206c3285c823ce924514da07ba0c196593a
[sunlnx@sandbox ~]$ docker run -d --name myweb3 -p 8082:80 mywebser/httpd:r1 /usr/sbin/httpd -D FOREGROUND
48964b1e06b29029781630b9734d734bb163603e13a00c2dd0a59f1e4d94ee23
[sunlnx@sandbox ~]$ docker run -d --name myweb4 -p 8083:80 mywebser/httpd:r1 /usr/sbin/httpd -D FOREGROUND
a5e970efd3f3586f8aa6d5e79b03484625ffcd4f22bac869878949eb6b5aaa48
[sunlnx@sandbox ~]$ docker run -d --name myweb5 -p 8084:80 mywebser/httpd:r1 /usr/sbin/httpd -D FOREGROUND
92bbf522aa41c07838626b03630bae63770c0678d06b7d698f05f203e8ed8b69
[sunlnx@sandbox ~]$
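Rather than typing each command by hand, the same five instances can be launched with a small shell loop; a minimal sketch, assuming the image tag and port mapping used above:

for i in 1 2 3 4 5; do
    port=$((8079 + i))    # myweb1 -> 8080 ... myweb5 -> 8084
    docker run -d --name myweb$i -p $port:80 mywebser/httpd:r1 /usr/sbin/httpd -D FOREGROUND
done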

[sunlnx@sandbox ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                  NAMES
92bbf522aa41        mywebser/httpd:r1   "/usr/sbin/httpd -D F"   About a minute ago   Up About a minute   0.0.0.0:8084->80/tcp   myweb5
a5e970efd3f3        mywebser/httpd:r1   "/usr/sbin/httpd -D F"   About a minute ago   Up About a minute   0.0.0.0:8083->80/tcp   myweb4
48964b1e06b2        mywebser/httpd:r1   "/usr/sbin/httpd -D F"   About a minute ago   Up About a minute   0.0.0.0:8082->80/tcp   myweb3
2fc28962e5ab        mywebser/httpd:r1   "/usr/sbin/httpd -D F"   About a minute ago   Up About a minute   0.0.0.0:8081->80/tcp   myweb2
924018f9f737        mywebser/httpd:r1   "/usr/sbin/httpd -D F"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp   myweb1
[sunlnx@sandbox ~]$

Using a web browser or curl, I can test the web server running in each guest:

[sunlnx@sandbox ~]$ curl http://sandbox:8080
Web servers main page
[sunlnx@sandbox ~]$ curl http://sandbox:8081
Web servers main page
[sunlnx@sandbox ~]$ curl http://sandbox:8082
Web servers main page
[sunlnx@sandbox ~]$ curl http://sandbox:8083
Web servers main page
[sunlnx@sandbox ~]$ curl http://sandbox:8084
Web servers main page

The Docker Engine also assigns each running container a virtual network interface, which you can see with the docker inspect command:
[sunlnx@sandbox ~]$ docker inspect myweb1
[sunlnx@sandbox ~]$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' myweb1
172.17.0.2
[sunlnx@sandbox ~]$
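To gather the IP address of every web container at once, the same format string can be applied in a loop; a small sketch, assuming the container names created above:

for c in myweb1 myweb2 myweb3 myweb4 myweb5; do
    echo -n "$c: "
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' $c
done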

Saving a Docker image:
You can back up the image to a tar archive using the docker save command:

[sunlnx@sandbox ~]$ docker save -o webserver1.tar mywebser/httpd:r1
[sunlnx@sandbox ~]$
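The counterpart command, docker load, restores the image from the tar archive; for example, after copying the file to a second host (the hostname below is illustrative):

[sunlnx@otherhost ~]$ docker load -i webserver1.tar
[sunlnx@otherhost ~]$ docker images    # mywebser/httpd:r1 is now available here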

Dockerfile:
Now that you've seen how to create and manipulate Docker containers from the command line, note that the preferred way to build and customize containers is actually a Dockerfile. A Dockerfile is a small text file that contains the instructions required to construct a container. When a Dockerfile is built, each instruction adds a layer to the container in a step-by-step process. The build creates a container, runs the next instruction in that container, and then commits the container. Docker then uses the committed image as the basis for adding the next layer. The benefit of this layered approach is that Dockerfiles with the same initial instructions reuse layers.
Dockerfiles also create an easily readable and modifiable record of the steps used to create a Docker image. You can find the reference at https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/

[sunlnx@sandbox dockercfg]$ cat /home/sunlnx/dockercfg/Dockerfile
FROM centos
MAINTAINER sunlnx <sunlnx@doc.com>
RUN  yum install -y httpd
RUN echo "Web servers main page" > /var/www/html/index.html
EXPOSE 80
CMD /usr/sbin/httpd -D FOREGROUND
[sunlnx@sandbox dockercfg]$

The docker build command constructs a new Docker image from this Dockerfile, creating and removing temporary containers as needed during its step-by-step build process:

[sunlnx@sandbox dockercfg]$ docker build -t centos/httpd:r1 .
Sending build context to Docker daemon 3.584 kB
Step 1 : FROM centos
latest: Pulling from library/centos
a3ed95caeb02: Pull complete
da71393503ec: Pull complete
Digest: sha256:1a62cd7c773dd5c6cf08e2e28596f6fcc99bd97e38c9b324163e0da90ed27562
Status: Downloaded newer image for centos:latest
 ---> 904d6c400333
Step 2 : MAINTAINER sunlnx <sunlnx@doc.com>
 ---> Running in f9303082b870
 ---> fd756b44b2d3
Removing intermediate container f9303082b870
Step 3 : RUN yum install -y httpd
 ---> Running in f0affc8dc005
Loaded plugins: fastestmirror, ovl
.
.
<snip>

Complete!
 ---> d8f46afa67e1
Removing intermediate container f0affc8dc005
Step 4 : RUN echo "Web servers main page" > /var/www/html/index.html
 ---> Running in a732be9c4d06
 ---> f1825360762f
Removing intermediate container a732be9c4d06
Step 5 : EXPOSE 80
 ---> Running in 318e22854e4e
 ---> eeb133e3722a
Removing intermediate container 318e22854e4e
Step 6 : CMD /usr/sbin/httpd -D FOREGROUND
 ---> Running in 1da7959c9c03
 ---> 47416f98d5ad
Removing intermediate container 1da7959c9c03
Successfully built 47416f98d5ad
[sunlnx@sandbox dockercfg]$

[sunlnx@sandbox dockercfg]$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
centos/httpd         r1                  47416f98d5ad        28 minutes ago      311 MB

[sunlnx@sandbox ~]$ docker run -d --name centosweb -p 8085:80 centos/httpd:r1 /usr/sbin/httpd -D FOREGROUND

[sunlnx@sandbox ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                  NAMES
7779813db3df        centos/httpd:r1     "/usr/sbin/httpd -D F"   About a minute ago   Up About a minute   0.0.0.0:8085->80/tcp   centosweb

[sunlnx@sandbox ~]$ curl http://sandbox:8085
Web servers main page
[sunlnx@sandbox ~]$

More information can be found at https://docs.docker.com/. Please do visit, and enjoy Dockering!

Thanks for re-sharing !

Monday, 23 May 2016

NFS common errors and troubleshooting - Linux/Unix

I have seen some NFS errors and issues that occur commonly, now and then, for most Linux/Unix system administrators, so I decided to put them all in one place. Hope this helps.

Environment: Linux/Unix

Error: "Server Not Responding"

Check that the NFS server and the client can exchange RPC messages and that both are functional/online.

Use ping and traceroute to check that they can reach each other; if not, check your NIC using ethtool and verify the IP address.

Heavy server or network load can also cause the RPC response to time out, producing this error message; try increasing the timeout mount option.
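The timeo and retrans mount options control how long the client waits for an RPC reply and how often it retries; an illustrative mount (the values are examples, not recommendations):

# timeo is in tenths of a second, so timeo=600 waits 60 seconds per attempt
#mount -o timeo=600,retrans=5 nfsserver:/export /mnt/nfs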

Error: "rpc mount export: RPC: Timed out " 

The NFS server or client was unable to resolve DNS. Check that forward and reverse DNS name resolution works.
Check your DNS servers or /etc/hosts.
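A quick way to verify resolution from both machines (the hostname and IP below are placeholders):

#nslookup nfsserver           ==> forward lookup
#nslookup 192.168.1.10        ==> reverse lookup of the server's IP
#getent hosts nfsserver       ==> honours /etc/hosts as well as DNS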

 Error: "Access Denied" or "Permission Denied"

Check the export permissions for the NFS file systems:
#showmount -e nfsserver  ==> client 
#exportfs -a ==> server

Check that you don't have any syntax issues in the /etc/exports file (e.g. spaces, permissions, typos, etc.).
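The classic mistake is a space between the host and the option list; a sample entry (the path and network are placeholders):

/export/data   192.168.1.0/24(rw,sync)

A space, as in "/export/data 192.168.1.0/24 (rw,sync)", exports the share to the world with those options instead. After editing /etc/exports, re-export with exportfs -ra.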

Error: "RPC: Port mapper failure - RPC: Unable to receive"

NFS requires both the NFS service and the portmapper service to be running on both the client and the server. Check with:

#rpcinfo -p
       or
#/etc/init.d/portmap status

If not, start the portmap service (named rpcbind on RHEL 6 and later).
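Depending on the release, either of these starts it (the service name differs between RHEL 5 and 6):

#/etc/init.d/portmap start    ==> RHEL 5
#service rpcbind start        ==> RHEL 6 and later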

Error: "NFS Stale File Handle"

The open() system call accesses an NFS file the same way an application opens a local file: it returns a file descriptor, or handle, that the program then uses in I/O calls to identify the file it is manipulating.

When an NFS share is unexported, or the NFS server changes the file handle, any NFS client that attempts further I/O on the share will receive the 'NFS Stale File Handle' error.

on the client :

umount -f /nfsmount and remount; if it cannot be unmounted,
kill the processes which use /nfsmount

or 

In case the above options don't work, you can reboot the client to clear the stale NFS handle.
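A typical recovery sequence on the client looks like this (assuming the mount point is /nfsmount and an /etc/fstab entry exists for it):

#fuser -km /nfsmount    ==> kill processes holding the mount busy
#umount -f /nfsmount    ==> force the unmount
#mount /nfsmount        ==> remount from /etc/fstab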

Error: "No route to host"

This can be reported when the client attempts to mount the NFS file system, even when the client can ping the server successfully.

This can be due to RPC messages being filtered by the host firewall, the client firewall, or a network switch. Verify the firewall rules: stop or adjust iptables and check that port 2049 is reachable.
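Two quick reachability checks from the client (the hostname is a placeholder):

#rpcinfo -t nfsserver nfs    ==> RPC-level check of the nfs program over TCP
#telnet nfsserver 2049       ==> raw TCP check of the NFS port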

Hope this helps everyone who uses NFS regularly; these are the issues I have run into most often in my experience.

Thanks for sharing !

Sunday, 15 May 2016

CentOS/RHEL 7 kernel dump & debug

Applies : CentOS / RHEL / OEL 7 

Arch : x86_64

When kdump is enabled, the system boots a second kernel from the context of the crashed kernel. This second kernel uses a small amount of reserved memory, and its only purpose is to capture the core dump image in case the system crashes. Being able to analyze the core dump helps significantly in determining the exact cause of the system failure.

Configuring kdump :

The kdump service comes with the kexec-tools package, which needs to be installed:

#yum install kexec-tools

Reserve the amount of memory to be set aside for kdump by adding the crashkernel=<size> kernel parameter:


# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=128M  vconsole.keymap=us rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
#

Regenerate the GRUB configuration and reboot for the kernel parameter to take effect:

# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Warning: Please don't use old title `CentOS Linux, with Linux 3.10.0-123.el7.x86_64' for GRUB_DEFAULT, use `Advanced options for CentOS Linux>CentOS Linux, with Linux 3.10.0-123.el7.x86_64' (for versions before 2.00) or `gnulinux-advanced-1a06e03f-ad9b-44bf-a972-3a821fca1254>gnulinux-3.10.0-123.el7.x86_64-advanced-1a06e03f-ad9b-44bf-a972-3a821fca1254' (for 2.00 or later)
Found linux image: /boot/vmlinuz-0-rescue-ae1ddf63f5e04857b5e89cd8fcf1f9e1
Found initrd image: /boot/initramfs-0-rescue-ae1ddf63f5e04857b5e89cd8fcf1f9e1.img
done
#

Modify kdump settings in /etc/kdump.conf

By default the vmcore is stored in the /var/crash directory; if you want it dumped to a particular partition, disk, or NFS share instead, it must be defined here:

ext3 /dev/sdd1
or
net nfs.yourdomain.com:/export/dump

Compress the vmcore file to reduce its size:
core_collector makedumpfile -c

By default, once the crash is captured, the root fs is mounted and /sbin/init is run; change the behaviour as below to reboot instead:
default reboot
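Putting those directives together, a minimal /etc/kdump.conf might look like this (the -d 31 dump level, which skips free and other excludable pages, is a common choice rather than a value from this setup):

path /var/crash
core_collector makedumpfile -c -d 31
default reboot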

Start your kdump: 

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=1a06e03f-ad9b-44bf-a972-3a821fca1254 ro rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=128M vconsole.keymap=us rhgb quiet

# grep -v  '#' /etc/sysconfig/kdump | sed '/^$/d'
KDUMP_KERNELVER=""
KDUMP_COMMANDLINE=""
KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug"
KEXEC_ARGS=""
KDUMP_BOOTDIR="/boot"
KDUMP_IMG="vmlinuz"
KDUMP_IMG_EXT=""
#

# systemctl enable kdump.service
# systemctl start kdump.service
# systemctl is-active kdump
active
#
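Before forcing a crash, it is worth confirming that the memory reservation took effect and that the capture kernel is loaded:

# dmesg | grep -i crashkernel          ==> boot log should show the reservation
# cat /sys/kernel/kexec_crash_loaded   ==> prints 1 when the capture kernel is loaded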

Test your configuration 

# echo 1 > /proc/sys/kernel/sysrq
# echo c > /proc/sysrq-trigger



You can see that the crash dump was generated. To analyse it, install the crash utility along with the debug kernel packages.

#yum install crash

I downloaded them from https://oss.oracle.com/ol7/debuginfo/; check your kernel version and download the matching debug kernel packages.

#rpm -ivh kernel-debuginfo-common-x86_64-3.10.0-123.el7.x86_64.rpm \
               kernel-debuginfo-3.10.0-123.el7.x86_64.rpm \
               kernel-debug-debuginfo-3.10.0-123.el7.x86_64.rpm

# ls -lh /var/crash/127.0.0.1-2016.05.15-04\:50\:40/vmcore
-rw-------. 1 root root 168M May 15 04:51 /var/crash/127.0.0.1-2016.05.15-04:50:40/vmcore
#

# crash /var/crash/127.0.0.1-2016.05.15-04\:50\:40/vmcore /usr/lib/debug/lib/modules/`uname -r`/vmlinux

WARNING: kernel version inconsistency between vmlinux and dumpfile

      KERNEL: /usr/lib/debug/lib/modules/3.10.0-123.el7.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2016.05.15-04:50:40/vmcore
        CPUS: 1
        DATE: Sun May 15 04:50:38 2016
      UPTIME: 00:10:24
LOAD AVERAGE: 0.02, 0.07, 0.05
       TASKS: 104
    NODENAME: slnxcen01
     RELEASE: 3.10.0-123.el7.x86_64
     VERSION: #1 SMP Mon Jun 30 12:09:22 UTC 2014
     MACHINE: x86_64  (2294 Mhz)
      MEMORY: 1.4 GB
       PANIC: "Oops: 0002 [#1] SMP " (check log for details)
         PID: 2266
     COMMAND: "bash"
        TASK: ffff880055650b60  [THREAD_INFO: ffff880053fb2000]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)

crash>


crash> bt
PID: 2266   TASK: ffff880055650b60  CPU: 0   COMMAND: "bash"
 #0 [ffff880053fb3a98] machine_kexec at ffffffff81041181
 #1 [ffff880053fb3af0] crash_kexec at ffffffff810cf0e2
 #2 [ffff880053fb3bc0] oops_end at ffffffff815ea548
.
.
.
crash> files
PID: 2266   TASK: ffff880055650b60  CPU: 0   COMMAND: "bash"
ROOT: /    CWD: /root
 FD       FILE            DENTRY           INODE       TYPE PATH
  0 ffff880053c47a00 ffff8800563383c0 ffff880055bad2f0 CHR  /dev/tty1
  1 ffff8800542a9100 ffff88004dd4ff00 ffff88004dc0b750 REG  /proc/sysrq-trigger
.
.
.
That concludes the article.


Sunday, 17 April 2016

Disaster Recovery using Relax-and-Recover (REAR) - Redhat Linux

Relax-and-Recover is very simple to use. I wanted to write it up anyway because, before I apply OS, application, or security patches and the like, I want to keep a complete backup of the server to fall back on in the worst case, as these servers are production critical.

We first need to install the rear package, which can be downloaded from the EPEL repository. Before I proceed, here is the environment:

Environment : Oracle Linux 6 with Red Hat kernel (2.6.32-573.el6.x86_64)
rear version : rear-1.18-3.el6.x86_64
DR copy :     NFS storage (nfsserver.testlabs.com)

The rear package is in the EPEL repository and can be downloaded from one of its mirrors; otherwise, just copy and paste the definition below:
hostname#cat > /etc/yum.repos.d/epel.repo

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

Ctrl-D

hostname# yum install rear 

Note: Make sure 'genisoimage' and 'syslinux' are already installed; without them, rear won't install.
hostname#yum install genisoimage syslinux

Let rear know which location it should back up to. It is defined below:

hostname#cat >/etc/rear/local.conf 
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://nfsserver.testlabs.com/dr/"

Ctrl-D


The resulting ISO image is what you boot for DR recovery; the restore then pulls files and directories back from the NFS backup.

hostname# rear -v mkbackup
Relax-and-Recover 1.18 / Git
Using log file: /var/log/rear/rear-hostname.log
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-hostname.iso (74M)
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.NZP1vXar0Vmq5nr/outputfs/hostname/backup.tar.gz'
Archived 14 MiB [avg 3584 KiB/sec]
.
.
.
.

Archived 5644 MiB [avg 8268 KiB/sec]OK
Archived 5644 MiB in 700 seconds [avg 8256 KiB/sec]

All your system files are now backed up to the NFS server; you can confirm this by logging in to the storage box.

nfsserver#pwd
/dr/hostname
nfsserver:/dr/hostname# ls
./                  README              backup.log          rear-hostname.iso
../                 VERSION             backup.tar.gz       rear.log
nfsserver:/dr/hostname#
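To keep the DR copy current, the backup can be refreshed on a schedule; a cron sketch (the weekly timing is illustrative, pick what suits your change windows):

# run a fresh rear backup every Sunday at 01:30
30 1 * * 0  /usr/sbin/rear mkbackup >/var/log/rear/rear-cron.log 2>&1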

When your server is unable to boot, or libraries are corrupted, or anything else goes wrong, you can copy the ISO image from the NFS path, boot from it as a CD-ROM, and recover.

I tested it, and I will share the results with all readers.

I will deliberately corrupt the server: remove binaries, remove boot files, and so on, and then restore everything from the ISO image and the backup at the NFS location.

hostname# rm -rf /boot/*
hostname# ls -l /boot
total 0
hostname# 

hostname# rm -rf /bin/*
hostname# ls
-bash: /bin/ls: No such file or directory
hostname#

When you boot from the ISO image, choose 'Recover hostname' from the boot menu options.


After booting, it drops into the RESCUE shell; check that you can reach your NFS server in order to restore the files.

RESCUE: rear recover



This starts copying all your data from NFS back to the client server. It might take minutes or hours depending on the amount of data. Once it has completed, just reboot the client machine and it will be operational.

hostname: # ls -l /boot/ | wc -l
15
hostname:

Now you have restored your system from the backup.

There's no excuse for not using this tool: it is very easy and simple, and I would ask readers to keep one copy of the ISO image, as it can save a lot of time and effort in the worst case. Plan for the best, but prepare for the worst.

Thanks for all who read this post.

Sunday, 20 March 2016

Xen disk hot addition/removing from guests

Since I had to add a new disk to a guest to extend its swap space, here is how Xen allows you to hot-add (and remove) disks on a guest domU while the system is running.

Let's take a look at how to add disks to the guests:

I will share an image-based disk from the Xen dom0 with the guest; once attached, it can be formatted, mounted, and used just like any other block device. xm block-attach is used to bring it online.

I created a 4 GB image file to back a swap partition for the guest server:
xendom0#dd if=/dev/zero of=testvm-swapdisk.img bs=1M count=4k

xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]
    Domain   - Guest domain to attach the disk to
    BackDev  - Location of the block device (image files take a file: prefix)
    FrontDev - Device name to assign to the new device in the domU
    Mode     - read-only (r) or read/write (w) mode

xendom0# xm block-attach testvm file:$PWD/testvm-swapdisk.img /dev/xvdd w

On Guest :

testvm ~]# lsblk -i | tail -1
xvdd                        202:48   0    4G  0 disk
testvm ~]#fdisk /dev/xvdd 
testvm ~]#mkswap /dev/xvdd1
testvm ~]#swapon /dev/xvdd1
testvm ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       819196  0       -1
/dev/xvdd1                              partition       4192928 0       -2

remove disk from guest:

If you need to remove a disk that is no longer needed by the guest, first turn off swap or unmount it and delete the partitions inside the guest; then, from the Xen dom0, detach the disk:

xendom0#xm block-detach testvm /dev/xvdd

Make sure you also edit the guest's Xen config file to make the change permanent, so that the disk is still available after the next reboot.
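For reference, the persistent entry is one more element in the disk list of the guest config file; a sketch (the image paths are assumptions for illustration):

disk = [ 'file:/xen/images/testvm-root.img,xvda,w',
         'file:/xen/images/testvm-swapdisk.img,xvdd,w' ]

Inside the guest, add the matching swap entry to /etc/fstab so it activates at boot:
/dev/xvdd1   swap   swap   defaults   0 0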

Sunday, 28 February 2016

Centralized Log Management using rsyslog CentOS 6/7

I am creating a centralized log server that can store the logs from all the clients. To do that, make sure you have enough space to store logs for all the clients. I will also configure log rotation to save space on the disk.

Environment - CentOS/Redhat 6.6
rsyslog version - 5.8.10

rsyslog is installed by default; in case it's not there, use yum to install it.
#yum install rsyslog

It is helpful to read the man rsyslog.conf documentation. The configuration has mainly 3 parts:

1. Modules - rsyslog follows a modular design
2. Global directives - set global properties for rsyslog
3. Rules - what is to be logged, and where

The destination log server will hold all the logs (audit, sudo, su, history, kernel, etc.) sent by all the clients to the centralized log server.

Let's configure the server first.

Edit, 
#vim /etc/rsyslog.conf 

#Make sure syslog reception is enabled for both TCP and UDP communication.
$ModLoad imudp
$UDPServerRun 514

$ModLoad imtcp
$InputTCPServerRun 514

# Create a template so that each client's logs are written under its own directory, one file per program, at the destination path below. You can choose which facility.priority selectors are matched and forwarded to the rsyslog daemon for central logging.

$template TmplAuth,"/scratch/remote-sys-logs/%fromhost%/%PROGRAMNAME%.log"
authpriv.*   ?TmplAuth
*.info;mail.none;authpriv.none;cron.none;local6.*  ?TmplAuth

# Since I also need audit.log, I create a new rule to make sure it reaches the same destination folder.

$template TmplAudit,"/scratch/remote-sys-logs/%fromhost%/audit.log"
local6.*        ?TmplAudit

# Log all bash terminal commands and store them in the centralized location.

$template TmplCmds,"/scratch/remote-sys-logs/%fromhost%/hist.log"
local0.debug    ?TmplCmds

Save and quit the file.

# mkdir /scratch/remote-sys-logs
# service rsyslog restart

Since the logs will be big, I rotate them so that two rotated files are kept: the oldest is zipped, while the most recent rotation is left unzipped (compress together with delaycompress below). Logs are kept for a maximum of 60 days, and rotated files are stored with the date as an extension. After each rotation, syslogd is signalled to reopen its log files.

Edit, 
#vim /etc/logrotate.d/remote-sys-logs
/scratch/remote-sys-logs/*/*.log {
    daily
    dateext
    rotate 2
    compress
    delaycompress
    create 644 root root
    notifempty
    missingok
    maxage 60
    sharedscripts
    postrotate
     /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
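You can dry-run these rules before trusting them; the -d flag prints what logrotate would do without touching any files:

# logrotate -d /etc/logrotate.d/remote-sys-logs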

Client: 

Edit,
vim /etc/rsyslog.conf

# The imfile module is not loaded by default; add this entry so rsyslog can convert any standard text file into syslog messages.
$ModLoad imfile

# Enter the module definition below to send the audit log to the centralized log server; without it, audit events would not be written to the central log.


$InputFileName /var/log/audit/audit.log
$InputFileTag audit:
$InputFileStateFile audit.log
$InputFileSeverity info
$InputFileFacility local6
$InputRunFileMonitor

# Forward all the logs to the centralized server (a single @ sends over UDP; use @@ for TCP).

*.*                     @centrallogserver:514

Save and quit, then restart rsyslog:
#service rsyslog restart
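A quick end-to-end test from the client is the logger utility (the message text is arbitrary); the entry should then appear under this client's directory on the log server:

# logger -p authpriv.info "central logging test from $(hostname)"
# ls /scratch/remote-sys-logs/<client>/    ==> run on the server; <client> is this host's name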

In order to log all bash commands to the logger, make an entry in the /etc/bashrc global config file.


#export PROMPT_COMMAND so that every new bash command is logged via logger
export PROMPT_COMMAND='RETRN_VAL=$?;logger -p local0.debug "$(whoami):[$$] $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//" ) [$RETRN_VAL]#"'

Exit and re-open the bash shell (or fork a new one) so that all subsequent commands are logged to the centralized log server.

Hope this helps anyone who would like to create a central log server. Below are references you can check to suit your requirements.

Thanks for studying and re-sharing !

References:
https://en.wikipedia.org/wiki/Syslog - syslog facilities and priorities explained
man logrotate.conf
man rsyslog.conf