Tuesday, 15 July 2014

Storage replication with DRBD

Objective: Storage replication with DRBD

Environment : CentOS 6.5 (32-bit)

DRBD Version : 8.3.16

In this article, I am using DRBD (Distributed Replicated Block Device), a replicated storage solution that mirrors the content of block devices (hard disks) between servers. Not everyone can afford network-attached storage, but the data still needs to be kept in sync; DRBD fills this gap and can be thought of as network-based RAID-1.

DRBD's position within the Linux I/O stack

Below are some of the basic requirements.

- Two disks of the same size, one per node (e.g. /dev/sdb)
- Working network connectivity between the machines (drbd-node1 & drbd-node2)
- Working DNS resolution
- NTP-synchronized time on both nodes

Install DRBD packages :

drbd-node1# yum install -y  drbd83-utils kmod-drbd83
drbd-node2# yum install -y  drbd83-utils kmod-drbd83

Load the DRBD kernel module.

Reboot, or run /sbin/modprobe drbd on both nodes.

Partition the disk:
drbd-node1# fdisk /dev/sdb
drbd-node2# fdisk /dev/sdb

Create the Distributed Replicated Block Device (DRBD) resource file.

Change the values below, in particular the host names and IP addresses, to match your own servers.

drbd-node1# cat /etc/drbd.d/drbdcluster.res 
resource drbdcluster {
 startup {
  wfc-timeout 30;
  outdated-wfc-timeout 20;
  degr-wfc-timeout 30;
 }
 net {
  cram-hmac-alg sha1;
  shared-secret sync_disk;
 }
 syncer {
  rate 10M;
  al-extents 257;
  on-no-data-accessible io-error;
 }
 on drbd-node1 {
  device /dev/drbd0;
  disk /dev/sdb1;
  address 192.168.1.XXX:7788;
  flexible-meta-disk internal;
 }
 on drbd-node2 {
  device /dev/drbd0;
  disk /dev/sdb1;
  address 192.168.1.YYY:7788;
  meta-disk internal;
 }
}

Copy the DRBD configuration to the secondary node (drbd-node2):

drbd-node1# scp /etc/drbd.d/drbdcluster.res root@192.168.1.YYY:/etc/drbd.d/drbdcluster.res

Initialize DRBD on both nodes and start the service (on drbd-node1 & drbd-node2):

ALL# drbdadm create-md drbdcluster
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

ALL# service drbd start
Starting DRBD resources: [ d(drbdcluster) s(drbdcluster) n(drbdcluster) ]........

- Since both disks contain garbage at this point, we have to tell DRBD which node's data set should be used as the primary.

drbd-node1# drbdadm -- --overwrite-data-of-peer primary drbdcluster

- The device will start an initial sync; wait until it completes.

drbd-node1# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:123904 nr:0 dw:0 dr:124568 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:923580
[=>..................] sync'ed: 12.2% (923580/1047484)K
finish: 0:01:29 speed: 10,324 (10,324) K/sec
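While waiting, the progress can be checked programmatically. A small sketch (the helper name and the saved-status file are my own, not part of DRBD) that extracts the completion percentage from /proc/drbd-style output:

```shell
# drbd_sync_pct: hypothetical helper that prints the sync percentage from a
# /proc/drbd-style status file, or nothing once the sync line has disappeared
drbd_sync_pct() {
  sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p" "$1"
}

# demo against a saved copy of the status line; in real use pass /proc/drbd
cat > /tmp/drbd_status.txt <<'EOF'
[=>..................] sync'ed: 12.2% (923580/1047484)K
EOF
drbd_sync_pct /tmp/drbd_status.txt   # prints 12.2
```

A wait loop could poll this until no percentage is printed any more, i.e. until the sync line is gone and cs:SyncSource has given way to cs:Connected.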

Create a file system on the device and populate it with data.

drbd-node1# mkfs.ext4 /dev/drbd0 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 261871 blocks
13093 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

drbd-node1# mount /dev/drbd0 /data
drbd-node1# touch /data/file1

You don't need to mount the disk on the secondary machine. All data you write to the /data folder will be synced to the secondary server.

To verify this, umount /data on drbd-node1, promote the secondary node to primary, and mount /data on drbd-node2; you will see the same contents in /data.

drbd-node1# drbdadm secondary drbdcluster
drbd-node2# drbdadm primary drbdcluster

We have successfully set up storage replication with DRBD.

As a further improvement, now that DRBD is functioning, I would configure a cluster with the file system as a resource. In addition to the Filesystem definition, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary has been promoted).

I will publish an article on this in the near future.

Saturday, 5 July 2014

Identify Open Files #Linux

System administrators very often ask how to identify open files in a Linux environment.

Here we will identify the open files.

Environment : SuSE 11 

For this demonstration we use the /var file system. Below, we see that /var is a file system of its own and is 100% full; let's check whether any processes are holding files open under "/var".

 linux:~ # df -h /var
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvvar 1008M 1008M     0 100% /var
linux:~ # 

linux:~ # fuser -c /var 2>/dev/null
  1570  1585  1603  1691  1694  2626  2628  2663  3127  3142  3232  3257  3258  3299  3300  3301  3328  3349  3350  3351  3352  3353  3354  3486  3517  3518  3521  3525  3526  3528  3531  3540  3541  3549  3560  3563  3596  3599  5611  5614  6883
linux:~ # 

Since we now know that there are opened files under "/var", let's see which particular files are opened.  We can do this by running a "for" loop on the output of 'fuser', using the resulting variable as part of our "/proc" path that we list out with 'ls'.

linux:~ # for i in `fuser -c /var 2>/dev/null` ; do echo "${i}:  `cat /proc/${i}/cmdline`" ; ls -ld /proc/${i}/fd/* | awk '/\/var/ {print "\t"$NF}' ; done
1570:  /sbin/acpid
1585:  /bin/dbus-daemon--system
1603:  /sbin/syslog-ng
3560:  sshd: root@pts/0

3563:  -bash
3596:  sshd: root@pts/1
3599:  -bash
5611:  sshd: root@pts/2
5614:  -bash
6883:  /usr/lib/gdm/gdm-session-worker
cat: /proc/6928/cmdline: No such file or directory
ls: cannot access /proc/6928/fd/*: No such file or directory
linux:~ # 

Of interest: PID 3599 is holding an open file. A subsequent 'ls' on '/proc/3599/fd/*' shows that the file is held open for writing on STDOUT:

linux:/var # ls -ld /proc/3599/fd/* | grep /var
l-wx------ 1 root root 64 Jul  5 10:47 /proc/3599/fd/1 -> /var/tmp/openfile.txt (deleted)
linux:/var # 

linux:/var # ls -l /var/tmp/openfile.txt 
ls: cannot access /var/tmp/openfile.txt: No such file or directory
linux:/var # 

By killing or restarting the offending process, we can reclaim the disk space.
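When the process must keep running, an alternative is to truncate the deleted file through the process's /proc fd entry instead of killing it. The sketch below demonstrates the truncation idiom on a temporary file; on the real system the target would be the fd path found above, e.g. /proc/3599/fd/1:

```shell
# simulate a process-held log file and truncate it in place
f=$(mktemp)
head -c 1048576 /dev/zero > "$f"   # pretend this is 1 MiB of runaway log data
: > "$f"                           # truncate in place; no kill/restart needed
size=$(wc -c < "$f")
echo "$size"                       # prints 0
rm -f "$f"
```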

You can also find open files using 'lsof':

linux:/var # for u in `fuser -c /var 2>/dev/null`;do lsof -p $u | awk '{ if ( $7 > 100000000 ) print $0 }' ; done | grep del
bash    3599 root    1w   REG  253,7 924344320  49159 /var/tmp/openfile.txt (deleted)
linux:/var # 

linux:~ # kill -9 3599
linux:~ #

linux:~ # df -h /var
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvvar 1008M  126M  832M  14% /var
linux:~ # 

The open (deleted) file has been identified and the space reclaimed successfully.

Friday, 27 June 2014

Deployment of Kerberos #Redhat

Objective: Installation and configuration of Kerberos

Environment: Redhat 5.1 32-bit

Package version: 

Kerberos - 1.6
OpenLDAP - 2.3.27

I already explained the mechanism behind Kerberos in my previous article; in this article I will kerberize SSH/TELNET as services.


I have already configured NTP, DNS, and OpenLDAP for authentication. I leave their setup to the reader and will not explain them in this article; the focus here is Kerberos.

Make sure NTP is synchronized across all the servers, as Kerberos tickets carry a TTL and clock skew will break authentication.

Hostname && Services:

1. Server - OpenLDAP and KDC center [ Authentication Server (AS) && Ticket Granting Server (TGS) ]

2. Client1 - an application server, which generally means a machine running Kerberized programs that clients communicate with using Kerberos tickets for authentication.

3. Client2 - the user machine (the user is typically an LDAP user), used for testing Kerberos.

The expected test result is this:

Once the Client2 user logs in to the application server (Client1), it should NOT prompt for a password, because the user will already have obtained a ticket from the KDC (AS+TGS) to authenticate to the SSH service on the application server.


Install the below packages, 

· krb5-libs
· pam_krb5
· krb5-workstation
· krb5-auth-dialog
· xinetd

Config files: 
/etc/krb5.conf - configuration file used by the Kerberos libraries
/var/kerberos/krb5kdc/kdc.conf - configuration file for the KDC
/var/kerberos/krb5kdc/kadm5.acl - ACL definitions for the admin server

Explanation of config file:

· [logging] – sets how the Kerberos components perform their logging. The components that use these parameters are the KDC and the Kerberos admin server, both of which run on our Linux Kerberos server.

· [libdefaults] – contains various default values used by the Kerberos V5 library, such as the default encryption type and whether to use DNS lookups.

· [realms] – lists the realms, where to find their Kerberos servers, and other realm-related information.

· [domain_realm] – the mapping from domain names to Kerberos realms.

· [appdefaults] – contains default values used by Kerberos V5 applications.


[root@server ~]# cat /etc/krb5.conf 
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 forwardable = yes

[realms]
 EXAMPLE.COM = {
  kdc = server.example.com:88
  admin_server = server.example.com:749
  default_domain = example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }
[root@server ~]# 

[root@server ~]#  cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 v4_mode = nopreauth
 kdc_tcp_ports = 88

[realms]
 EXAMPLE.COM = {
  #master_key_type = des3-hmac-sha1
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal des-cbc-crc:v4 des-cbc-crc:afs3
 }
[root@server ~]# 

[root@server krb5kdc]# cat kadm5.acl 

*/admin@EXAMPLE.COM *
[root@server krb5kdc]# 

[root@server ~]# ls /var/kerberos/krb5kdc/

kadm5.acl  kdc.conf
[root@server krb5kdc]#

We need to create the database containing all the principals and their passwords. A utility called kdb5_util is used for low-level maintenance (creation, dumping, loading, and destruction of the KDC database, among other things).

During creation, you will be prompted for the master password. This is the main key that Kerberos uses to encrypt all the principals' keys in its database; without it, Kerberos cannot read the database. For later convenience, this master password can be stored in a stash file, to avoid retyping it each time you restart Kerberos.

[root@server krb5kdc]# kdb5_util create 
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: 
Re-enter KDC database master key to verify: 

[root@server krb5kdc]# ls
kadm5.acl  kdc.conf  principal  principal.kadm5  principal.kadm5.lock  principal.ok
[root@server krb5kdc]# 

Connection to the administration server is done through kadmin. Since we have not created any principals yet, connecting to the administration server is impossible, as the KDC cannot authenticate us. So we use the local counterpart of kadmin, kadmin.local, which accesses the Kerberos administration interface directly without a password, but can only be run as root on the KDC's host.

[root@server krb5kdc]# kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.

First, we list the contents of the database with the listprincs command. You will notice that the database already contains some principals; these are needed for Kerberos to work during ticket negotiations.

kadmin.local:  listprincs

Next I add a user principal 'user1' and an admin principal to the database.

kadmin.local:  addprinc user1
WARNING: no policy specified for user1@EXAMPLE.COM; defaulting to no policy
Enter password for principal "user1@EXAMPLE.COM": 
Re-enter password for principal "user1@EXAMPLE.COM": 
Principal "user1@EXAMPLE.COM" created.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@EXAMPLE.COM; defaulting to no policy
Enter password for principal "root/admin@EXAMPLE.COM": 
Re-enter password for principal "root/admin@EXAMPLE.COM": 
Principal "root/admin@EXAMPLE.COM" created.

kadmin.local:  addprinc -randkey host/client1.EXAMPLE.com
WARNING: no policy specified for host/client1.EXAMPLE.com@EXAMPLE.COM; defaulting to no policy
Principal "host/client1.EXAMPLE.com@EXAMPLE.COM" created.
kadmin.local:  listprincs


Start the kerberos and admin services  

[root@server ~]# service krb5kdc start
[root@server ~]# service kadmin start


On the client (client1):
Install the krb5-workstation package.
Sync the time.
Make sure GSSAPI authentication is enabled in /etc/ssh/sshd_config:

GSSAPIAuthentication yes
GSSAPICleanupCredentials yes

[root@client1 ~]# service sshd restart
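A quick sanity check can confirm both GSSAPI options are really enabled before relying on them. The helper below is my own sketch, demonstrated here on a sample fragment; in practice you would pass /etc/ssh/sshd_config:

```shell
# check_gssapi: print confirmation only if both GSSAPI options are set to yes
check_gssapi() {
  grep -Eq '^[[:space:]]*GSSAPIAuthentication[[:space:]]+yes' "$1" &&
  grep -Eq '^[[:space:]]*GSSAPICleanupCredentials[[:space:]]+yes' "$1" &&
  echo "GSSAPI enabled"
}

cat > /tmp/sshd_sample.conf <<'EOF'
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
EOF
check_gssapi /tmp/sshd_sample.conf   # prints: GSSAPI enabled
```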

Execute authconfig-tui in the client to add the required PAM modules.

[root@client1 ~]# authconfig-tui

On the application server, after making the machine a client of the Kerberos server, run kadmin authenticating as root/admin.

[root@client1 ~]# kadmin -p root/admin
Authenticating as principal root/admin with password.
Password for root/admin@EXAMPLE.COM
kadmin:  listprincs

Extract the host principal into the local keytab file using the kadmin service.

kadmin:  ktadd -k /etc/krb5.keytab host/client1.EXAMPLE.com
Entry for principal host/client1.EXAMPLE.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/client1.EXAMPLE.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/client1.EXAMPLE.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/client1.EXAMPLE.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab.
kadmin:  quit
[root@client1 ~]# 


- The client can now authenticate to the application server without a password, as a service ticket has been granted for communication with the server.

-bash-3.1$ kinit
-bash-3.1$ klist
Ticket cache: FILE:/tmp/krb5cc_505
Default principal: user1@EXAMPLE.COM

Valid starting     Expires            Service principal
06/27/14 09:11:26  06/28/14 09:11:26  krbtgt/EXAMPLE.COM@EXAMPLE.COM
06/27/14 09:11:47  06/28/14 09:11:26  host/client1.EXAMPLE.com@EXAMPLE.COM

Kerberos 4 ticket cache: /tmp/tkt505
klist: You have no tickets cached

-bash-3.1$ ssh user1@client1
Could not create directory '/home/user1/.ssh'.
The authenticity of host 'client1 (' can't be established.
RSA key fingerprint is 19:6b:9c:62:02:be:07:a9:0b:d9:72:86:f1:73:14:59.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/user1/.ssh/known_hosts).
Last login: Fri Jun 27 09:15:19 2014 from client1.EXAMPLE.com
Could not chdir to home directory /home/user1: No such file or directory

-bash-3.1$ id
uid=505(user1) gid=505(user1) groups=505(user1)
-bash-3.1$ hostname

TELNET was also kerberized as a service.

I was able to telnet as user 'user1' without a password.

-bash-3.1$ hostname
-bash-3.1$ telnet -Fxl user1 client1
Connected to client1.EXAMPLE.com (
Escape character is '^]'.
Waiting for encryption to be negotiated...
[ Kerberos V5 accepts you as ``user1@EXAMPLE.COM'' ]
[ Kerberos V5 accepted forwarded credentials ]
Last login: Fri Jun 27 09:40:52 from client2
No directory /home/user1!
Logging in with home = "/".
-bash-3.1$ hostname
-bash-3.1$ klist
Ticket cache: FILE:/tmp/krb5cc_p5086
Default principal: user1@EXAMPLE.COM

Valid starting     Expires            Service principal
06/27/14 09:41:12  06/28/14 09:09:57  krbtgt/EXAMPLE.COM@EXAMPLE.COM

Kerberos was installed and configured successfully.

Wednesday, 25 June 2014

Kerberos mechanism explained

Understanding the mechanism behind kerberos.

How does kerberos work ?

This part of the article will explain the mechanisms behind Kerberos: 
Ticket exchange principles
Key Distribution Center(KDC)
Authentication mechanisms.

A commonly found description of Kerberos is "a secure, single sign-on, trusted third-party
mutual authentication service". It does not store any information about UIDs, GIDs, or home
directory paths. In order to propagate that information to hosts, you will still need a directory
service: NIS, LDAP, or Samba.

As Kerberos deals only with authentication, it does neither authorization nor accounting; it delegates those to the services requesting its help for user identification. That said, Kerberos being a "service" by itself, it can partially provide such functionality, but in a very limited range.

Ticket Exchange Service

Kerberos' communication is designed to provide a distributed secure authentication service, through secret key cryptography.

For a user, the secret key is his "hashed password" (the password is reworked through a
one-way hash function and the resulting string is used as a key), usually stored in the Key
Distribution Center. For a service, the key is a randomly generated sequence, acting like a
password; it is also stored in the Key Distribution Center, and in a file called a keytab on the
service machine's side.
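To illustrate the one-way-hash idea (illustration only: real Kerberos derives keys with a salted string-to-key function per encryption type, not plain SHA-256):

```shell
# the same password always yields the same fixed-length key, but the key
# cannot be reversed back into the password
key=$(printf '%s' 'MyPassw0rd' | sha256sum | awk '{print $1}')
echo "${#key}"   # prints 64 (a 256-bit key written as hex digits)
```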

The Kerberos communication is based around tickets. Tickets are a kind of encrypted data
scheme that is transmitted over the network, and stored on the client's side. The type of storage
depends on the client's operating system and configuration. Traditionally, it's stored as a small
file in /tmp, for compatibility reasons

The central part of a Kerberos network is the Key Distribution Center (KDC). It
consists of three parts:

• an Authentication Server, which answers requests for Authentication issued by clients.
Here, we're in the AS_REQUEST and AS_REPLY challenging part (see below for details),
where the client gets a Ticket Granting Ticket (TGT).

• a Ticket Granting Server, which issues service tickets (TGS) to a client. This is
the TGS_REQUEST and TGS_REPLY part, where a client gets a TGS that allows it to
authenticate to a service accessible on the network.

• a database, that stores all the secret keys (clients' and services' ones), as well as some
information relating to Kerberos accounts (creation date, policies, ...).

Authentication mechanism—Ticket Granting Tickets  

In summary, the authentication mechanism is the AS_REQUEST & AS_REPLY exchange: the client requests authentication from the AS and receives a TGT in reply.

Service's use mechanism—Ticket Granting Service

In summary, the service-use mechanism is the TGS_REQUEST & TGS_REPLY exchange: the client presents its TGT to the TGS and receives a service ticket in reply.


We can divide the Kerberos protocol into three main steps:

1. Authentication, where the user (and host) obtains a Ticket Granting Ticket (TGT)
as an authentication token,

2. Service request, where the user obtains a service ticket (TGS) to access
a service,

3. Service access, where the user (and host) uses the TGS to authenticate to and access a specific
service.

The service access step is not really Kerberos related, but merely depends on the service we are
authenticating to.


I will write a further article demonstrating how to deploy Kerberos on Linux.

Saturday, 26 April 2014

automount configuration from NFS

Objective:  Execute shared scripts from NFS server to local server, which is configured via auto-mounts.

Environment: CentOS 6.3 32-bit

Package version:  autofs-5.0

Why do we need to use an autofs ?

One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components. One is a kernel module that implements a file system, while the other is a user-space daemon that performs all of the other functions. The automount utility can mount and unmount NFS file systems automatically (on demand mounting) therefore saving system resources.

On RPM-based systems, autofs is not installed by default; I will assume you know how to install the 'autofs' package using your package manager. 
All my scripts are placed on a central NFS server and shared with my local client, which is configured via automounts in order to save system resources.

NFS config:

The directory containing the scripts is shared in /etc/exports and access is granted to the client servers. Once your configuration is complete, make sure to start the NFS services.

#vi /etc/exports
/scripts    <IP address of the client>(ro,sync)

You can get the system information script from getsysinfo.sh. The same file will be shared to all the clients.

autofs config:

The primary configuration file for the automounter is /etc/auto.master, also referred to as the master map. The master map lists the autofs-controlled mount points on the system and their corresponding configuration files or network sources, known as automount maps.

# cat /etc/auto.master
/autofs         /etc/auto.fs    --timeout=3
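Each master map line has three whitespace-separated fields: the mount point, the map file, and optional mount options. A tiny sketch pulling the sample line above apart (the variable names are my own):

```shell
# split an auto.master entry into its fields using default IFS word splitting
line='/autofs         /etc/auto.fs    --timeout=3'
set -- $line
fields="mountpoint=$1 map=$2 options=$3"
echo "$fields"   # prints: mountpoint=/autofs map=/etc/auto.fs options=--timeout=3
```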

I use a short timeout value (3 seconds), small enough that an ordinary user will not perceive it as a delay or get nervous waiting.

# tail -2 /etc/auto.fs
scripts -rw,soft,intr,rsize=8192,wsize=8192     nfs.domain.com:/scripts

Save the file and make sure you start the service.
#service autofs start

Now you can traverse to the directory, which is mounted when in use and unmounted when not in use.

#df -h /autofs/scripts
Filesystem            Size  Used Avail Use% Mounted on
nfs.domain.com:/scripts  8.7G  4.1G  4.3G  49% /autofs/scripts

# ls -l  /autofs/scripts
total 8
-rwxr-xr-x 1 root root 7391 Sep  1  2013 getsysinfo.sh

You can also use automounts for local file systems. I will conclude this article here; automounts can also be configured for NIS and CIFS, which I may explain in coming articles.