Sunday, 27 July 2014

autoyast configuration for PXE boot #OpenSuSE #SLES11

Objective: autoyast configuration and boot via PXE

Environment: OpenSuSE 11/SuSE 11

Following up on my earlier post on PXE boot for SLES ( click here ), this article continues with how autoyast can be configured and combined with the PXE environment.

Yast ->Miscellaneous -> Autoinstallation    

In the above Groups and their corresponding Modules, clone the Modules so that the current system's settings are copied to the destination host.
One such example is shown below,

Pic -1:                                                                                                                                                      
 Autoinstallation - Configuration
 │Hardware                          ││Add-On Products               
 │High Availability                 ││Image deployment              
 │Miscellaneous                     ││Online Update Configuration   
 │Network Services                  ││Package Selection             
 │Network Devices                   ││                              
 │Security and Users                ││                              
 │Software                          ││                              
 │Support                           ││                              
 │System                            ││                              
 │Virtualization                    ││                              
 │                                  ││                              

Pic -2:

│Selected Patterns                                                  │
│                                                                   │
│ *  Minimal                                                        │
│ *  WBEM                                                           │
│ *  apparmor                                                       │
│ *  base                                                           │
│ *  dhcp_dns_server                                                │
│ *  documentation                                                  │
│ *  file_server                                                    │
│ *  gnome                                                          │
│ *  lamp_server                                                    │
│ *  print_server                                                   │
│ *  x11                                                            │
│                                                                   │
│Individually Selected Packages                                     │
│                                                                   │
│149                                                                │
│                                                                   │
│Packages to Remove                                                 │
│                                                                   │
│20                                                                 │
│                                                                   │
│                                                                   │
      [Clone]                                                [Edit]
 [Apply to system]                                           [Clear]

Once the package cloning and all the other Groups are completed, save the profile (an XML file), which by default resides in the directory /var/lib/autoinstall/repository.

Note: During the User and Group Management selection, make sure you de-select 'gdm', as it is created during the installation and can therefore be omitted. If you do clone that user and group, you may receive the error below.

Error: Could not update ICEauthority file /var/lib/gdm/.ICEauthority

# ls -l /var/lib/autoinstall/repository/*.xml
-rw-r--r-- 1 root root 47703 Jul 25 13:55 /var/lib/autoinstall/repository/autoyast_pxe.xml

- Make a directory inside Apache's default DocumentRoot, /srv/www/htdocs/

# mkdir /srv/www/htdocs/autoyast

- Copy the default XML file from /var/lib/autoinstall/repository/autoyast_pxe.xml to /srv/www/htdocs/autoyast
- Make sure your PXE clients can find the autoyast profile when they boot from the TFTP server: append the autoyast= parameter to the APPEND line in the PXE default config, as below.

APPEND initrd=sles/11/x86_64/initrd splash=silent showopts install= autoyast=
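For illustration, a complete entry might look like the following sketch; the server IP and paths are assumptions for your environment (install= points at the HTTP installation repository, autoyast= at the profile under /srv/www/htdocs/autoyast):

```
LABEL sles11sp3-auto
KERNEL sles/11/x86_64/linux
APPEND initrd=sles/11/x86_64/initrd splash=silent showopts install=http://192.168.1.10/sles/11/x86_64 autoyast=http://192.168.1.10/autoyast/autoyast_pxe.xml
```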

The autoyast profile is downloaded from the location given in the autoyast= parameter at install time.

With this in place, your installations run successfully and are fully automated.

Friday, 25 July 2014

PXE Installation on SLES 11

Objective: PXE installation for autoyast

In an effort to help automate OS installation, I had set up a Preboot Execution Environment (PXE) server.

"The Preboot eXecution Environment (PXE, also known as Pre-Execution Environment, or 'pixie') is an environment to boot computers using a network interface independently of available data storage devices (like hard disks) or installed operating systems."

Environment: SLES 11

I have already discussed how PXE works in my earlier posts, where I set up a PXE environment for kick-starting the Redhat/CentOS flavors. If the reader is interested in how PXE is configured on Redhat/CentOS - click here

Change Plan:

1. Create an ISO from the DVD installation media.
2. Mount the ISO permanently (via /etc/fstab) on a directory served by the web server, instead of extracting the images; this is more efficient in storage utilization.
3. Add a software repository pointing at the web server/ISO image you created.
4. Install the TFTP, DHCP, Apache, and Syslinux packages if they are not installed by default.
5. Modify the TFTP and DHCP configurations so that IP addresses are leased according to the environment in which you are building your enterprise servers.
6. Power on the destination host and boot from the LAN: the NIC sends a request to the DHCP server, which in turn provides network information (IP, subnet mask, gateway, etc.) along with the TFTP location from which to fetch the boot image.

I assume the reader knows how to create an ISO image and mount it permanently, so I will skip those steps along with the package installations.
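As a sketch of step 2 above, an /etc/fstab entry like the following (the ISO path and name are assumptions) loop-mounts the image under the Apache document root; running `mount -a` afterwards activates it:

```
/iso/sles11sp3.iso  /srv/www/htdocs/sles/11/x86_64  iso9660  loop,ro  0 0
```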

I will provide the configuration details along with screenshots, which could be helpful in case you are configuring from "YAST".

Executions :

I mounted my ISO image on /srv/www/htdocs/sles/11/x86_64 and added it to my repositories as shown below,

Repository Additions :

(Yast -> Software Repositories -> Add -> HTTP -> Server and Directory )

 Repository Name
                           (x) Edit Parts of the URL  
  │            ( ) FTP            (x) HTTP            
 Server Name                                   
 Directory on Server
  │[x] Anonymous                                      
  │User Name                                          

TFTP Enable/Configurations :

Install/enable TFTP and make a boot image directory(/tftpboot), as below :

(Yast -> Network Services -> TFTP Server )

  ( ) Disable
  (x) Enable

  Boot Image Directory

  /tftpboot                  [Browse...]

  [ ] Open Port in Firewall  [Firewall Details...]
  Firewall is disabled

                     [View Log]
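Behind that YaST screen, TFTP typically runs under xinetd; the resulting /etc/xinetd.d/tftp looks roughly like this sketch (the in.tftpd path may differ on your system):

```
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}
```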

DHCP configurations :

Once DHCP is installed on the server, use DHCP server wizard.

Pic 1 :

Domain Name

Primary Name server IP

                                            [ Next ]

Pic 2 :

IP Address Range
First IP Address           Last IP Address   

                                             [ Next ]

Pic 3 :

Service start
[X] When Booting
[ ] Manually

Pic 4: 

Global Options                        

    │Option                               │Value                    
    │ddns-update-style                    │none                     
    │ddns-updates                         │Off                      
    │authoritative                        │On                       
    │log-facility                         │local7                   
    │default-lease-time                   │14400                    
    │option domain-name                   │""            
    │option domain-name-servers           │           
Pic 5 :

Subnet Configuration                                        

    Network Address                                 Network Mask                             

    │Option          │Value                                                      
    │range           │                 
    │next-server     │                                        
    │filename        │"pxelinux.0"                                    
    │option routers  │              

                                                    Click OK and then finish
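The wizard ultimately writes /etc/dhcpd.conf; a minimal sketch with placeholder addresses (adjust to your network; next-server must point at the TFTP host) looks like:

```
ddns-update-style none;
authoritative;
default-lease-time 14400;

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.150;
  next-server 192.168.1.10;      # TFTP server
  filename "pxelinux.0";
  option routers 192.168.1.1;
}
```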

- Creating a directory structure for TFTP server

mkdir -p /tftpboot/pxelinux.cfg
mkdir -p /tftpboot/sles/11/x86_64

- Copy necessary files for boot to the TFTP server directory structure:

# cd /srv/www/htdocs/sles/11/x86_64/boot/x86_64/loader/
# cp linux initrd message biostest memtest /tftpboot/sles/11/x86_64/
# cp /usr/share/syslinux/pxelinux.0 /tftpboot/
# cp /usr/share/syslinux/menu.c32 /tftpboot/

- Create a default menu as below :

#  cat /tftpboot/pxelinux.cfg/default 
default menu.c32
prompt 0
timeout 100

LABEL sles11sp3
KERNEL sles/11/x86_64/linux
APPEND initrd=sles/11/x86_64/initrd splash=silent showopts install= ramdisk_size=65536 

- Below would be the skeleton for our configured TFTP server.

# ls -lar /tftpboot/*

-rw-r--r-- 1 root root 16462 Jul 24 18:14 /tftpboot/pxelinux.0
-rw-r--r-- 1 root root 57140 Jul 24 18:14 /tftpboot/menu.c32

total 12
drwxr-xr-x 3 root root 4096 Jul 24 18:23 11
drwxr-xr-x 4 root root 4096 Jul 24 18:23 ..
drwxr-xr-x 3 root root 4096 Jul 24 18:23 .

total 12
-rw-r--r-- 1 root root  669 Jul 25 11:07 default
drwxr-xr-x 4 root root 4096 Jul 24 18:23 ..
drwxr-xr-x 2 root root 4096 Jul 25 11:07 .

- While the BIOS is booting, press F12 and select LAN boot to continue booting from the network.

Hence we can conclude that the PXE installation is successful. In a further post I will add an autoyast profile to this PXE setup, which will fully automate SLES installation.
Thank you for reading and re-sharing.

Tuesday, 15 July 2014

Storage replication with DRBD

Objective: Storage replication with DRBD

Environment : CentOS 6.5 (32-bit)

DRBD Version : 8.3.16

In this article I am using DRBD (Distributed Replicated Block Device), a replicated storage solution that mirrors the content of block devices (hard disks) between servers. Not everyone can afford network-attached storage, yet the data still needs to be kept in sync; DRBD can be thought of as network-based RAID-1.

DRBD's position within the Linux I/O stack

Below are some of the basic requirements.

- Two disks (preferably the same size, /dev/sdb on both nodes)
- Networking between the machines (drbd-node1 & drbd-node2)
- Working DNS resolution
- NTP-synchronized time on both nodes
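If full DNS is not available, static /etc/hosts entries on both nodes are enough for name resolution (the IPs here are placeholders for your own addresses):

```
192.168.1.101   drbd-node1
192.168.1.102   drbd-node2
```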

Install DRBD packages :

drbd-node1# yum install -y  drbd83-utils kmod-drbd83
drbd-node2# yum install -y  drbd83-utils kmod-drbd83

Load the DRBD module, either by rebooting or with /sbin/modprobe drbd.

Partition the disk:
drbd-node1# fdisk /dev/sdb
drbd-node2# fdisk /dev/sdb

Create the Distributed Replicated Block Device resource file

Readers should adjust the values below (host names and addresses) to match their own servers.

drbd-node1# cat /etc/drbd.d/drbdcluster.res 
resource drbdcluster {
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on drbd-node1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.XXX:7788;
    flexible-meta-disk internal;
  }
  on drbd-node2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.YYY:7788;
    meta-disk internal;
  }
}

Copy the DRBD configuration to the secondary node (drbd-node2)

drbd-node1# scp /etc/drbd.d/drbdcluster.res root@192.168.1.YYY:/etc/drbd.d/drbdcluster.res

Initialize DRBD on both the nodes and start their services(drbd-node1 & drbd-node2)

ALL# drbdadm create-md drbdcluster
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

ALL# service drbd start
Starting DRBD resources: [ d(drbdcluster) s(drbdcluster) n(drbdcluster) ]........

- Since both disks contain garbage at this point, we need to tell DRBD which node's data should be used as the primary copy.

drbd-node1# drbdadm -- --overwrite-data-of-peer primary drbdcluster

- The device starts an initial sync; wait until it completes.

drbd-node1# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:123904 nr:0 dw:0 dr:124568 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:923580
[=>..................] sync'ed: 12.2% (923580/1047484)K
finish: 0:01:29 speed: 10,324 (10,324) K/sec

Create a file system on the device and populate it with data.

drbd-node1# mkfs.ext4 /dev/drbd0 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 261871 blocks
13093 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

drbd-node1# mount /dev/drbd0 /data
drbd-node1# touch /data/file1

You do not need to mount the disk on the secondary machine; everything you write under /data is synced to the secondary server.

To verify this, unmount /data on drbd-node1, demote it to secondary, promote drbd-node2 to primary, and mount /dev/drbd0 on /data there; you will see the same contents of /data.

drbd-node1# umount /data
drbd-node1# drbdadm secondary drbdcluster
drbd-node2# drbdadm primary drbdcluster
drbd-node2# mount /dev/drbd0 /data

We have successfully set up storage replication with DRBD.

As a further improvement, now that DRBD is functioning, I would configure a cluster with the file system as a resource. In addition to the Filesystem definition, we also need to tell the cluster where the resource may run (only on the DRBD Primary) and when it is allowed to start (after the Primary has been promoted).

I would publish an article in near future for the same.

Saturday, 5 July 2014

Identify Open Files #Linux

System administrators often ask how to identify open files in a Linux environment.

Here we will identify the open files.

Environment : SuSE 11 

For this demonstration we use the /var file system. Below we see that /var is a separate file system and is 100% full; let us find out whether any processes are holding open files under "/var".

 linux:~ # df -h /var
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvvar 1008M 1008M     0 100% /var
linux:~ # 

linux:~ # fuser -c /var 2>/dev/null
  1570  1585  1603  1691  1694  2626  2628  2663  3127  3142  3232  3257  3258  3299  3300  3301  3328  3349  3350  3351  3352  3353  3354  3486  3517  3518  3521  3525  3526  3528  3531  3540  3541  3549  3560  3563  3596  3599  5611  5614  6883
linux:~ # 

Since we now know that there are open files under "/var", let's see which particular files are open. We can do this by running a "for" loop over the output of 'fuser', using each PID as part of a "/proc" path that we list with 'ls'.

linux:~ # for i in `fuser -c /var 2>/dev/null` ; do echo "${i}:  `cat /proc/${i}/cmdline`" ; ls -ld /proc/${i}/fd/* | awk '/\/var/ {print "\t"$NF}' ; done
1570:  /sbin/acpid
1585:  /bin/dbus-daemon--system
1603:  /sbin/syslog-ng
3560:  sshd: root@pts/0

3563:  -bash
3596:  sshd: root@pts/1
3599:  -bash
5611:  sshd: root@pts/2
5614:  -bash
6883:  /usr/lib/gdm/gdm-session-worker
cat: /proc/6928/cmdline: No such file or directory
ls: cannot access /proc/6928/fd/*: No such file or directory
linux:~ # 

Of interest, PID 3599 is holding an open file: the subsequent 'ls' on '/proc/3599/fd/*' shows that its file descriptor 1 (STDOUT) points to a deleted file under /var.

linux:/var # ls -ld /proc/3599/fd/* | grep /var
l-wx------ 1 root root 64 Jul  5 10:47 /proc/3599/fd/1 -> /var/tmp/openfile.txt (deleted)
linux:/var # 

linux:/var # ls -l /var/tmp/openfile.txt 
ls: cannot access /var/tmp/openfile.txt: No such file or directory
linux:/var # 

By killing or restarting the process that holds the file, we can reclaim the disk space.
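This deleted-but-open behaviour is easy to reproduce with plain shell. The sketch below holds a temporary file open on descriptor 3, deletes it, and shows that /proc still references it until the descriptor is closed:

```shell
tmp=$(mktemp)
exec 3>"$tmp"            # hold the file open on fd 3
echo "still open" >&3
rm "$tmp"                # directory entry removed, space not freed yet
ls -l /proc/$$/fd/3      # the symlink target is shown as "... (deleted)"
exec 3>&-                # closing the fd finally releases the space
```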

You can also find open files using 'lsof'.

linux:/var # for u in `fuser -c /var 2>/dev/null`;do lsof -p $u | awk '{ if ( $7 > 100000000 ) print $0 }' ; done | grep del
bash    3599 root    1w   REG  253,7 924344320  49159 /var/tmp/openfile.txt (deleted)
linux:/var # 

linux:~ # kill -9 3599
linux:~ #

linux:~ # df -h /var
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvvar 1008M  126M  832M  14% /var
linux:~ # 

The open file was identified and the space reclaimed successfully.

Friday, 27 June 2014

Deployment of Kerberos #Redhat

Objective: Installation and configuration of Kerberos

Environment: Redhat 5.1 32-bit

Package version: 

Kerberos - 1.6
OpenLDAP - 2.3.27

I explained the mechanism behind Kerberos in my previous article; in this article I will kerberize SSH/TELNET as services.


I had already configured NTP, DNS, and OpenLDAP for authentication; I leave those to the reader and will not explain them in this article, focusing instead on Kerberos.

Make sure NTP is synchronized on all the servers, as Kerberos ticket lifetimes depend on the clocks being in agreement.
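A quick way to compare clocks across hosts is to print epoch seconds on each one; Kerberos rejects requests when the skew exceeds the allowed limit (300 seconds by default):

```shell
# Run on every host and compare the values; they should agree within seconds.
date -u +%s
```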

Hostname && Services:

1. Server - OpenLDAP and the KDC [ Authentication Server (AS) && Ticket Granting Server (TGS) ]

2. Client1 - the application server, i.e. the host running kerberized programs that clients communicate with using Kerberos tickets for authentication.

3. Client2 - the machine from which a user (typically an LDAP user) logs in; it is used for testing Kerberos.

The expected test result:

Once the Client2 user logs in to the application server (Client1), it should NOT prompt for a password, because the user will already have obtained a ticket from the KDC (AS+TGS) to authenticate to the SSH service on the application server.


Install the below packages, 

· krb5-libs
· pam_krb5
· krb5-workstation
· krb5-auth-dialog
· xinetd

Config files: 
/etc/krb5.conf - file used by the Kerberos libraries
/var/kerberos/krb5kdc/kdc.conf - configuration file for the KDC
/var/kerberos/krb5kdc/kadm5.acl - file defining the ACLs for kadmind

Explanation of config file:

· [logging] – sets how the Kerberos components perform their logging; the components that use the logging parameters are the KDC and the Kerberos admin server, both of which run on our Kerberos server.

· [libdefaults] - Contains various default values used by the Kerberos V5 library. Values like default encryption type and if to use dns lookups or not.

· [realms] – list of realms, where to find their Kerberos servers, and other realm-related information.

· [domain_realm] – the mapping from domain names to Kerberos realms.

· [appdefaults] – Contains default values that can be used by Kerberos V5 applications.


[root@server ~]# cat /etc/krb5.conf 
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 forwardable = yes

[realms]
 EXAMPLE.COM = {
  kdc =
  admin_server =
  default_domain =
 }

[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }
[root@server ~]# 

[root@server ~]#  cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 v4_mode = nopreauth
 kdc_tcp_ports = 88

[realms]
 EXAMPLE.COM = {
  #master_key_type = des3-hmac-sha1
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal des-cbc-crc:v4 des-cbc-crc:afs3
 }
[root@server ~]# 

[root@server krb5kdc]# cat kadm5.acl 

*/admin@EXAMPLE.COM *
[root@server krb5kdc]# 

[root@server ~]# ls /var/kerberos/krb5kdc/

kadm5.acl  kdc.conf
[root@server krb5kdc]#

Next, we create the database containing all the principals and their keys. The kdb5_util utility is used for low-level maintenance of the KDC database (creation, dumping, loading, destruction, etc.).

During creation you will be prompted for the master password. This is the main key Kerberos uses to encrypt all the principals' keys in its database; without it, Kerberos cannot read the database. For convenience, this master key can be stored in a stash file (the -s option of kdb5_util create) so you do not have to retype it each time you restart Kerberos.

[root@server krb5kdc]# kdb5_util create 
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: 
Re-enter KDC database master key to verify: 

[root@server krb5kdc]# ls
kadm5.acl  kdc.conf  principal  principal.kadm5  principal.kadm5.lock  principal.ok
[root@server krb5kdc]# 

Connection to the administration server is done through kadmin. Since we have not created any principal yet, connecting to the administration server is impossible, as the KDC cannot authenticate us. So we use the local counterpart of kadmin, kadmin.local, which accesses the Kerberos administration interface directly without a password but can only be run as root on the KDC host.

[root@server krb5kdc]# kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.

First, we list the contents of the database with the listprincs command. You will notice that the database already contains some principals; Kerberos needs them during ticket negotiations.

kadmin.local:  listprincs

Next, I add a user principal 'user1' and an admin principal to the database.

kadmin.local:  addprinc user1
WARNING: no policy specified for user1@EXAMPLE.COM; defaulting to no policy
Enter password for principal "user1@EXAMPLE.COM": 
Re-enter password for principal "user1@EXAMPLE.COM": 
Principal "user1@EXAMPLE.COM" created.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@EXAMPLE.COM; defaulting to no policy
Enter password for principal "root/admin@EXAMPLE.COM": 
Re-enter password for principal "root/admin@EXAMPLE.COM": 
Principal "root/admin@EXAMPLE.COM" created.

kadmin.local:  addprinc -randkey host/
WARNING: no policy specified for host/; defaulting to no policy
Principal "host/" created.
kadmin.local:  listprincs


Start the kerberos and admin services  

[root@server ~]# service krb5kdc start
[root@server ~]# service kadmin start


Install the package krb5-workstation.
Sync the time.
Make sure GSSAPI authentication is enabled in /etc/ssh/sshd_config:

GSSAPIAuthentication yes
GSSAPICleanupCredentials yes

[root@client1 ~]# service sshd restart

Execute authconfig-tui on the client to add the required PAM modules.

[root@client1 ~]# authconfig-tui

On the application server, run kadmin authenticating as root/admin, after making the machine a client of the Kerberos server.

[root@client1 ~]# kadmin -p root/admin
Authenticating as principal root/admin with password.
Password for root/admin@EXAMPLE.COM
kadmin:  listprincs

Extract the host principal into the local keytab file using the kadmin service.

kadmin:  ktadd -k /etc/krb5.keytab host/
Entry for principal host/ with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/ with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/ with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/ with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab.
kadmin:  quit
[root@client1 ~]# 


- The client can now authenticate to the application server without a password, as a service ticket is granted for communication with the server.

-bash-3.1$ kinit
-bash-3.1$ klist
Ticket cache: FILE:/tmp/krb5cc_505
Default principal: user1@EXAMPLE.COM

Valid starting     Expires            Service principal
06/27/14 09:11:26  06/28/14 09:11:26  krbtgt/EXAMPLE.COM@EXAMPLE.COM
06/27/14 09:11:47  06/28/14 09:11:26  host/

Kerberos 4 ticket cache: /tmp/tkt505
klist: You have no tickets cached

-bash-3.1$ ssh user1@client1
Could not create directory '/home/user1/.ssh'.
The authenticity of host 'client1 (' can't be established.
RSA key fingerprint is 19:6b:9c:62:02:be:07:a9:0b:d9:72:86:f1:73:14:59.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/user1/.ssh/known_hosts).
Last login: Fri Jun 27 09:15:19 2014 from
Could not chdir to home directory /home/user1: No such file or directory

-bash-3.1$ id
uid=505(user1) gid=505(user1) groups=505(user1)
-bash-3.1$ hostname

TELNET was also kerberized as a service for authentication.

I was able to log in with telnet as user 'user1' without a password.

-bash-3.1$ hostname
-bash-3.1$ telnet -Fxl user1 client1
Connected to (
Escape character is '^]'.
Waiting for encryption to be negotiated...
[ Kerberos V5 accepts you as ``user1@EXAMPLE.COM'' ]
[ Kerberos V5 accepted forwarded credentials ]
Last login: Fri Jun 27 09:40:52 from client2
No directory /home/user1!
Logging in with home = "/".
-bash-3.1$ hostname
-bash-3.1$ klist
Ticket cache: FILE:/tmp/krb5cc_p5086
Default principal: user1@EXAMPLE.COM

Valid starting     Expires            Service principal
06/27/14 09:41:12  06/28/14 09:09:57  krbtgt/EXAMPLE.COM@EXAMPLE.COM

Kerberos was installed and configured successfully.