Thursday, July 21, 2016

XFS Filesystem

I was studying the XFS filesystem and came across some good, easily understandable docs. Below are the extracts.


The XFS file system was developed as a journaling file system that uses balanced B-tree algorithms to allocate data as fast as possible. One of the major design goals was support for large files and large file systems. The maximum file size currently supported is 2 exabytes, and the maximum file system size is 8 exabytes.

The direct I/O option guarantees that a file is not buffered in the buffer cache but is written to disk immediately after it has been committed. XFS also offers guaranteed-rate I/O, which guarantees a minimum I/O bandwidth to certain file systems.

Features of XFS Filesystem


1. Journaling

Journaling is a capability that ensures the consistency of data in the file system despite any power outages or system crashes that may occur. XFS provides journaling for file system metadata: file system updates are first written to a serial journal before the actual disk blocks are updated.

2. Allocation Groups

XFS file systems are internally partitioned into allocation groups, which are equally sized linear regions within the file system. Files and directories can span allocation groups. Each allocation group manages its own inodes and free space separately,
providing scalability and parallelism so multiple threads and processes can perform I/O operations on the same file system simultaneously.
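The allocation-group count of an existing filesystem is reported by xfs_info. As a sketch (the sample output line below is an assumption for illustration; on a real system pipe the actual output of `xfs_info <mountpoint>` instead of echoing a canned line):

```shell
# Parse the allocation-group count (agcount) from an xfs_info-style line.
# SAMPLE is a canned example line, not output from a real filesystem.
SAMPLE="meta-data=/dev/sda1 isize=512 agcount=4, agsize=655360 blks"
AGCOUNT=$(echo "$SAMPLE" | grep -o 'agcount=[0-9]*' | cut -d= -f2)
echo "$AGCOUNT"
```

More allocation groups generally mean more opportunity for parallelism on allocation-heavy workloads.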

3. Striped allocation

If an XFS file system is to be created on a striped RAID array, a stripe unit can be specified when the file system is created. This maximizes throughput by ensuring that data allocations, inode allocations, and the internal log (the journal) are aligned with the stripe unit.
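As a sketch of how the stripe geometry maps to mkfs.xfs options — the RAID layout here (4-disk RAID5 with a 64 KiB chunk size) and the device name are assumptions for illustration; take the real values from your array:

```shell
# Derive mkfs.xfs stripe-alignment options from an assumed RAID layout.
CHUNK_KB=64     # per-disk stripe unit (chunk size) in KiB, from the RAID config
DATA_DISKS=3    # a 4-disk RAID5 has 3 data-bearing disks
CMD="mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/mapper/mpathdi"
echo "$CMD"     # printed only, not executed here
```

The `su` option is the stripe unit and `sw` the number of data disks; mkfs.xfs uses them to align allocations with the array.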

4. Variable block sizes

When many small files are expected, a small block size would typically maximize capacity, but for a system dealing mainly with large files, a larger block size can provide a performance advantage.

5. Delayed allocation

When a file is written to the buffer cache, rather than allocating extents for the data, XFS simply reserves the appropriate number of file system blocks for the data held in memory. The actual block allocation occurs only when the data is finally flushed to disk.
This improves the chance that the file will be written in a contiguous group of blocks, reducing fragmentation problems and increasing performance.

6. Direct I/O

For applications requiring high throughput to disk, XFS provides a direct I/O implementation that applies non-cached I/O operations directly to userspace buffers. Data is transferred between the application's buffer and the disk using DMA, which allows access to the full I/O bandwidth of the underlying disk devices.



[Figures: time spent removing large files; comparison of block device, XFS, ext4, and ext3 when writing a large file]


How to create new FS/VG/LV in suse linux


How to create new FS/VG/LV in suse linux
=========================================

Pre-requisites:

1. Take the output of the df -h, fdisk -l, multipath -ll, vgs, pvs, and lvs commands, and take a backup of /etc/fstab.


2. Once the storage team allocates the LUNs, note down the LUN ID provided by the storage team.
   Suppose LUN-ID = AB0004lm0000000008n00876d00005e9e

   Run the below command to detect it at the server level:

#rescan-scsi-bus.sh

3. Validate that the new LUN has been detected:
    #ls -ltr /dev/mapper/    - check the latest (last) entry to cross-check
    #multipath -ll | grep "AB0004lm0000000008n00876d00005e9e"
    #ls -l /dev/disk/by-id/ | grep -i "AB0004lm0000000008n00876d00005e9e"
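The grep checks above can be wrapped into a small sketch. The canned line below only mimics `multipath -ll` output (the device name and vendor string are made up); on a real server pipe the actual command output instead:

```shell
# Count how many multipath lines mention the LUN ID; 1 or more means visible.
LUN_ID="AB0004lm0000000008n00876d00005e9e"
MATCHES=$(printf 'mpathdi (%s) dm-7 VENDOR,MODEL\n' "$LUN_ID" | grep -ic "$LUN_ID")
echo "$MATCHES"
```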

4. Once you are able to find the new LUN, note down its logical name (here /dev/mapper/mpathdi).



5. Create the PV:

 # pvcreate /dev/mapper/mpathdi

6. Create the volume group:
 #vgcreate  abhidata3vg /dev/mapper/mpathdi

7. Once done, create the LV:
#lvcreate -L 20G -n lvabhidata3 abhidata3vg

8. Create the XFS filesystem:
   #mkfs.xfs /dev/mapper/abhidata3vg-lvabhidata3

9. Create the directory on which you want to mount the new FS (e.g. /oracle/SQ7/sapdata3; here we use /aks) and mount the filesystem:


#mkdir /aks
#mount /dev/abhidata3vg/lvabhidata3 /aks

10. To make the changes permanent, add an entry for this filesystem in /etc/fstab:

  /dev/abhidata3vg/lvabhidata3    /aks    xfs    rw,noatime,nodiratime,barrier=0    0 0

  The above mount options are the ones we normally prefer for XFS filesystems.
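Before rebooting, a quick sanity check on the new fstab line helps: every fstab entry must have exactly six whitespace-separated fields (device, mount point, type, options, dump, pass). A sketch using the entry from step 10:

```shell
# Verify the fstab entry has the six fields fstab expects.
ENTRY="/dev/abhidata3vg/lvabhidata3 /aks xfs rw,noatime,nodiratime,barrier=0 0 0"
FIELDS=$(echo "$ENTRY" | awk '{print NF}')
echo "$FIELDS"
```

A malformed line here can leave the system unable to mount filesystems at boot, so the check is cheap insurance.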

Friday, July 15, 2016

HACMP Failover Test Scenarios


                          

      CLUSTER FAILOVER TEST SCENARIOS IN AIX ENVIRONMENT

This document covers the cluster failover test scenarios in an AIX environment.

In AIX, we normally have three ways of performing failover testing:
1.       Manual Failover by moving the Resource Group
2.       Automatic Failover by abruptly halting the nodes
3.       Failover Testing by removing the attached hardware(disabling the NIC’s ,cables etc)




Important points that need to be validated as a System Administrator before performing any failover test:

1. Data backup should be handy .

2. Cluster snapshot should be taken .

3. Configuration backup (including the RG attributes  ,FS details ).

4. If crossmount is configured, kindly verify the exports file and compare the crossmounted FS.
    In one case we noticed that a cluster filesystem was mounted as a normal NFS mount, leading to an issue while performing the failover test. The cluster will look for entries in the file "/usr/es/sbin/cluster/etc/exports", if it exists, to mount and unmount the FS.

5. Also, during a failover test, if the RGs go into an error state there are cases where the cluster will not allow you to execute any cluster commands. In that case you may need to reboot the nodes, so keep the required teams updated that a reboot of both nodes may be required in case of any issues.



    Manual Failover Testing by moving the RG’s

Steps :
1.  Take the console  session of both the nodes.
2.   Verify the Resource Group availability on the nodes before the failover test.
               Command to be used: #/usr/es/sbin/cluster/utilities/clRGinfo
# clRGinfo
-----------------------------------------------------------------------------
Group Name     Group State          Node          
-----------------------------------------------------------------------------
RES_01     ONLINE                   node1      >>>>>     RG (RES_01) currently active on node1
                  OFFLINE                  node2       

RES_02     ONLINE                    node2       
                  OFFLINE                   node1 
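When scripting such checks, the node currently holding a resource group can be pulled out of clRGinfo-style output. A sketch — the here-document is a simplified stand-in for the sample output above; on a real node pipe /usr/es/sbin/cluster/utilities/clRGinfo instead:

```shell
# Extract the node where resource group RES_01 is ONLINE.
sample_clrginfo() {
cat <<'EOF'
RES_01 ONLINE  node1
RES_01 OFFLINE node2
RES_02 ONLINE  node2
RES_02 OFFLINE node1
EOF
}
ACTIVE_NODE=$(sample_clrginfo | awk '$1 == "RES_01" && $2 == "ONLINE" {print $3}')
echo "$ACTIVE_NODE"
```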

3.   Here, we are going to manually move the resource group (RES_01) from node1 to node2.
4.    From node1  run the command #smitty clstop
                  node1# smitty clstop
                               Stop Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Stop now, on system restart or both                 now                    +
  Stop Cluster Services on these nodes               [node1 ]                +           >>>>>>   select the node
  BROADCAST cluster shutdown?                         true                   +
* Select an Action on Resource Groups            Move Resource Groups      >>>>>  need to select this option for  manual failover


5. The next screen will ask for the resource group to move and the node to move it to. Select the appropriate resource group and press Enter; it will start the failover.

6. From node 2 , verify the RG status using the command #/usr/es/sbin/cluster/utilities/clRGinfo
1st probable output


     # clRGinfo
-----------------------------------------------------------------------------
Group Name     Group State                  Node          
-----------------------------------------------------------------------------
RES_01     OFFLINE                       node1       
                  ACQUIRING                  node2         >>>>>>>>>>       failover initiated and node2 is acquiring the Resource group    

RES_02     ONLINE                       node2       
                   OFFLINE                      node1 


2nd probable output

# clRGinfo
-----------------------------------------------------------------------------
Group Name     Group State            Node          
-----------------------------------------------------------------------------
RES_01     OFFLINE               node1       
                  ONLINE                 node2     Failover completed successfully; node2 has acquired Resource Group (RES_01)

RES_02     ONLINE                  node2       
                  OFFLINE                 node1 

Note: When stopping the cluster on node1, the first thing executed is the cluster stop script. It brings down the applications and unmounts all application filesystems. If your application stop script is not able to stop all application processes, some filesystems can't be unmounted and the failover fails. Once all resources are down on node1, HACMP starts to bring up all resources on node2. The application start script is the last thing HACMP does.

7. Verify the status of the cluster using the command #lssrc -ls clstrmgrES. It should be in the "stable" state; if so, everything is fine.
8. Perform a server-level health check to validate that the FS and cluster IPs have moved successfully.
9. Inform the APP/DB team to start the APP/DB services, or validate the APP/DB status after failover.


  Forcing an automatic failover by halting the active node (typically not recommended, but an option)
HACMP is intelligent enough to differentiate between a deliberate shutdown and an abrupt shutdown of a node due to a hardware failure. When we force a failover by bringing down the active node, the shutdown and reboot commands will not trigger a failover.
                                 Only the halt command will force an automatic RG failover from the server end.

1.       Login to node1 and run the command #halt -q as the root user. This will bring down node1 abruptly and force the RG available on node1 to automatically fail over to node2.
2.       Login to node2 ,Verify the Resource group status on node2    using the below command .

# clRGinfo
-----------------------------------------------------------------------------
Group Name     Group State            Node          
-----------------------------------------------------------------------------
RES_01     OFFLINE               node1       
                   ONLINE                node2           Failover completed successfully ,node2 has acquired Resource Group  (RES_01)

RES_02     ONLINE                  node2       
                   OFFLINE                node1 

3.       Verify that all the filesystems and IP’s are available on node2 after the automatic failover.
4.       Inform APP/DB Team to validate the APP/DB Status and Startup(if applicable)














Saturday, April 23, 2016

Introduction to OPENSSH


           Introduction to  openSSH & SSH (Secure Shell)                                                       



What is OpenSSH?

OpenSSH is a free implementation of the SSH 1 and SSH 2 protocols. It was originally developed as part of the OpenBSD (Berkeley Software Distribution) operating system and is now released as a generic solution for UNIX or Linux® and similar operating systems.

What does the openSSH package provide?
Basically, openSSH provides three kinds of services:
Ø  Logging in to the server (SSH)
Ø  Secure file transfer (SFTP)
Ø  Secure copy (SCP)

Why SSH?


   SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin, rsh, and rexec protocols. Those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis.

Note:  The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet.


In UNIX, the configuration files for SSH are sshd_config (for the SSH daemon) and ssh_config (for the client); they are basically located under the /etc/ssh directory.



What is SSH ?
The Secure Shell (SSH) protocol was developed to get around these limitations.
 The standard TCP port 22 has been assigned for contacting SSH servers.

1. SSH provides encryption of the entire communication channel, including the login and password credential exchange.

2. It can be used with public and private keys to provide automatic authentication for logins.

3. You can also use SSH as an underlying transport protocol for other services.



How SSH Protocol works?

SSH architecture

IETF RFCs 4251 through 4256 define SSH as the "Secure Shell Protocol for remote login and other secure network services over an insecure network." The protocol consists of three main elements.

·         Transport Layer Protocol: This protocol accommodates server authentication, privacy, and integrity with perfect forward secrecy. This layer can provide optional compression and is run over a TCP/IP connection, but can also be used on top of any other reliable data stream. It sets up encryption, integrity verification, and (optionally) compression, and exposes to the upper layer an API for sending and receiving plaintext packets.

·         User Authentication Protocol: This protocol authenticates the client to the server and runs over the transport layer. Common authentication methods include password, public key, keyboard-interactive, GSSAPI, SecureID, and PAM.

·         Connection Protocol: This protocol multiplexes the encrypted tunnel into numerous logical channels and runs over the User Authentication Protocol. A single SSH connection can host multiple channels concurrently, each transferring data in both directions.




What are the different SSH Protocol Versions?
When SSH protocol version 1 was first introduced, many vulnerabilities were reported, and many interim versions such as 1.3 and 1.5 were released to fix them.

Currently  we are having two major  SSH Protocol Versions.
1.       SSH Protocol Version 1
2.       SSH Protocol Version 2

What is SSH Protocol Version 1 ?
SSH version 1 makes use of several patented encryption algorithms (however, some of these patents have expired) and is vulnerable to a well known security exploit that allows an attacker to insert data into the communication stream.
What is SSH Protocol Version 2 ?
SSH protocol version 2 is the default protocol used these days. This is due to some major advancements in version 2 compared to version 1. The workflow of the SSH login is almost the same as in version 1; however, there are some major changes at the protocol level. Some of these changes include improved encryption standards, public-key certification, much better message authentication codes, periodic replacement of session keys, etc.

Various key sizes are available, ranging from 512 bits to as high as 32768 bits, with ciphers such as Blowfish, Triple DES, CAST-128, the Advanced Encryption Standard (AES), and ARCFOUR.

Why is SSH Protocol Version 1 not encouraged?
After SSH version 1 had been in use for a while, it was noticed that hackers were able to make unauthorized insertions of content into an encrypted SSH stream due to the insufficient data integrity protection of the CRC-32 checksum used in this version of the protocol. The SSH developers later released fixes, but vulnerabilities continued to be found due to the design flaws of the protocol.



Differences between SSH1 and SSH2 protocols

SSH protocol, version 2 vs. SSH protocol, version 1:

1. SSH2: Separate transport, authentication, and connection protocols.
   SSH1: One monolithic protocol.
2. SSH2: Strong cryptographic integrity check.
   SSH1: Weak CRC-32 integrity check; admits an insertion attack in conjunction with some bulk ciphers.
3. SSH2: Supports password changing.
   SSH1: N/A.
4. SSH2: Any number of session channels per connection (including none).
   SSH1: Exactly one session channel per connection (requires issuing a remote command even when you don't want one).
5. SSH2: Full negotiation of modular cryptographic and compression algorithms, including bulk encryption, MAC, and public-key.
   SSH1: Negotiates only the bulk cipher; all others are fixed.
6. SSH2: Encryption, MAC, and compression are negotiated separately for each direction, with independent keys.
   SSH1: The same algorithms and keys are used in both directions (although RC4 uses separate keys, since the algorithm's design demands that keys not be reused).
7. SSH2: Extensible algorithm/protocol naming scheme allows local extensions while preserving interoperability.
   SSH1: Fixed encoding precludes interoperable additions.
8. SSH2: User authentication methods: publickey (DSA, RSA*, OpenPGP), hostbased, password (Rhosts dropped due to insecurity).
   SSH1: Supports a wider variety: public-key (RSA only), RhostsRSA, password, Rhosts (rsh-style), TIS, Kerberos.
9. SSH2: Use of Diffie-Hellman key agreement removes the need for a server key.
   SSH1: Server key used for forward secrecy on the session key.
10. SSH2: Supports public-key certificates.
    SSH1: N/A.
11. SSH2: User authentication exchange is more flexible, and allows requiring multiple forms of authentication for access.
    SSH1: Allows for exactly one form of authentication per session.
12. SSH2: Hostbased authentication is in principle independent of client network address, and so can work with proxying, mobile clients, etc. (though this is not currently implemented).
    SSH1: RhostsRSA authentication is effectively tied to the client host address, limiting its usefulness.
13. SSH2: Periodic replacement of session keys.
    SSH1: N/A.







How to know which  SSH protocol version is used for connection ?








[root@saks20161 ~]# telnet 192.168.0.115 22
Trying 192.168.0.115...
Connected to 192.168.0.115 (192.168.0.115).
Escape character is '^]'.
SSH-2.0-OpenSSH_4.3 >>>>>>>  this shows the protocol version used
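The banner's version field can also be extracted programmatically. A sketch, using the banner string from the session above as a canned value:

```shell
# The SSH identification string is "SSH-<protocol>-<software>";
# the second dash-separated field is the protocol version.
BANNER="SSH-2.0-OpenSSH_4.3"
PROTO=$(echo "$BANNER" | cut -d- -f2)
echo "$PROTO"
```

A value of 1.99 in this field indicates a server that accepts both protocol 1 and protocol 2 clients.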


 SSH security and configuration best practices

SSH security hardening is also required to minimize security attacks. OpenSSH provides a lot of flexibility, letting us enable or disable various features via the SSH configuration file. Below is a list of settings you can use to tighten and enhance SSH security with regard to remote host access:

      Restrict the root account to console access only:

# vi /etc/ssh/sshd_config
PermitRootLogin no

Create private-public key pairs using a strong passphrase, and password-protect the private key:

a) Never generate a password-less key pair or set up password-less, key-based login.
b) Use a longer key for more security:

ssh-keygen -t rsa -b 4096


Restrict SSH access by controlling user access

We can restrict user access through SSH as per our need in the SSH configuration files. The below four directives can be used for this:

·         AllowUsers
·         AllowGroups
·         DenyUsers
·         DenyGroups

·         # vi /etc/ssh/sshd_config
·         AllowUsers fsmythe bnice swilson



Only use SSH Protocol 2

·         # vi /etc/ssh/sshd_config
            Protocol 2


Don't allow idle sessions; configure the idle logout timeout interval:

·         # vi /etc/ssh/sshd_config
·         ClientAliveInterval 600                           # (Set to 600 seconds = 10 minutes)

Disable host-based authentication:

·         # vi /etc/ssh/sshd_config
           HostbasedAuthentication no

Disable users' .rhosts files

·         # vi /etc/ssh/sshd_config
            IgnoreRhosts yes


Confine SFTP users to their own home directories by using Chroot SSHD

·         # vi /etc/ssh/sshd_config
·         ChrootDirectory /data01/home/%u


Disable empty passwords:

·         # vi /etc/ssh/sshd_config
           PermitEmptyPasswords no

Configure an increase in SSH logging verbosity:

·         # vi /etc/ssh/sshd_config
            LogLevel DEBUG


IMP: After making any of the above changes to the SSH configuration files, you need to stop and start the SSH service. These changes will impact only new connections; existing SSH connections will keep using the earlier configuration.
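Taken together, the hardening settings above can be collected into a single sshd_config sketch (the user names are the examples used earlier in this post; adjust them to your environment before use):

```
# /etc/ssh/sshd_config (excerpt)
Protocol 2
PermitRootLogin no
AllowUsers fsmythe bnice swilson
ClientAliveInterval 600
HostbasedAuthentication no
IgnoreRhosts yes
PermitEmptyPasswords no
LogLevel DEBUG
```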

******************************************************************************









         

Thursday, January 28, 2016

NTP Configuration

                        NTP

Network Time Protocol (NTP) is a networking protocol for clock synchronization
between computer systems over packet-switched, variable-latency data networks.

NTP is one of the oldest Internet protocols. It was originally designed by David L. Mills of the University of Delaware.

The protocol uses a client-server model. NTP uses UDP port 123 for sending and receiving timestamps (packets).

NTP uses a hierarchical, semi-layered system of time sources. Each level of this hierarchy is termed a "stratum" .


 Note: Suppose your NTP master server is at stratum 3. Then the clients will be at stratum 4.


* stratum 16 is used to indicate that a device is unsynchronized.
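The stratum of the peer you are synced to shows up as the third column of `ntpq -p` style output. A sketch — the peer line below is a canned example, not real output; on a live client run `ntpq -p` (or `lssrc -ls xntpd` on AIX) and read the same column:

```shell
# Read the stratum (3rd field) from an ntpq -p style peer line.
PEER_LINE="*192.168.0.115 .GPS. 1 u 64 377 0.5 0.010 0.020"
STRATUM=$(echo "$PEER_LINE" | awk '{print $3}')
echo "$STRATUM"
```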


How to configure NTP (AIX client)

On the client:
  1. Verify that you have a server suitable for synchronization. Enter:
    # ntpdate -d ip.address.of.server
    
    The offset must be less than 1000 seconds for xntpd to sync. If the offset is greater than 1000 seconds, change the time manually on the client and run ntpdate -d again.
    If you get the message "no server suitable for synchronization found", verify that xntpd is running on the server (see above) and that no firewalls are blocking port 123.

    2. Specify your xntp server in /etc/ntp.conf:
    # vi /etc/ntp.conf
          (Comment out the "broadcastclient" line and add server ip.address.of.server prefer.)
           Leave the driftfile and tracefile at their defaults.
  3. Start the xntpd daemon:
    # startsrc -s xntpd
    

  4. Uncomment xntpd in /etc/rc.tcpip so it will start on a reboot.
    # vi /etc/rc.tcpip
    
    Uncomment the following line:
    start /usr/sbin/xntpd "$src_running"
    
    If using the -x flag, add "-x" to the end of the line. You must include the quotes around the -x.

  5. Verify that the client is synced.
    # lssrc -ls xntpd
    
    NOTE: Sys peer should display the IP address or name of your xntp server. This process may take up to 12 minutes.



*****************Under Construction*********************************