Tuesday, October 04, 2011

asynchronous I/O (aioserver)


creating raw logical volumes

Any logical volume that doesn't have a filesystem on it is a raw logical volume.

some applications, like oracle and informix, need and use raw LVs.

***Every logical volume you create will have two entries in the /dev directory, i.e. one block device and one character device.

suppose your application team, e.g. the oracle team, asks for some raw LVs.

step 1: let me create one LV named "list"
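
a minimal sketch of that step (the volume group "datavg" and a size of 10 LPs are assumptions; use whatever your request specifies):

#mklv -y list datavg 10
list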

step 2: go to the /dev directory and check that the device files are available.

 #cd /dev


# ls -l list
brw-rw----   1 root     system       10, 11 Oct 04 15:27 list
# ls -l rlist
crw-rw----   1 root     system       10, 11 Oct 04 15:27 rlist
#

here we see that the block device has the same name as the logical volume, "list", but the character device is named "rlist".

step 3: now we have to change the ownership of the file rlist to oracle:oracle, so that the oracle team can use it.

#chown oracle:oracle /dev/rlist
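
to confirm the change (the output shown is illustrative):

#ls -l /dev/rlist
crw-rw----   1 oracle   oracle       10, 11 Oct 04 15:29 rlist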


Wednesday, August 17, 2011

N_Port ID Virtualization (NPIV)


through NPIV, partitions of a managed system can access SAN storage directly through the same physical fibre channel adapter.

ex...
         suppose you have assigned 200 GB of storage to the VIO server. you can assign this 200 GB to the logical partitions by mapping LVs or PVs to the virtual scsi adapters of the particular partitions. but what happens when a single LPAR needs 100 GB and only 50 GB of storage is left in the VIO server partition? again you have to assign a LUN to the VIO server, and then you have to create LVs to provide the extra space. this can be tough and time-consuming if there are many servers. NPIV was introduced to reduce this overhead: through it we can assign SAN storage directly to the partitions.


 SAN -------------> LUN -------------> MAPPED TO THE PORT OF THE PHYSICAL FC ADAPTER





* Each virtual fibre channel adapter on each client logical partition receives a pair of unique WWPNs. The client logical partition uses one WWPN to log into the SAN at any given time. The other WWPN is used when you move the client logical partition to another managed system.

* Using their unique WWPNs and the virtual fibre channel connections to the physical fibre channel adapter, the operating systems that run in the client logical partitions discover, instantiate, and manage their physical storage located on the SAN.


To enable N_Port ID Virtualization (NPIV) on the managed system, you create the required virtual fibre channel adapters and connections as follows:
1. using the HMC, create virtual fibre channel adapters on the VIO server logical partition and map them to virtual fibre channel adapters on the client logical partitions.


2. when you create a virtual fibre channel adapter on a client logical partition, the HMC generates a pair of unique WWPNs for the client virtual fibre channel adapter. you can see these through the HMC by viewing the properties of the client FC adapter.

3. we need to check whether the HBA port is connected to a SAN switch on which NPIV is enabled.
for that we run the $lsnports command on the VIO server.


if the fabric parameter is "1", NPIV is supported.
if it is "0", NPIV is not supported.



$lsnports


name             physloc                fabric
fcs0             ---                    1
fcs1             ---                    1
fcs2             ---                    1


here the fabric parameter is set to "1", which means NPIV is supported and we can do the mapping.

now check the virtual FC adapters that you have created on the VIO server:

$lsdev -vpd | grep vfchost
vfchost0  
vfchost1

for mapping a virtual FC adapter to a physical FC adapter, use the "vfcmap" command:

$vfcmap -vadapter vfchost1 -fcp fcs1

to check that the mapping has been done correctly and that the clients are able to log in to the SAN:

$lsmap -vadapter vfchost1 -npiv
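
if the mapping is correct and the client has logged in to the fabric, the output looks roughly like this (the values shown are illustrative, not from a real system):

Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost1      U8234.EMA.0688C94-V5-C25                7 lpar7          AIX

Status:LOGGED_IN
FC name:fcs1                    FC loc code:U789D.001.DQD42T5-P1-C1-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0            VFC client DRC:U8234.EMA.0688C94-V7-C3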

Thursday, August 11, 2011

micro-partitioning

with micro-partitioning we distribute the processing capability of one or more physical processors among the partitions.

using micro-partitioning we can increase the overall utilization of the processor resources within the system.


suppose we have 4 processors, i.e. 4 processing units (PU), and we have four partitions, with 1 PU assigned to each. you find that on all the partitions about 50% of the processor is un-utilized. now you want to create some more partitions. the options are:

1. add more processors to the server, which will add extra cost or burden to your firm, or

2. free some processing units from all the LPARs, as they are at most 50% utilized. you can do this through a DLPAR operation on the HMC if you don't want to reboot.

you can re-assign the processors according to your need. here I have re-assigned the processing units so that I can utilize the processors more efficiently and assign the freed PU to the new partitions that I want to create:

lpar1      minimum .1      desired .3      maximum .5
lpar2      minimum .1      desired .2      maximum .3
lpar3      minimum .1      desired .4      maximum .6
lpar4      minimum .1      desired .3      maximum .6


PROCESSING UNITS (PU) - the capability of assigning less than 1 processor to a partition is called micro-partitioning.
                        for allocating less than 1 processor, we use processing units (PU).

1 processor corresponds to 1 PU.

** we can assign a minimum of 1/10, i.e. 0.1 PU, to a partition.
** the minimum granularity for assigning extra PU is .01
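
for example, a system with 4 processors has 4.0 PU in total, so in principle it could host up to 40 micro-partitions of 0.1 PU each.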

what is minimum, desired and maximum?

minimum  -- the partition will not start if this much PU is not available
desired  -- the partition will use this much PU, if available
maximum  -- the partition can be increased up to this amount using DLPAR



here click on "Advanced"

what is capped and uncapped?

capped -- the processing capability can never exceed the entitled (assigned) processing capacity of the partition.

uncapped -- the processing capability can exceed the entitled capacity when resources are available in the shared processor pool and the partition is eligible to run.

* the higher the uncapped weight of a partition, the more processing units it will receive.
* the uncapped weight ranges between 0 and 255; the default is 128.
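
a simplified example of how the weight works: if two uncapped partitions with weights 100 and 200 compete for 1.5 spare processing units, the spare capacity is shared in a 1:2 ratio, so they receive roughly 0.5 PU and 1.0 PU respectively (the hypervisor's actual dispatching is more fine-grained than this).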

power hypervisor

the POWER Hypervisor is the firmware layer sitting between the hosted operating systems and the server hardware.

suppose you have a p570 box in which you have created 4 logical partitions and assigned resources to each of them. the POWER Hypervisor keeps track of the resources allocated to each partition and also ensures that partitions don't access another partition's assigned resources.

** the POWER Hypervisor enforces partition integrity by providing a security layer between logical partitions.

** it provides a VLAN channel between logical partitions that helps reduce the need for physical ethernet adapters.


** it also monitors the service processor. if there is any loss of the SP, it will perform a reset/reload operation; if that doesn't correct the problem, it notifies the operating system.

the POWER Hypervisor provides the following types of virtual I/O adapters:
  1. virtual scsi
  2. virtual ethernet
  3. virtual FC
  4. virtual console

what is virtual scsi?
for virtualization of storage, the POWER Hypervisor gives you the virtual scsi mechanism.

a virtual scsi adapter is needed for this, which is defined in the VIO server partition.


there are two types of virtual scsi adapter:
 the virtual client scsi adapter
 the virtual server scsi adapter



all the physical scsi storage devices are assigned to the VIO server.

how the adapters are connected?

1. using a DLPAR operation, you can create the virtual scsi server adapter without rebooting the server.
2. on the client partitions you also define the virtual client scsi adapters, again through a DLPAR operation. the mapping should be correct.

ex. of mapping the adapters

while defining the server adapter, note the following:
slot no.                               3
remote partition                       partition2
remote partition virtual slot number   4

for the client adapter:
slot no.                               4
remote partition                       vios
remote partition slot no.              3



3. after that, run $cfgdev on the VIO server. one virtual host device (vhost) will become available on the VIO server; it represents that particular partition.

4. you can map a logical volume etc. to a particular partition's virtual host device, and it will be available as a disk on the client partition after you run the #cfgmgr command on that partition.


virtual scsi can be used for:
1. virtual disks
2. virtual optical devices (vtopt) - see the sketch below
3. virtual tape
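
for example, a file-backed virtual optical device can be created on the VIO server like this (a sketch: the vhost number is an assumption, and loadopt expects the ISO image to already be in the virtual media repository):

$mkvdev -fbo -vadapter vhost0
vtopt0 Available

$loadopt -vtd vtopt0 -disk mymedia.iso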

VIRTUAL ETHERNET

the POWER Hypervisor provides a virtual ethernet switch function that allows partitions on the same server to use fast and secure communication without any need for a physical interconnection.

*** virtual ethernet is a part of the base system configuration and doesn't need a VIO server.




Saturday, August 06, 2011

creating users in a company environment

creating an admin user won
# mkuser -a won

assigning the password to user won
#passwd won

so that the user is not prompted to change the password at the first login, clear the ADMCHG flag:
#pwdadm -c won
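
to verify that the account was created with the admin flag (the output shown is typical):

#lsuser -a admin won
won admin=true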

Friday, August 05, 2011

removing a disk from user-defined VG

check the activated volume group.

#lsvg -o

rootvg
datavg
quavg

#lsvg -p datavg

vpath12
vpath13

now, using the migratepv command, migrate all the filesystems from vpath13 to vpath12

#migratepv vpath13  vpath12


check that no PPs are still allocated on the disk:

#lspv -M vpath13   or   # lspv -l vpath13


check whether all the filesystems are available:

#lsvg -l datavg

now reduce the VG by removing the disk vpath13 (note that the VG name comes first):

#reducevg datavg vpath13

now remove the vpath13 device:

#rmdev -Rdl vpath13
vpath13 deleted



note: there is no need to unmount the filesystems or vary off the VG while running the migratepv command.

Thursday, August 04, 2011

SMIT

The System Management Interface Tool (SMIT) provides a menu-driven interface that
gives access to most of the common system management functions within one
consistent environment.
                   SMIT is driven by the ODM, which contains all of the menus, screens and
commands that SMIT uses.


Special symbols on the screen are used to indicate how data is to be entered:

*    A required field
#    A numeric value is required for this field.
/    A pathname is required for this field.
X    A hexadecimal value is required for this field.
?    The value entered will not be displayed.
+    A pop-up list or ring is available.


                                                    log files

$HOME/smit.log
this log file keeps a record of all the menus and dialog boxes visited and of all the commands run, with their output. it also records any errors during the SMIT session.


$HOME/smit.script
a shell script containing all the aix commands executed through SMIT.
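
you can also redirect these logs for a single session instead of using the defaults in $HOME (the file names here are only examples):

#smit -l /tmp/mysmit.log -s /tmp/mysmit.script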

performance monitoring.


Introduction to service processor

on p-series servers, there is an extra processor known as the service processor.
                             in reality, the service processor runs firmware that is not part of the AIX OS.

the purpose of the service processor is to continuously monitor the system for failures and allow for system re-configuration.

functions of the service processor:

1. access to local or remote ASCII terminals
2. console mirroring for dual support
3. remote power-on
4. unattended start after power failure

HMC management


introduction to inittab file


differences between aix5.3,aix6.1


aix boot sequence

STEP 1          POST (power-on self test)

its purpose is to verify that the basic hardware is in a functional state.
     the memory, communication and audio devices are all initialized.

STEP 2           System ROS (read-only storage)

system ROS is necessary for AIX 5L version 5.3 to boot, but it doesn't build the data structures required for booting.

it will locate the bootstrap code.

system ROS contains generic boot information and is OS independent.

STEP 3             SOFTWARE ROS (also called bootstrap)

software ROS forms an IPL (initial program load) block that takes control and builds AIX 5L specific boot information.

a special filesystem called the RAMFS filesystem is created in memory.


software ROS locates the boot logical volume (BLV) and loads it into RAM.


  • contents of the BLV

               1. AIX 5L kernel
               2. reduced version of the ODM
               3. rc.boot script
               4. boot commands called during the boot process, like bootinfo and cfgmgr

the AIX 5L kernel is loaded. the kernel will complete the boot process and start the init process.

  LED =0299

now the rc.boot script will be called three times, and it is passed a different parameter each time.


BOOT PHASE 1

the init process started from the RAMFS executes the boot script rc.boot with parameter 1.

if init fails, the LED will display c06.

"restbase" command -- copies a partial image of the ODM from the BLV into the RAMFS.

LED = 548 if successful

"cfgmgr -f" command

reads the Config_Rules class from the reduced ODM.
in this class, devices with the attribute phase=1 are considered base devices.
base devices are all the devices that are necessary to access rootvg.

"bootinfo -b" command

determines the last boot device.

LED = 511, if successful

 BOOT PHASE 2




in this phase, the rc.boot script is passed parameter 2.

"ipl-varyonvg rootvg" command

rootvg is activated.

LED =517 ,if sucessfull
LED =552,554 or 556 ,if boot process is halted.

root(/) filesystem (hd4) is checked using  "fsck -f"  to verify whether the filesystem was unmounted clearly before the last shutdown.

/dev/hd4 is temporarly mounted in RAMFS


LED=557,if fails

/user,/var are verified using  " fsck -f"  command and then mounted temporarily
LED =518 ,if /usr fails tomount 

"copycore" command

checks if a dump occurred. if it is copied ffrom default dump device /dev/hd6 to the default copy directory/var/adm/ras/

afterward /var and /usr are unmounted.

the primary paging space from rootvg ,/dev/hd6 is activated.


"mergedev " process is called and all /dev files from REAMFS are copied to disk

All customized ODM files RAMFS are copied to disk.
                                                   both ODM versions from hd4 and hd5 are synchronized.

finally root file system from rootvg is mounted over the original mountpoint from RAMFS.

/var and /usr are also mounted to their mount-point.

BOOT PHASE 3

after boot phase 2, rootvg is activated.

the init process is started. it reads the /etc/inittab file and calls rc.boot with argument 3.
the /tmp filesystem is mounted.

"syncvg" command - rootvg is synchronized. it runs as a background process, checking all stale partitions in rootvg and updating them.
LED = 553, if successful.

"cfgmgr" command

if booted in
  normal mode - option is p2
 service boot - option is p3.

it reads the config-rules class from ODM and calls all methods correponding to either phase=2 or phase=3

all other devices ,that are not base devices are configured this time

"cfgcon "command

console is configured by calling the cfgcon command.
     after the configuration of the console,boot messages are sent to the consoleif no STDOUT direction is made.
         all missed messages can be found in /var/adm/ras/conslog


LED C31 - console not yet configured.
        C33 - console is tty.
        C34 - console is a file on disk.

"savebase" command

synchronization of ODM  in BLV wiith the ODM  in root filesystem is done.

syncd daemon and errdaemon are started.
LED display is turned off.

if /etc/nologin exists,it is removed.
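
to review the messages from these boot phases after the system is up, you can dump the boot log (assuming the default alog configuration):

#alog -o -t boot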

recovering root password in aix

There are different ways to recover the root password depending on the situations:

Case 1. In the case of a standalone server with no NIM configured in the environment, we are left with only one option to recover the root password: booting in maintenance mode from CD.

Case 2. If NIM is configured in the environment, we can recover the root password by booting the server in maintenance mode through the network.

Case 3. If it is a cluster node and we are able to log in to the other node, we can reset the root password using HACMP from the other node.

HACMP basics


VIO SERVER BASICS

virtual I/O is the term used to describe the ability to share physical I/O resources across partitions.


the physical resources being shared are the adapter cards located in the PCI-X slots of the managed system.


the VIO server software runs in a separate partition, which is created using the HMC.






benefits of the VIO server


partitions can be created without requiring additional physical adapters.


suppose you have 3 ethernet adapters in your p570 box and you have planned to create 12 partitions. that means you would need 12 physical ethernet adapters, which means extra cost. this is exactly why the VIO server came into the picture: with a VIO server, even a single ethernet adapter assigned to the VIO server is enough. in this scenario you can create 12 virtual ethernet adapters through the HMC and assign them to the partitions.


getting to the aix environment


$help  - a very useful command; it shows the list of commands used in the VIO server.


$oem_setup_env


you will get a # prompt. now you can run aix commands.




to view the mapping




$ lsmap -all
SVSA            Physloc                             Client Partition ID
--------------- -------------------------------------- ------------------
vhost0          U8234.ZMA.0123494-V5-C22            0x00000006

VTD                   vtscsi0
Status                Available
LUN                   0x7100000000000000
Backing device        abhivg_rootvg
Physloc

VTD                   vtscsi44
Status                Available
LUN                   0x8800000000000000
Backing device        hdisk99
Physloc               U7311.i20.063CD7C-P1-C02-T1                                                                                                
SVSA            Physloc                             Client Partition ID
--------------- ----------------------------------- -------------------
vhost1          U8234.zmc.01234C94-V5-C22           0x00000001

VTD                   NO VIRTUAL TARGET DEVICE FOUND


here you see vhost0 and vhost1; these are the adapters corresponding to particular partitions.

here, vhost0 corresponds to partition id 6
      vhost1 corresponds to partition id 1

VIRTUAL TARGET DEVICE (VTD)
it can be a logical volume, a PV or a file that you assign to a particular partition by mapping it to the related adapter (vhost).

here, for vhost0,
a VTD named vtscsi0 is defined, which is backed by the logical volume "abhivg_rootvg";
there is also vtscsi44, which is backed by hdisk99 attached to the VIO server.


how to map a VTD to a particular partition






1. create a logical volume of the required size that you want to map.


$mklv -lv abhilv rootvg 5G
abhilv available


$lsvg -lv rootvg



2. map the logical volume to the particular adapter (vhost#) associated with the particular partition using the "mkvdev" command.

$ mkvdev -vdev abhilv -vadapter vhost3 -dev abhi_disk
abhi_disk Available

in this command the following parameters are used:

-vdev     - specifies the backing device
-vadapter - the adapter corresponding to the particular partition you want to map
-dev      - gives a name to the VTD



$ lsmap -vadapter vhost3

SVSA            Physloc                       Client Partition ID
--------------- ------------------------------- ------------------
vhost3          U8234.EMA.0688C94-V5-C24      0x00000007

VTD                   abhi_disk
Status                Available
LUN                   0x8100000000000000
Backing device        abhilv
Physloc   


how to remove the virtual target device
            
$rmdev -dev abhi_disk   or   $rmvdev -vtd abhi_disk
abhi_disk removed
$lsmap -vadapter vhost3

SVSA            Physloc                            Client Partition ID
--------------- -----------------------------------------------------
vhost3         U8234.zmc.01234C94-V5-C22          0x00000007

VTD                   NO VIRTUAL TARGET DEVICE FOUND

Knowing the aix filesystem structure

what is a filesystem?


A filesystem is a set of files, directories and other structures.


the filesystem maintains information and identifies the location of a file or directory's data.


it may also contain a boot block, a superblock, bitmaps and one or more allocation groups.


* an allocation group contains disk i-nodes and fragments.




filesystems supported by aix


    1.  JFS(journaled filesystem)
    2.  JFS2(enhanced journaled filesystem)
    3.  NFS(network filesystem)
    4.  CDRFS(cd-rom filesystem)


JFS


it uses database journaling techniques, such as recording file changes sequentially, to maintain the integrity of its control structures.


each journaled filesystem resides on a distinct JFS logical volume.


JFS2
it uses extent-based allocation to allow higher performance, larger filesystems and larger file sizes.


each enhanced journaled filesystem must reside on a distinct JFS2 logical volume.


when aix is installed using the default options, it creates JFS2 filesystems.


NFS


it is a distributed filesystem that allows users to access files and directories located on remote computers and use those files and directories as if they were local.


CDRFS


this filesystem allows you to access the contents of a CD-ROM through the normal filesystem interfaces.




filesystem structure


a journaled filesystem uses the following data structures:


               1.)  superblock
               2.)  allocation group
               3.)  inodes
               4.)  blocks
               5.)  fragments
               6.)  device logs


superblock


the superblock contains control information about a filesystem, such as:
 1. overall size of the filesystem in 512-byte blocks
 2. filesystem name
 3. filesystem log device
 4. version number
 5. number of inodes
 6. list of free inodes and free blocks
 7. date and time of creation
 8. filesystem state


corruption of the superblock may cause the filesystem to become unusable.


*** the system keeps a second copy of the superblock in logical block 31.


allocation group


an allocation group consists of inodes and their corresponding data blocks.


an allocation group spans multiple adjacent disk blocks and improves the speed of I/O operations.


inode


the inode contains control information about the file. it holds the following details:

1. type
2. size
3. owner
4. date and time of creation
5. last accessed
6. a pointer to the blocks that store the actual data

to display the inode details of a file:

# istat /abhi/aks


data block 


a data block stores the actual data of the file, or pointers to other data blocks.




device logs


the jfs log stores transactional information about filesystem metadata changes. this data can be used to roll back incomplete operations if the machine crashes.




* the logical volume used for logging is of type "jfslog" for JFS or "jfs2log" for JFS2.
in aix, hd8 is the common log device in rootvg.


things you must know

/root   - the root user's home directory.

/home   - contains the home directories of the users you create.

/bin    - contains executable binaries that any user can run.

/sbin   - contains the executable binaries that only the admin (root) user can run.

/etc    - contains all the configuration files, like the files related to DNS, DHCP configuration etc.

/dev    - as we know, in unix everything is a file, so every device defined in aix will have a file in this directory.

/tmp    - contains all the temporary files.

/var    - contains all the log files.

System Resource Controller (SRC)

The System Resource Controller (SRC) provides a set of commands and subroutines to make it easier for the system and the programmer to create and control subsystems.


The SRC is started during system initialization with a record for the /usr/sbin/srcmstr daemon in the /etc/inittab file.


If the srcmstr daemon terminates abnormally, the respawn action specified in the /etc/inittab restarts the srcmstr daemon.

how to start the subsystems or groups

we use the startsrc command for this. this command sends the SRC a request to start a subsystem or a group of subsystems.

#startsrc -g group  

#startsrc -g nfs      (this group contains all the daemons related to nfs)
#startsrc -g tcpip    (tcpip is a group that contains all the subsystems related to tcpip)
#startsrc -g spooler  (this group contains all the daemons related to printing)


#startsrc -s subsystem
#startsrc -s inetd
#startsrc -s nfsd
#startsrc -s syslogd


#startsrc -t subserver
#startsrc -t tester


as we know, the ODM also has details of the SRC; they are kept in the /etc/objrepos directory. to view the details of a subsystem from the ODM:


#odmget -q subsysname=inetd SRCsubsys


how to stop the subsystems or groups


#stopsrc -s syslogd


#stopsrc -s inetd


#stopsrc -g nfs


#stopsrc -g tcpip




how to refresh a daemon


#refresh -s httpd
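
to check the status of a subsystem or a group before starting, stopping or refreshing it, use the lssrc command:

#lssrc -s syslogd     (one subsystem)
#lssrc -g nfs         (a whole group)
#lssrc -a             (all subsystems)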

Network Filesystem (NFS)


NFS is commonly known as the network file system.

The Network File System (NFS) is a distributed file system that allows users to access files and directories of remote servers as if they were local to that particular machine.

NFS is independent of machine types, operating systems, and network architectures through the use of remote procedure calls (RPC).


Before configuring the NFS services, we should be aware of the daemons used for this.

List of daemons used in NFS

/usr/sbin/rpc.lockd     Processes lock requests through the RPC package.

/usr/sbin/rpc.statd     Provides crash-and-recovery functions for the locking services on NFS.

/usr/sbin/biod          Sends the client's read and write requests to the server.
                        The biod daemon is SRC controlled.

/usr/sbin/rpc.mountd    Answers requests from clients for file system mounts.
                        The mountd daemon is SRC controlled.

/usr/sbin/nfsd          Starts the daemons that handle a client's requests for file
                        system operations. nfsd is SRC controlled.

/usr/sbin/portmap       Maps RPC program numbers to Internet port numbers.
                        portmap is controlled by the inetd subsystem.



Steps to configure The NFS  Server

step 1

Check whether the NFS services are started and active:

    #lssrc -g nfs

Check the status of the NFS daemons in the output.
The five daemons mentioned above should be started on both the server and the clients.

    # lssrc -g nfs
    Subsystem         Group            PID             Status
    biod              nfs              233612          active
    rpc.statd         nfs              217216          active
    rpc.lockd         nfs              184458          active
    nfsd              nfs              561390          active
    rpc.mountd        nfs              569442          active

if they are not active, start the NFS subsystems:

    #startsrc -g nfs

Step 2

Check that the portmap service is running on both the client and the server; if not, start it:

    #lssrc -g portmap
to start it, use
    #startsrc -g portmap

Step 3: check whether the /etc/exports file exists on the server. if not, create this file, or let it be created automatically by adding the NFS exports through smitty:

    #touch /etc/exports   or use   #smitty nfs

Step 4: Add the NFS shares in the /etc/exports file. this can be done manually or through #smitty nfs > add a directory as an NFS export



/share        -public,sec=sys,rw


Step 5: make the entries for the NFS clients in the /etc/hosts file.
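
after editing /etc/exports, you can export the entries without waiting for a reboot:

#exportfs -a     (exports all entries in /etc/exports)
#exportfs        (with no flags, lists what is currently exported)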


Configuring and mounting the filesystem on the client machine.

Step 1: Verify that the portmap and rpc.mountd daemons are running on the server.

Step 2: Verify that the NFS server entry is present in the /etc/hosts file of the client machine.

Step 3: use the command #showmount -e <NFS-Server-IP> to verify that the exported filesystem is accessible and visible to the respective client machine.

Step 4: Create a directory on which you are going to mount:
    #mkdir /Test

Step 5: Mount the NFS filesystem using the command below:
    #mount <NFS-Server-IP or Hostname>:/share /Test
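
to make the mount permanent across reboots, you can add a stanza to /etc/filesystems, either through smitty or manually. a sketch (the server name "nfsserver" and the options line are assumptions; adjust them to your environment):

/Test:
        dev             = /share
        vfs             = nfs
        nodename        = nfsserver
        mount           = true
        options         = bg,hard,intr
        account         = false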


 

Object database management(ODM)

The ODM is a database where system information is stored. in the ODM, information is stored and maintained as objects with their associated characteristics.


the ODM manages:

device information
display information for smit
vital product data
SRC data

finding files and directories in unix


Wednesday, July 27, 2011

umask

umask (user mask) is used to set the default permissions on files and directories created by a particular user.


as we know 


4 -read
2 - write 
1 - execute


ex: 432 represents read permission for the owner, write & execute permission for the group, and write permission for others.


*** for every user we can set a umask value.


*** by default, the umask value for all users in aix is 022.


*** the base permissions for a new file, before the umask is applied, are 666
*** the base permissions for a new directory, before the umask is applied, are 777


note: setting the proper umask is a must for system admins, as it determines the permissions of new files and directories. suppose you set your umask to 000: it means you have given everyone (owner, group, others) permission to access and edit the files you create.




an example to understand what umask does:


as we know, the default umask is 022. create one file and one directory and note down their permissions. after that, change the umask value, create another file and another directory, and notice the differences in permissions.


suppose I changed the umask for user abhi to 024.


#umask 024


user abhi creates a file "test". the permissions on this file will be 666-024=642


#touch test


#ls -l test
-rw-r---w-    1  abhi staff   ...............


if user abhi creates a directory aks, the permissions on the directory will be 777-024=753


# mkdir aks


# ls -l | grep aks
drwxr-x-wx    2  abhi staff ...............
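
note: strictly speaking, the new permissions are computed bitwise (base mode AND NOT umask), not by decimal subtraction. the subtraction shortcut works here only because no per-digit borrow occurs; for example, a umask of 027 applied to a file gives 666 & ~027 = 640, while plain subtraction would give 637.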




setting the umask value


to view the current umask value:


#umask


to change the umask value, use:


# chuser umask=024 abhi


or
log in as abhi and run
#umask 024
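
to check which umask value is configured for a user:

#lsuser -a umask abhi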

gathering system information(snap)



why do we use the snap command?


we use the snap command to gather system configuration information and compress it into a pax file. we can then save it to disk or tape, or send it to a remote system. this information can be used for further troubleshooting.


about 8 MB of free disk space is needed for this in /tmp.




the default directory for the snap command output is the /tmp/ibmsupt directory.


you should be the root user to run the snap command.



The" #snap -g"   command gathers general system information, including the following:


 1.    Error report
 2.    Copy of the customized ODM  database
3.     Trace file
4.     User environment
5.     Amount of physical memory and paging space
6     .Device and attribute information
7.     Security user information






 " #snap -g"  command also gathers the output of the " #lslpp -hac" command, which is required to recreate exact operating system environments and writes output to the /tmp/ibmsupt/general/lslpp.hac file. Also collects general system information and writes the output to the /tmp/ibmsupt/general/general.snap file






**** to gather HACMP-specific information from nodes node1 and node2 belonging to a single cluster (the output is written to the /tmp/ibmsupt/hacmp directory):
 #snap -e -m node1,node2




to gather LVM characteristics:
#snap -L


to remove the snap command output from /tmp/ibmsupt, run:
#snap -r



how to read the snap report?

you have to uncompress and extract it.

# uncompress snap.pax.Z

you will see that snap.pax.Z is replaced by snap.pax

after that run,

#pax -rvf snap.pax

this extracts the archive into the current directory; you can then view the extracted files to get all the details, for example:

#more /tmp/ibmsupt/general/general.snap

important files and directories related to snap



/usr/sbin/snap - Contains the snap command.


/tmp/ibmsupt - Contains snap command output.


/tmp/ibmsupt/general/lslpp.hac - Contains the output of the lslpp -hac command, which is required to recreate exact operating system environments.


/tmp/ibmsupt/general/general.snap - Contains general system information that is collected with the snap -g command.


/tmp/ibmsupt/testcase - Contains the test case that demonstrates your system problem.


Saturday, July 23, 2011

how to get firmware updates

what is firmware?


firmware is also known as microcode. it is licensed internal code that fixes problems and enables new system features as they are introduced.

new features are supported by new firmware levels.


System Microcode or firmware initializes the system hardware and controls the boot process enabling the system to boot up and operate correctly; it also provides the interface between the operating system software and the hardware.
Adapter Microcode or firmware is the operating code of the adapter; it initializes the adapter when power is applied and it controls many of the ongoing operations executed by the adapter.
Device Microcode or firmware  provides these same functions for devices such as tape drives.


 naming convention 
firmware names are given as


01SF_XXX_YYY_ZZZ


here,


XXX - the release level stream
YYY - the service pack level
ZZZ - the last disruptive SP level




ex. 01SF235_185 represents release level 235 and service pack level 185.


each stream release level supports new machine types or new features.




the service processor contains two copies of the firmware, which helps manage and reduce the frequency of downtime for maintenance:


1. temporary
2. permanent


server firmware fixes are installed on temporary side.


copying the temporary firmware level to the permanent side is known as committing or accepting the fix.




firmware updates can be of two types:


1. disruptive
2. concurrent


a disruptive upgrade requires the system to be shut down and powered off prior to activating the new firmware level.


a concurrent upgrade can be made on a running system and doesn't require downtime.


when is a disruptive upgrade required?


1. when the release level is different
ex: SF230 and SF235


2. when the SP level (YYY) and the last disruptive SP level (ZZZ) are equal
ex: SF235_180_180


3. when the SP level currently installed on the system is lower than the last disruptive SP level (ZZZ) of the new SP to be installed.
ex: installed on the system - SF235_180_160
    to be installed - SF235_185_160




*** an installation is concurrent if the SP level currently installed is higher than the last disruptive SP level (ZZZ) of the new SP to be installed.


ex: currently installed - SF235_180_160
    to be installed - SF235_165_160




getting firmware updates


go to the IBM Fix Central page.


from there, select the following options:


product group ----- system p or power
product ----- firmware, sdmc, hmc
machine type and model ----- you can see these after running the "#prtconf" command on the aix box.


you will get five options:


1. all firmware components
2. system firmware
3. device firmware
4. SDMC code
5. HMC firmware


according to your requirement, select the appropriate option.


if you are going for device firmware:


you should know which type of adapter is attached to your system.


on the aix box, run
#lsdev -Cc adapter




select   device firmware





you will get three options through which you can download the updates:


1. machine model
2. feature code (to get the feature code, run #lscfg -vpl fcs0; the customer card id number is your feature code)
3. select by device


here I am choosing the option "select by device"


four options will come:
1. adapter
2. hard disk
3. media
4. others


I selected adapter here.


after selecting, you will get output like this; from it, select the appropriate device update and download it:


10/100/1000 Base-TX Ethernet PCI-X Adapter
10/100 Mbps Ethernet PCI Adapter II
10/100/1000 Base-TX Ethernet PCI-X Dual Port Adapter


........................................................






while going for a system firmware update:


product group ----- system p or power
product ----- firmware, sdmc, hmc
machine type and model ----- you can see these after running the "#prtconf" command on the aix box.


after that you will get the next screen, from where you should select "you need guidance"; this is the best practice.
it will ask for your currently installed firmware level,
which you can find by running "#lsmcode"
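
running lsmcode with no flags walks you through menus; "#lsmcode -c" prints the current firmware levels non-interactively, which is handier for a quick check.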


select your firmware level from the drop-down list


if you want to install using an HMC, select the appropriate option
if your server is HMC managed, select the appropriate option
then


if you want to upgrade, select that


it will show a list; select the recommended level and download it.








how to apply the  system firmware updates




run the diag command
 #diag


select "Tasks and Service Aids"
select "Update and Manage Flash"
select "Validate and Update System Firmware"


If the fix file is located on your hard drive, perform the following steps:
  1. Select File System.
  2. Enter the fully qualified path name of the file with the flash update image. The file will be copied to the /var/update_flash_image directory.
  3. When finished, select Commit. The server firmware level that you selected will be installed on the temporary side.