Thursday, November 09, 2017

HMC Command Line




Getting the frame details



hscroot@hmc-op:~> lssyscfg -r sys -F name
op710-1-xxxxxxx
op710-2-xxxxxxx
op720-1-xxxxxxx
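
If you also want the machine type/model and serial number, the same command takes additional fields (field names as per the lssyscfg manual; output will differ per environment):

lssyscfg -r sys -F name,type_model,serial_num,state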


Getting the LPAR details and status for a frame

hscroot@hmc-op:~> lssyscfg -m op710-2-SN1008B2A -r lpar -F name,lpar_id,state
op710-2-Client5-Fedora-Core-4,6,Running
op710-2-Client4-openSUSE-10.0,5,Running
op710-2-Client3-Debian-3.1,4,Running
op710-2-Client2-RHAS4U3,3,Running
op710-2-Client1-SLES9SP3,2,Running
op710-2-VIO-Server,1,Running

Getting the resource allocation for a frame

hscroot@HMC:~> lshwres -r mem -m Server-8204-XXX-XXXX --level sys
configurable_sys_mem=114688,curr_avail_sys_mem=256,pend_avail_sys_mem=256,installed_sys_mem=114688,max_capacity_sys_mem=deprecated,
deconfig_sys_mem=0,sys_firmware_mem=2560,mem_region_size=256,configurable_num_sys_huge_pages=0,curr_avail_num_sys_huge_pages=0,pend_avail_num_sys_huge_pages=0,max_num_sys_huge_pages=6,requested_num_sys_huge_pages=0,huge_page_size=16384,total_sys_bsr_arrays=16,bsr_array_size=8,curr_avail_sys_bsr_arrays=0,max_mem_pools=0
hscroot@HMC:~>
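
The processor side of the frame can be queried the same way; a sketch using the managed system from above, which returns attributes such as configurable_sys_proc_units and curr_avail_sys_proc_units:

lshwres -r proc -m Server-8204-XXX-XXXX --level sys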

Getting the resource allocation for an LPAR

HMC:~> lssyscfg -m Server-8206-E48-XXXXXXX  -r prof --filter "lpar_names=test_retail"
name=test_retail_Profile_OK,lpar_name=test_retail,lpar_id=2,lpar_env=aixlinux,all_resources=0,min_mem=28872,desired_mem=28872,max_mem=28872,min_num_huge_pages=0,
desired_num_huge_pages=0,max_num_huge_pages=0,proc_mode=ded,min_procs=6,desired_procs=6,max_procs=6,sharing_mode=share_idle_procs,io_slots=,lpar_io_pool_ids=none,
max_virtual_slots=10,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=none,virtual_eth_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=1,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lhea_logical_ports=none,lhea_capabilities=none,lpar_proc_compat_mode=default,
electronic_err_reporting=null,virtual_fc_adapters=none



Changing the memory allocation for an LPAR

chsyscfg -r prof -m Server-8206-E48-SN2239B16  -i "name=test_retail_Profile_OK,lpar_name=test_retail,min_mem=94208,desired_mem=94208,max_mem=94208"
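
Note that chsyscfg only edits the profile, so the new values apply on the next activation of that profile. To change the memory of the running LPAR, a DLPAR operation with chhwres can be used instead; a rough sketch (the quantity is in MB and must be a multiple of the memory region size; -o a adds, -o r removes):

chhwres -r mem -m Server-8206-E48-SN2239B16 -o a -p test_retail -q 1024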


Changing the Virtual CPU parameter for an LPAR

chsyscfg -r prof -m Server-8206-E48-SN2239B16  -i "name=test_retail_Profile_OK,lpar_name=test_retail,min_procs=7,desired_procs=7,max_procs=7"
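
Again, this is a profile change. For a dedicated-processor LPAR like the one above, a processor can be added to the running partition dynamically with something like:

chhwres -r proc -m Server-8206-E48-SN2239B16 -o a -p test_retail --procs 1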

Changing the entitled capacity for an LPAR

chsyscfg -r prof -m Server-8206-E48-SN2239B16  -i "name=test_retail_Profile_OK,lpar_name=test_retail,min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=2.0"
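
To verify what the LPAR is currently running with (as opposed to what the profile says), lshwres can be run at the LPAR level, for example:

lshwres -r proc -m Server-8206-E48-SN2239B16 --level lpar --filter "lpar_names=test_retail"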


Starting and shutting down an LPAR

To start the LPAR named "test_retail" with the profile "test_retail_Profile_OK", first check the LPAR states and default profiles, then activate the LPAR:


hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100129A -r lpar -F name,lpar_id,state,default_profile
VIOS1.3-FP8.0,1,Running,default
linux_test,2,Not Activated,client_default



chsysstate -m Server-8206-E48-SN2239B16  -r lpar -o on -n test_retail -f  test_retail_Profile_OK

Shutting down the LPAR "test12" immediately

chsysstate -m SYSTEM-9131-52A-SN10XXXXX -r lpar -o shutdown -n test12  --immed
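
The --immed flag powers the partition off without asking the operating system to shut down. A gentler alternative is to let the HMC request a shutdown through the OS (this needs a working RMC connection to the partition):

chsysstate -m SYSTEM-9131-52A-SN10XXXXX -r lpar -o osshutdown -n test12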


Important HMC Commands


lshmc -v                  Shows vital product data, such as the serial number.
lshmc -V                  Shows the release (version) of the HMC.
lshmc -n                  Shows the network information of the HMC.
hmcshutdown -r -t now     Reboots the HMC immediately.
lssysconn -r all          Shows the connected managed systems.
chhmcusr -u hscpe -t passwd -v abc1234     Changes the password of the user hscpe.
lshmcusr                  Lists the users of the HMC.
monhmc -r disk            Shows the filesystem usage of the HMC.
monhmc -r proc            Shows processor utilization details.
monhmc -r mem             Shows memory utilization details.
rmvterm -m SYSTEM-9117-570-SN10XXXXX -p name     Forces the closure of a virtual terminal session.
lspartition -dlpar        Shows the DLPAR-capable partitions.
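
The counterpart of rmvterm is mkvterm, which opens a virtual terminal session to a partition; for example (the partition name here is a placeholder):

mkvterm -m SYSTEM-9117-570-SN10XXXXX -p name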


And now let's run some commands on a VIOS using viosvrcmd.

hscroot@hmc-570:~> viosvrcmd -m Server-9115-520-SNxxxxx -p VIOS1.3-FP8.0 -c "mkvg -f -vg datavg hdisk2 hdisk3"
datavg
hscroot@hmc-570:~> viosvrcmd -m Server-9115-520-SNxxxxxx -p VIOS1.3-FP8.0 -c "mklv -lv testlv datavg 10G"
testlv
hscroot@hmc-570:~> viosvrcmd -m Server-9115-520-SNxxxxxx -p VIOS1.3-FP8.0 -c "lsvg -lv datavg"
datavg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
testlv              jfs        160   160   1    closed/syncd  N/A
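
Continuing the example, the new logical volume could then be mapped to a client LPAR through a virtual SCSI server adapter; a sketch assuming a vhost0 adapter already exists on this VIOS:

hscroot@hmc-570:~> viosvrcmd -m Server-9115-520-SNxxxxxx -p VIOS1.3-FP8.0 -c "mkvdev -vdev testlv -vadapter vhost0"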

Cluster Issue

Why did the SP2 failover fail?

Observations:

1. After analyzing the cluster logs, we noticed that the cluster event "get_disk_vg_fs" had failed. To pinpoint the actual issue and understand why this event failed, we dug deeper into the logs and found that the cluster services had problems while activating/mounting the cluster filesystem /sapmnt/SP2.

2. When we initiated the SP2 cluster failover, the cluster unmounts the filesystems and exports the VGs from node1, then imports the VGs and mounts the respective filesystems on node2. As per the logs, the cluster VGs were exported from node1 and imported on node2 successfully, but the cluster failed while mounting the /sapmnt/SP2 filesystem.

3. Once we had these details from the cluster logs, we investigated further to find out why the cluster was having trouble with /sapmnt/SP2 during the failover. We found that /sapmnt/SP2 had already been NFS-mounted manually on node2 using normal NFS commands, which meant the cluster could not mount the filesystem because it was already mounted.



4. We verified with the SAP/DB team on a call whether /sapmnt/SP2 was required on node2 and, upon confirmation, we unmounted it. As per the application team, this filesystem is needed wherever the SP2 application is running, so we configured /sapmnt/SP2 as an NFS cross-mount inside the cluster to meet that requirement, and then performed the cluster failover test and application validation again.

Everything was fine.
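
For future reference, a quick sanity check on the takeover node before a failover test can catch this kind of stray manual mount early; these are generic AIX commands for illustration, not taken from the incident logs:

mount | grep /sapmnt/SP2     Check whether the filesystem is already mounted manually.
umount /sapmnt/SP2           Unmount it so that the cluster can mount it during the failover.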