Wednesday, October 31, 2012

SRC issue after restarting

SRC NOT WORKING

0513-053 The System Resource Controller is experiencing problems with its socket communications

0513-001 The System Resource Controller daemon is not active

Possible Causes



Sometimes when you reboot the system, srcmstr does not start automatically. Because of this the lssrc command will not work, and you will experience issues with the services managed by SRC.

solution:

step 1  :   go to the /dev directory.
              # cd /dev

Check for the /dev/SRC file and the /dev/.SRC-unix directory:

# ls -l SRC          # this is the socket file used by SRC
# ls -ld .SRC-unix

If the /dev/SRC file or the /dev/.SRC-unix directory does not exist, reboot your system by running the shutdown -Fr command. The shutdown -Fr command automatically creates the /dev/SRC file when the system comes up.


step 2:

If srcmstr still does not start after the reboot, check the /var directory:
verify that the /var/adm/SRC directory exists.

If it exists, check the contents of the directory.

This directory contains two files, "watch_list" and "active_list".

Check the watch_list file:

# cd /var/adm/SRC
# cat watch_list

10  1   /dev/SRC

Also check the srcmstr entries in /etc/inittab and /etc/rc.tcpip and confirm they are intact.
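
A quick way to confirm the inittab entry is with lsitab (the respawn entry shown below is the standard one; the exact comment text may vary slightly by AIX level):

# lsitab srcmstr
srcmstr:23456789:respawn:/usr/sbin/srcmstr # System Resource Controller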

If this directory does not exist, the best way is to restore the /var/adm/SRC directory from backup
and then restart the system using "shutdown -Fr".

It worked for me.

If you are still facing issues after that, it is better to go for a full restoration from backup.



glossary

  1.  /dev/.SRC-unix            Specifies the location for temporary socket files.
  2.  /dev/.SRC-unix/SRCD       Specifies the AF_UNIX socket file for the srcd daemon.
  3.  /var/adm/SRC/active_list  Contains a list of active subsystems.
      Caution: The structure of this file is internal to SRC and is subject to change.
  4.  /var/adm/SRC/watch_list   Contains the SRC watch entries (such as the /dev/SRC socket shown in step 2); like active_list, its structure is internal to SRC and subject to change.

Friday, October 19, 2012

Setting timeout parameters for ssh session



To remove the shell's own timeout parameter and control the idle timeout at the ssh level instead:

ClientAliveInterval 900 means that if the session is idle for 900 seconds (15 minutes), the session will automatically time out.




step 1 : comment out the following parameter in /etc/profile:

# TMOUT=40

step 2 : in the /etc/ssh/sshd_config file, uncomment and set the "ClientAliveInterval" parameter:

ClientAliveInterval 900
# TCPKeepAlive yes
# ClientAliveCountMax 0

step 3 : stop the ssh service and start it again.

     # stopsrc -s sshd
     # startsrc -s sshd

 NOTE:  if you do not comment out the TMOUT parameter, the session will still time out after being idle for 40 seconds (TMOUT is set in seconds), even if you have set the parameters mentioned above in the sshd_config file.



To set the timeout parameter to 10 minutes for the ssh connection:

step 1: comment out the TMOUT parameter in /etc/profile.
step 2: set "ClientAliveInterval 600"; the session will time out if idle for 10 minutes.
step 3: stop the sshd service using "stopsrc -s sshd".
step 4: start the sshd service using "startsrc -s sshd".
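
To verify that the new value is in place and sshd is back up, a quick check (the PID in the output is illustrative):

     # grep -i clientaliveinterval /etc/ssh/sshd_config
     ClientAliveInterval 600
     # lssrc -s sshd
     Subsystem         Group            PID          Status
      sshd             ssh              262150       active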

Sunday, October 07, 2012

vio server patch management

Step 1

  *   Download the service pack from the Fix Central website.
  *   Go through the Readme and Release Notes for Virtual I/O Server 2.2, VIOS 2.2.0.10 Fix Pack 24.
  *   Copy the service pack to /tmp:


                         #  scp -r VIOS_2.2.0.10-FP24 padmin@<server-ip>:/tmp

Step 2


*      Take the configuration backup.

*      Check the current ioslevel:

                     $ ioslevel
                     2.1.3.0

*      Take the VIOS mksysb:

              $ backupios -file <file-location> -mksysb

*      Take the viosbr backup:

                   $ viosbr -backup -file /home/padmin/abhi_config

*      Check the readability of the viosbr image (viosbr appends .tar.gz to the file name you give it):

                     $ viosbr -view -file /home/padmin/abhi_config.tar.gz

If everything is OK, go ahead with the upgrade activity.

step 3

Before applying any new package, we need to commit the older ioslevel.
                 
                       $ updateios  -commit
                       There are no uncommitted updates.

step 4

Start the VIOS upgrade using the updateios command:

                         $ updateios -install -accept  -dev /tmp/VIOS_2.2.0.10-FP24/


Output looks like this:

*******************************************************************************
installp PREVIEW:  installation will not actually occur.
+-----------------------------------------------------------------------------+
    Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

WARNINGS
--------
Problems described in this section are not likely to be the source of any
immediate or serious failures, but further actions may be necessary or
desired.

Already Installed
-----------------
The following filesets which you selected are either already installed
or effectively installed through superseding filesets.

tpc.rte 4.1.0.97                            # TPC Runtime Install Files
tivoli.tsm.client.msg.ZH_TW 6.1.0.0         # TSM Client Messages - Chines...
tivoli.tsm.client.msg.ZH_CN 6.1.0.0         # TSM Client Messages - Chines...
tivoli.tsm.client.msg.RU_RU 6.1.0.0         # TSM Client Messages - Russian
tivoli.tsm.client.msg.PT_BR 6.1.0.0         # TSM Client Messages - Portug...
tivoli.tsm.client.msg.PL_PL 6.1.0.0         # TSM Client Messages - Polish
tivoli.tsm.client.msg.KO_KR 6.1.0.0         # TSM Client Messages - Korean
tivoli.tsm.client.msg.JA_JP 6.1.0.0         # TSM Client Messages - Japanese
tivoli.tsm.client.msg.IT_IT 6.1.0.0         # TSM Client Messages - Italian

---------------------------some output truncated---------------------------


step 5


Once the patch upgrade is complete you will get the prompt back.
Now check the "Installation Summary" and look at the Result field.
If every entry shows SUCCESS, you have successfully upgraded the server.

Installation Summary
--------------------
Name                              Level          Part       Event       Result
-------------------------------------------------------------------------------
bos.rte.install                   6.1.6.1        USR        APPLY       SUCCESS
bos.rte.install                   6.1.6.1        ROOT       APPLY       SUCCESS
vios.agent.rte                    1.0.0.0        USR        APPLY       SUCCESS
vios.agent.rte                    1.0.0.0        ROOT       APPLY       SUCCESS
pool.basic.rte                    6.1.6.0        USR        APPLY       SUCCESS
pool.basic.rte                    6.1.6.0        ROOT       APPLY       SUCCESS
xlC.aix61.rte                     11.1.0.1       USR        APPLY       SUCCESS
tivoli.tivguid                    1.3.3.1        USR        APPLY       SUCCESS
tivoli.tivguid                    1.3.3.1        ROOT       APPLY       SUCCESS
sysmgt.cimserver.pegasus.rte      2.9.0.20       USR        APPLY       SUCCESS
sysmgt.cimserver.pegasus.rte      2.9.0.20       ROOT       APPLY       SUCCESS
sysmgt.cim.providers.osbase       1.2.8.20       USR        APPLY       SUCCESS
sysmgt.cim.providers.smash        1.2.8.20       USR        APPLY       SUCCESS
sysmgt.cim.providers.scc          1.2.8.20       USR        APPLY       SUCCESS
sysmgt.cim.providers.metric       1.2.8.20       USR        APPLY       SUCCESS


step 6

Now you need to shut down the VIO server.
Before shutting down, check whether all VIO clients are shut down; if not, shut down
all VIO client partitions first.

Once done, restart the VIO server:

                             $ shutdown -restart


step 7


*       Once the VIO server is up, log in to the server and accept the license:

                                 $ license -accept

*       Check the ioslevel:

                                   $ ioslevel
                                   2.2.0.10-FP-24

That means the VIOS upgrade is complete.

step 8

Verify the mappings and configuration details against the configuration backup taken in step 2.
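
A quick way to compare is to list the current mappings on the upgraded VIOS and check them against the viosbr backup (standard padmin commands; the backup file name is the example used in step 2):

                     $ lsmap -all          # virtual SCSI mappings
                     $ lsmap -all -net     # shared Ethernet mappings
                     $ viosbr -view -file /home/padmin/abhi_config.tar.gz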


 


VIO server introduction

The Virtual I/O Server is software that is located in a logical partition.

This software facilitates the sharing of physical I/O resources between client logical partitions within the server.

The Virtual I/O Server provides virtual SCSI target, virtual fibre channel, Shared Ethernet Adapter, and PowerVM™ Active Memory Sharing capability to client logical partitions within the system.

As a result, client logical partitions can share SCSI devices, fibre channel adapters, Ethernet adapters, and expand the amount of memory available to logical partitions using paging space devices.

The Virtual I/O Server software requires that the logical partition be dedicated solely for its use.

--------------------------------------------------------------------------------------------------------------------------

Why use a VIO server



Using the Virtual I/O Server facilitates the following functions:

  •  Sharing of physical resources between logical partitions on the system

  •  Creating logical partitions without requiring additional physical I/O resources

  •  Creating more logical partitions than there are I/O slots or physical devices available with the ability for logical partitions to have dedicated I/O, virtual I/O, or both

  •  Maximizing use of physical resources on the system

  •  Helping to reduce the Storage Area Network (SAN) infrastructure



--------------------------------------------------------------------------------------------------------------------------
                              



Requirements for a VIO server

 Minimum hardware requirements to create the Virtual I/O Server partition:
  1.   POWER5 server:        the VIO-capable machine.
  2.   Hardware Management Console (HMC):   to create the partition and assign resources.
  3.   Storage adapter:      the server partition needs at least one storage adapter.
  4.   Physical disk:        a disk large enough to make sufficient-sized logical volumes on it.
  5.   Ethernet adapter:     allows network traffic to be routed securely from a virtual Ethernet to a real network adapter.
  6.   Memory:               at least 128 MB of memory.

--------------------------------------------------------------------------------------------------------------------------

 NOTE:  The Virtual I/O Server provides the virtual SCSI (VSCSI) target and Shared Ethernet Adapter virtual I/O functions to client partitions. This is accomplished by assigning physical devices to the Virtual I/O Server partition, then configuring virtual adapters on the clients to allow communication between the client and the Virtual I/O Server.



-----------------------------------------------------------------------------------------------------------------------



SUPPORTED OS AS VIO CLIENT

The Virtual I/O Server supports the following operating systems as virtual I/O clients:
 •    AIX
 •    SUSE LINUX Enterprise Server 9 for POWER
 •    Red Hat Enterprise Linux AS for POWER Version 3
 •    Red Hat Enterprise Linux AS for POWER Version 4



Capabilities of the Virtual I/O Server

  •    Ethernet adapter sharing
  •    Virtual SCSI disk
  •    Interaction with AIX and Linux partitions

  •    The Virtual I/O Server provides a restricted, scriptable command line user interface (CLI). All aspects of Virtual I/O Server administration are accomplished through the CLI, including:
                 Device management (physical, virtual, LVM)
                 Network configuration
                 Software installation and update
                 Security
                 User management
                 Installation of OEM software
                 Maintenance tasks
  •    The creation and deletion of the virtual client and server adapters is managed by the HMC GUI and POWER5 server firmware. The association between the client and server adapters is defined when the virtual adapters are created.



VIRTUAL SCSI

 •       Virtual SCSI is based on a client/server relationship.
 •       The virtual I/O resources are assigned using an HMC.
 •       Virtual SCSI enables sharing of adapters as well as disk devices.
 •       Dynamic LPAR operations allowed.
 •       Dynamic mapping between physical and virtual resources on the virtual I/O server.

NOTE:  Virtual SCSI is based on a client/server relationship. The virtual I/O server owns the physical resources and acts as the server; the logical partitions access the virtual I/O resources provided by the virtual I/O server as clients.

The virtual I/O resources are assigned using an HMC, and virtual SCSI enables sharing of adapters as well as disk devices.

>>  To make a physical or a logical volume available to a client partition, it is assigned to a virtual SCSI server adapter in the virtual I/O server partition. The client partition accesses its assigned disks through a virtual SCSI client adapter; it sees standard SCSI devices and LUNs through this virtual adapter.

>>  Virtual SCSI resources can be assigned and removed dynamically. On the HMC, virtual SCSI target and server adapters can be assigned to and removed from a partition using dynamic logical partitioning. The mapping between physical and virtual resources on the virtual I/O server can also be changed dynamically.

>>  A disk owned by the virtual I/O server can either be exported and assigned to a client partition as a whole, or it can be split into several logical volumes, each of which can then be assigned to a different partition. The export itself is done with the mkvdev command, as sketched below.



 EFFECT ON PERFORMANCE IF USING VSCSI

VSCSI uses additional CPU cycles when processing I/O requests, because there is an overhead associated with POWER Hypervisor calls and because of the several steps involved in getting an I/O request from the initiator to the target partition. As a result, VSCSI devices will not give the same performance as dedicated devices.

The use of virtual SCSI will roughly double the amount of CPU time needed to perform I/O compared to directly attached storage. This CPU load is split between the Virtual I/O Server and the virtual SCSI client.

Performance is expected to degrade when multiple partitions share a physical disk, and the actual impact on overall system performance will vary by environment. The base case is one physical disk dedicated to a single partition.



VIRTUAL ETHERNET


 •         Enables inter-partition communication.
 •         In-memory point-to-point connections.
 •         Physical network adapters are not needed.
 •         Similar to high-bandwidth Ethernet connections.
 •         Supports multiple protocols (IPv4, IPv6, and ICMP).
 •         No Advanced POWER Virtualization feature required.


Virtual Ethernet enables inter-partition communication without the need for physical network adapters in each partition.

Virtual Ethernet allows the administrator to define in-memory point-to-point connections between partitions.

These connections exhibit characteristics similar to high-bandwidth Ethernet connections and support multiple protocols (IPv4, IPv6, and ICMP).

Virtual Ethernet requires a POWER5 system with either AIX 5L V5.3 or the appropriate level of Linux, plus a Hardware Management Console (HMC) to define the Virtual Ethernet devices.

Virtual Ethernet does not require the purchase of any additional features or software, such as the Advanced POWER Virtualization feature.

Virtual Ethernet is also called "Virtual LAN" or even "VLAN", which can be confusing because these terms are also used in network topology. But Virtual Ethernet, which uses virtual devices, has nothing to do with the VLANs known from network topology, which divide a LAN into further sub-LANs.




 Viosbr Command


The viosbr command is used to back up all the relevant data needed to recover a VIOS after an installation.

The viosbr command backs up the following details:

1.   Logical devices, such as storage pools, clusters (VIOS Version 2.2.0.11, Fix Pack 24, Service Pack 1, or later), file-backed storage pools, the virtual media repository, and paging space devices.

2.   Virtual devices, such as Etherchannel, Shared Ethernet Adapters, virtual server adapters, and virtual server fibre channel adapters.

3.   Device attributes for devices like disks, optical devices, tape devices, fscsi controllers, Ethernet adapters, Ethernet interfaces, and logical Host Ethernet Adapters.

Note that viosbr restores only this configuration data (the mappings and device settings); it does not restore the VIOS operating system itself. For that you need backupios, described next.
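
The corresponding restore is a single command (a sketch; the file name follows the viosbr backup example from the patch-management post above):

$ viosbr -restore -file /home/padmin/abhi_config.tar.gz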





Backupios  command


backupios is used to back up the whole VIOS operating system so that it can be restored later.

The backupios command creates a backup of the Virtual I/O Server and places it onto a file system, bootable tape, or DVD. You can use this backup to reinstall a system to its original state after it has been corrupted.

Again, be sure to back up your VIOS environment with both viosbr and backupios. Together, they give you the tools you need should something go wrong.
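
Typical invocations look like this (a sketch; the target paths are examples — without -mksysb the command packages everything as nim_resources.tar, with -mksysb it writes a plain mksysb image):

$ backupios -file /mnt/vios_backup
$ backupios -file /mnt/vios_backup.mksysb -mksysb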


Updateios  command

The updateios command is used to install fixes, or to update the Virtual I/O Server to the latest maintenance level.

To commit the installed updates, type the following command:


$ updateios -commit


To update the Virtual I/O Server to the latest level, where the updates are located on the mounted file system /home/padmin/update, type the following command:


$ updateios -dev /home/padmin/update

 

Saturday, May 05, 2012

Configuring the rsh service in AIX


Step 1

 Uncomment the following line in "/etc/inetd.conf":

shell   stream  tcp6    nowait  root    /usr/sbin/rshd         rshd



#  vi /etc/inetd.conf



## service  socket  protocol  wait/  user    server    server program
##  name     type             nowait         program     arguments
##
ftp     stream  tcp6    nowait  root    /usr/sbin/ftpd         ftpd
telnet  stream  tcp6    nowait  root    /usr/sbin/telnetd      telnetd -a 

shell   stream  tcp6    nowait  root    /usr/sbin/rshd         rshd


step 2
After making the changes in "/etc/inetd.conf", you need to refresh the inetd daemon:

  # refresh -s inetd



Step 3

 
Add the hosts from which you want to connect to the .rhosts file on the server:

#  cd ~

# vi .rhosts

server1
server2
server3

Also check the permissions of this file; they should be 600.

* Make sure that both files (/etc/hosts.equiv and /.rhosts) have
permissions of 600; they are ignored otherwise.

For non-root users you need to add the hosts to the /etc/hosts.equiv file;
/.rhosts is used for root rsh attempts.
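
To set the permissions and then test the setup (server1 is one of the hosts listed above; <server-ip> is the server just configured):

# chmod 600 /.rhosts /etc/hosts.equiv

and, from server1:

# rsh <server-ip> date        # should print the date without asking for a password
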
password-less ssh authentication

why use it?

1. Suppose you are a system admin and you jump from one server to another frequently. That means every time you jump you have to give the password, which can be a pain if the passwords are complex.

2. Suppose you are scheduling cron jobs and backup scripts that need to log in to remote servers. In this situation you can use password-less authentication.


How to configure password-less ssh authentication?

Step 1: First we need to create the public and private keys using the "ssh-keygen" command on server1.



[abhi@server1] $   ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/abhi/.ssh/id_rsa): [Press enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/abhi/.ssh/id_rsa.
Your public key has been saved in /home/abhi/.ssh/id_rsa.pub.

The key fingerprint is:
34:b3:de:af:56:68:18:18:34:d5:de:67:2f:df:35:f7 abhi@server1


This command will create two files in the ".ssh" directory inside your home directory (in this case /home/abhi/.ssh):


 1.   id_rsa      --  the private key; this stays on server1.
 2.   id_rsa.pub  --  the public key; this is the file that gets copied to remote servers.


Step 2: Copy the public key to the second server (let its IP be 192.168.20.1) using the "ssh-copy-id" command.

[abhi@server1] $ ssh-copy-id -i  ~/.ssh/id_rsa.pub  192.168.20.1

abhi@server2's password:
Now try logging into the machine, with "ssh 'remote-host'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Note: ssh-copy-id appends the keys to the remote host's .ssh/authorized_keys file.
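
If ssh-copy-id is not available (on stock AIX it often isn't), the same thing can be done manually (a sketch; adjust the user and IP to your setup):

[abhi@server1] $ cat ~/.ssh/id_rsa.pub | ssh abhi@192.168.20.1 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys; chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys'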

Step 3: Log in to the remote host (192.168.20.1) without entering the password:
 
[abhi@server1]  $ ssh  192.168.20.1
Last login: Sun Apr 16 12:18:12 2012 from 192.168.20.1

It doesn't ask for a password.


cheers                                                                                                                                                                              

Tuesday, October 04, 2011

asynchronous I/O (aioserver)


creating raw logical volumes

Any logical volume that doesn't have a filesystem on it is a raw logical volume.

Some applications, like Oracle and Informix, need and use raw LVs.

*** Every logical volume you create has two entries in the /dev directory: one block device and one character device.

Suppose your application team, e.g. the Oracle team, asks for some raw LVs.

step 1:  let me create one LV named "list".
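
(A minimal sketch of the creation itself, assuming the LV goes in rootvg with 10 logical partitions; use whatever volume group and size your application team asks for:)

 # mklv -y list rootvg 10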

 step 2: go to the /dev directory and check that the device files are available.

 #cd /dev


# ls -l list
brw-rw----   1 root     system       10, 11 Oct 04 15:27 list
# ls -l rlist
crw-rw----   1 root     system       10, 11 Oct 04 15:27 rlist
#

Here we see that the block device has the same name as the logical volume, "list", while the character device is named "rlist".

step 3:  now we have to change the ownership of the rlist file to oracle:oracle, so that it can be used by the Oracle team.

# chown oracle:oracle /dev/rlist


Wednesday, August 17, 2011

N_Port ID Virtualization (NPIV)


Through NPIV, the partitions of a managed system can access SAN storage directly through the same physical fibre channel adapter.

ex...
         Suppose you have assigned 200 GB of storage to the VIO server. You can assign this 200 GB to the logical partitions by mapping LVs or PVs to the virtual SCSI adapters of the particular partitions. But what happens when a single LPAR needs 100 GB and there is only 50 GB of storage left in the VIO server partition? Again you have to assign a LUN to the VIO server and then create LVs to provide the extra space. This can be tough and time-consuming if there are many servers. To reduce this overhead, NPIV was introduced, through which we can assign the storage directly to the partitions.


 SAN  ------------->  LUN -------------->MAPPED TO THE PORT OF PHYSICAL FC ADAPTER





* Each virtual fibre channel adapter on each client logical partition receives a pair of unique WWPNs. The client logical partition uses one WWPN to log in to the SAN at any given time. The other WWPN is used when you move the client logical partition to another managed system.

* Using their unique WWPNs and the virtual fibre channel connections to the physical fibre channel adapter, the operating systems that run in the client logical partitions discover, instantiate, and manage their physical storage located on the SAN.


To enable N_Port ID Virtualization (NPIV) on the managed system, you create the required virtual fibre channel adapters and connections as follows:

1. Using the HMC, create virtual fibre channel adapters on the VIO server logical partition and map them to virtual fibre channel adapters on the client logical partitions.

2. When you create a virtual fibre channel adapter on a client logical partition, the HMC generates a pair of unique WWPNs for the client virtual fibre channel adapter. You can see these through the HMC by viewing the properties of the client FC adapter.

3. We need to check whether the HBA port is connected to a SAN switch on which NPIV is enabled. For that we run the lsnports command on the VIO server:

 If the fabric parameter is "1", NPIV is supported.
 If it is "0", NPIV is not supported.



$ lsnports


name             physloc                fabric
fcs0             ---                    1
fcs1             ---                    1
fcs2             ---                    1


Here the fabric parameter is set to "1"; that means NPIV is supported and we can do the mapping.

Now check the virtual FC adapters on the VIO server that you have created:

$ lsdev -vpd | grep vfchost
vfchost0
vfchost1

For mapping a virtual FC adapter to a physical FC adapter, use the "vfcmap" command:

$ vfcmap -vadapter vfchost1 -fcp fcs1

To check that the mapping has been done correctly and that the clients are able to log in to the SAN:

$ lsmap -vadapter vfchost1 -npiv
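
On the AIX client you can confirm the virtual FC adapter and the WWPN it logs in with using lscfg (fcs0 is an example device name; the WWPN shows up in the Network Address field):

# lscfg -vpl fcs0 | grep "Network Address"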

Thursday, August 11, 2011

micro partitioning.

With micro-partitioning we distribute the processing capability of one or more physical processors among the partitions.

Using micro-partitioning we can increase the overall utilization of the processor resources within the system.


Suppose we have 4 processors, i.e. 4 processing units (PU), and four partitions, with 1 PU assigned to each. You find that on every partition about 50% of the processor is unutilized, and now you want to create some more partitions. The options are:

1. Add more processors to the server, which adds extra cost or burden to your firm, or

2. Free some processing units from all the LPARs, since they are at most 50% utilized. You can do this by a DLPAR operation through the HMC if you don't want to reboot.

You can reassign the processors according to your need. Here I have reassigned the processing units so that those processors are utilized more efficiently, and the freed PU can be assigned to the new partitions I want to create (see the HMC CLI sketch after the table):

lpar1      minimum 0.1      desired 0.3      maximum 0.5
lpar2      minimum 0.1      desired 0.2      maximum 0.3
lpar3      minimum 0.1      desired 0.4      maximum 0.6
lpar4      minimum 0.1      desired 0.3      maximum 0.6
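
The same reassignment can be scripted from the HMC command line with chhwres (a sketch; p570-sys and lpar5 are example names for the managed system and a new partition):

chhwres -r proc -m p570-sys -o r -p lpar1 --procunits 0.7     # remove 0.7 PU from lpar1
chhwres -r proc -m p570-sys -o a -p lpar5 --procunits 0.3     # add 0.3 PU to the new lpar5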


PROCESSING UNITS (PU) -  the capability of assigning less than one processor to a partition is called micro-partitioning.
                          For allocating less than one processor, we use processing units (PU).

1 processor corresponds to 1 PU.

** We can assign a minimum of 1/10, i.e. 0.1 PU, to a partition.
** The minimum granularity for assigning extra PU is 0.01.

what are minimum, desired, and maximum?

minimum --  the partition will not start if this much PU is not available
desired --  the partition will use this much PU, if available
maximum --  the partition can be increased up to this amount using DLPAR



In the partition profile's processor settings on the HMC, click "Advanced" to reach the capped/uncapped options.

what are capped and uncapped?

capped --- the processing capability can never exceed the entitled (assigned) processing capability of the partition.

uncapped -- the processing capability can exceed the entitled capacity when resources are available in the shared processor pool and the partition is eligible to run.

* The higher the uncapped weight of a partition, the more processing units it will receive.
* The uncapped weight ranges between 0 and 255; the default is 128.

power hypervisor

The POWER Hypervisor is the firmware layer sitting between the hosted operating systems and the server hardware.

Suppose you have a p570 box in which you have created 4 logical partitions and assigned resources to each of them. The POWER Hypervisor keeps track of the resources allocated to each partition and also makes sure that partitions cannot access another partition's assigned resources.

** The POWER Hypervisor enforces partition integrity by providing a security layer between logical partitions.

** It provides a VLAN channel between logical partitions, which helps reduce the need for physical Ethernet adapters.

** It also monitors the service processor (SP). If the SP is lost, it performs a reset/reload operation; if that does not correct the problem, it notifies the operating system.

The POWER Hypervisor provides the following types of virtual I/O adapters:
  1. virtual SCSI
  2. virtual Ethernet
  3. virtual FC
  4. virtual console

what is virtual scsi?

For virtualization of storage, the POWER Hypervisor gives you the virtual SCSI mechanism.

A virtual SCSI adapter is needed for this, and it is defined in the VIO server partition.

There are two types of virtual SCSI adapters:
 virtual client SCSI adapter
 virtual server SCSI adapter

All the physical SCSI storage devices are assigned to the VIO server.

how are the adapters connected?

1. Using a DLPAR operation, you can create the virtual SCSI server adapter without rebooting the server.
2. On the client partitions you likewise define the virtual client SCSI adapters through a DLPAR operation. The mapping between the two must be correct.

Example of mapping the adapters:

While defining the server adapter, note the following:
 slot no.                               3
 remote partition                       partition2
 remote partition virtual slot number   4

For the client adapter:
 slot no.                               4
 remote partition                       vios
 remote partition slot no.              3



3. After that, run cfgdev on the VIO server. One virtual host (vhost) device will become available on the VIO server, representing that particular partition.

4. You can map a logical volume (or another backing device) to a particular partition's virtual host device, and it will be available as a disk on the client partition after running the cfgmgr command there (see the sketch below).
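
Put together, the flow looks like this (a sketch; vhost0 and lv01 are example names — the first three commands run in the VIOS restricted shell, the last two on the AIX client):

$ cfgdev                                  # on the VIOS: discover the new vhost adapter
$ lsdev -virtual | grep vhost             # confirm the vhost device is available
$ mkvdev -vdev lv01 -vadapter vhost0      # map the logical volume to the client's vhost

# cfgmgr                                  # on the AIX client: discover the new disk
# lspv                                    # verify the new hdisk appears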


 Virtual SCSI can be used as:
1. a virtual disk
2. a virtual optical device (vtopt)
3. a virtual tape

VIRTUAL ETHERNET

The POWER Hypervisor provides a virtual Ethernet switch function that allows partitions on the same server to communicate quickly and securely without any physical interconnection.

*** Virtual Ethernet is part of the base system configuration and doesn't need a VIO server.