This document is for xCAT 1.2.0 and VMware Workstation 5.0.0.
This HOWTO is not about running xCAT in a VMware session (that is no different than any other xCAT management node install). This HOWTO is about xCAT controlling VMware VMs.
xCAT supports VMware VMs like any other machine. rpower, getmacs, rcons (using serial console), etc. all function as expected.
vmCAT can be set up in the following ways:
Basic. All VMs run on the xCAT management node. This is not recommended for production environments; however, it is adequate for development environments. E.g., I use Basic setups to build diskless images quickly for the rest of the system. (One VMware license required.)
Advanced. All VMs are running on physical nodes other than the xCAT management node. (One VMware license required/node). Follow the Basic notes first before exploring Advanced.
Accelerated. Builds on Advanced by allowing running virtual machines to migrate from node to node.
FAQ:
Q: Why?
A: This HOWTO is a POT (Proof of Technology) to explore grid and utility computing using virtual machines with embedded applications and is not recommended for production environments. YMMV.
Q: Is there a practical use for this HOWTO now?
A: Yes. I use the Basic setup to test and develop xCAT installation support. For image development and testing VMware is a very nice tool.
Q: What about Xen?
A: xenCAT is under development.
Q: What about Windows?
A: lapCAT, in which xCAT runs in a Linux VMware guest on a Windows host and controls Windows to launch additional Linux VMware guests as xCAT nodes, is under development. lapCAT can be used to develop xCAT on airplanes.
Please read through all sets of instructions before setup.
Basic
Install xCAT 1.2.0.
Install VMware Workstation 5.0.0. (Build 13124 tested). Take the defaults.
Run VMware, create VMs. HINT: Create one, test it, and use copyvm.
Recommendations/Requirements:
Virtual Machine names and locations must not have any spaces. Since xCAT can image or network boot any type of OS consider using a generic name, e.g. vnode01, vnode02, etc...
You must use bridged networking.
VMware requires the creation of a virtual disk even in a diskless environment. Go ahead and create the disk, afterwards remove it if you like. It is so small, I would not worry about it.
After the VM is created edit and remove the Audio, CD-ROM, floppy, and USB devices. Remove HD if planning to use diskless.
While editing the VM add a serial port. The serial port must be a named pipe, the path must be unique (e.g. /root/.vmserial/vnode01). The first dropdown box should read "This end is the server.", the second dropdown box should read "The other end is an application.". Click on the "Advanced" button and select "Yield CPU on poll".
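The dialog settings above end up as serial0.* lines in the VM's .vmx file. The fragment below is a sketch of what to expect (the key names are from VMware Workstation 5-era configs and the mapping of the two dropdowns to tryNoRxLoss/pipe.endPoint is an assumption; verify by diffing a .vmx before and after making the change in the GUI):

```
serial0.present = "TRUE"
serial0.fileType = "pipe"
serial0.fileName = "/root/.vmserial/vnode01"
serial0.pipe.endPoint = "server"
serial0.tryNoRxLoss = "TRUE"
serial0.yieldOnMsrRead = "TRUE"
```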
Do not put VMs on NFS mount points. VMs on NFS will not suspend (problems with writing out RAM contents). Always use local disk (RAM disk OK).
Consider hardcoding the MAC address. (Optional for Basic, but required for the Advanced and Accelerated setups.)
As virtual machines wander from physical machine to physical machine, VMware will generate new MAC addresses. This is undesirable. Edit each .vmx file, remove all lines starting with ethernet0, then append to the end of the file:
ethernet0.present = "TRUE"
ethernet0.address = "00:50:56:XX:YY:ZZ"
ethernet0.addressType = "static"
Where XX is in the range 00-3f, and YY and ZZ are in the range 00-ff, e.g.:
00:50:56:00:00:01
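If you need to mint such addresses by hand, a small sketch with awk prints a random MAC in the VMware-reserved static range (check it against mac.tab and dhcpd.conf for duplicates before use):

```shell
# Print a random MAC in the VMware-reserved static range
# 00:50:56:XX:YY:ZZ, where XX must be 00-3f and YY/ZZ may be 00-ff.
awk 'BEGIN {
    srand()
    printf "00:50:56:%02x:%02x:%02x\n",
        int(rand() * 64), int(rand() * 256), int(rand() * 256)
}'
```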
Create the required directory(ies) for the serial port sockets, e.g.:
mkdir -p /root/.vmserial
Stage1 each VM: HINT: Create one, test it, and use copyvm.
Power on the VM.
Press F2 (quickly).
Right arrow to "boot".
Down arrow to "Network boot".
Press '+' until "Network boot" is at the top of the list.
Press F10 to save.
(Optional) If all your VMs are going to be identical, use the copyvm command. The source virtual machine must be powered off, e.g.:
# cd /root/vmware
# copyvm vnode01 vnode05
copyvm: vnode05 created from vnode01
Update /opt/xcat/etc/mac.tab with:
vnode5-eth0 00:50:56:36:7e:09
You can ignore the "Update" message and use getmacs instead.
NOTE: copyvm will create a random VMware approved static MAC. A check against /etc/dhcpd.conf and $XCATROOT/etc/mac.tab for duplicates is also performed.
NOTE: copyvm will fail if any of the support files (e.g. HD images) are not contained within the VM directory. This is normally not a problem.
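The duplicate check described in the NOTE above can also be reproduced by hand; a minimal sketch (the function name is mine, and the file arguments mirror the files copyvm checks):

```shell
# mac_in_use MAC FILE...: succeed if MAC already appears in any FILE.
# This mirrors the duplicate check copyvm performs against
# /etc/dhcpd.conf and $XCATROOT/etc/mac.tab.
mac_in_use() {
    mac="$1"; shift
    grep -qi "$mac" "$@" 2>/dev/null
}

# e.g.: mac_in_use 00:50:56:36:7e:09 /etc/dhcpd.conf $XCATROOT/etc/mac.tab
```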
VMs have virtual physical consoles that need to be redirected.
Edit $XCATROOT/etc/site.tab, add/edit vmwdisplay and set to an X display to monitor the VMware VGA consoles, e.g.:
vmwdisplay mercury:21
It is recommended that the X display for VMware VGA console management be a VNC session so that remote management is possible. E.g., if the display is mercury:21 then type from mercury as root:
cd /root
mkdir .vnc
cd .vnc
cp $XCATROOT/build/vnc/xstartup .
vncpasswd
vncserver :21 -geometry 1024x768 -depth 24
To remotely access the VGA consoles use any VNC client, if using Linux type:
vncviewer -shared display
e.g.:
vncviewer -shared mercury:21
To prevent VMware from prompting/warning, append to /etc/vmware/config (required):
msg.autoAnswer = "TRUE"
Add each VM node to the following xCAT tables like any other node:
$XCATROOT/etc/nodelist.tab
$XCATROOT/etc/nodetype.tab
$XCATROOT/etc/conserver.tab
For each VM node add an entry in $XCATROOT/etc/nodehm.tab, e.g.:
vnode01 vmware,vmware,NA,NA,NA,conserver,NA,NA,vmware,pxe,pcnet32,vnc,N,NA,NA,57600
For each VM node add an entry in $XCATROOT/etc/vmware.tab:
nodename vmware_host,VMX_path,serial_socket_path
e.g.,
vnode01 mercury,/root/vmware/vnode01/vnode01.vmx,/root/.vmserial/vnode01
For each VM node add an entry in $XCATROOT/etc/conserver.cf, e.g.:
vnode01:|conserver.vmserial vnode01::&:
Restart conserver:
service conserver restart
At this point the VMs should behave like any other node (assuming that xCAT is set up correctly). Test with:
getmacs noderange
makedhcp noderange
winstall singlenode
There is one exception to the above step. There is a new rpower function (suspend). You can suspend VMs with:
rpower noderange suspend
To resume:
rpower noderange on
Advanced
Complete the Basic setup first. Test.
Use xCAT to perform a diskful install (diskless not tested) on any nodes that will act as virtual node hosts. Install all packages.
Disable swap on each node. (Optional, but recommended for performance reasons, YMMV).
Install VMware Workstation on each virtual node host.
To automate the installation of VMware Workstation on a large number of nodes, do the following:
Obtain the proper number of VMware Workstation licenses (one/node).
From the management node copy the /usr/bin/vmware-config.pl script to /root and edit:
# cp /usr/bin/vmware-config.pl /root
# vi /root/vmware-config.pl
Search for "# NAT networking", type:
[ESC]/\# NAT networking
Then change:
# NAT networking
$answer = get_answer('Do you want to be able to use NAT networking '
. 'in your virtual machines? (yes/no)', 'yesno', 'yes');
to
# NAT networking
$answer = get_answer('Do you want to be able to use NAT networking '
. 'in your virtual machines? (yes/no)', 'yesno', 'no');
Next search for "show_EULA", type:
[ESC]/show_EULA()
Then change:
show_EULA()
to
#show_EULA()
The above changes allow VMware to be configured without prompts. Since you already installed VMware manually as part of the Basic setup, you have already agreed to the EULA; the other change makes NAT networking default to disabled.
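Instead of editing with vi, the same two changes can be scripted; a sketch assuming GNU sed (the patterns match the stock text quoted above, and the function name is mine):

```shell
# patch_vmware_config FILE: apply both vmware-config.pl edits with
# GNU sed -- default the NAT networking answer to "no", and comment
# out the show_EULA() call.
patch_vmware_config() {
    sed -i \
        -e "/# NAT networking/,/'yesno'/ s/'yesno', 'yes');/'yesno', 'no');/" \
        -e 's/^\([[:space:]]*\)show_EULA()/\1#show_EULA()/' \
        "$1"
}

# e.g.: patch_vmware_config /root/vmware-config.pl
```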
Create a script to do the following; you will obviously need to customize it for your environment:
rpm -i VMware-workstation-5.0.0-13124.i386.rpm
scp managementnode:/root/vmware-config.pl /usr/bin
vmware-config.pl -d -c
mkdir /root/.vmware /root/vmware /root/.vmserial
echo "msg.autoAnswer = \"TRUE\"" >>/etc/vmware/config
echo "pref.tip.startup = \"FALSE\"" >>/root/.vmware/preferences
scp managementnode:/root/.vmware/license.ws.5.0 /root/.vmware
That last command assumes that you have a site license allowing one instance/physical node.
Alternatively just install VMware manually per node.
Test the display. From any target physical node, export DISPLAY to the vmwdisplay value entered in $XCATROOT/etc/site.tab, e.g.:
export DISPLAY=mercury:21
Then run vmware manually (if you installed VMware manually enter the serial number), e.g.:
vmware
NOTE: If you get a "Xlib: connection to "host:display" refused by server" error, then you need to run xhost + from the host:display session, e.g. mercury:21.
Create virtual machines on the xCAT management node following the complete Basic setup. Test a few.
NOTE: VMs must not be on NFS exported directories (migration will fail).
NOTE: VMs must not be on NFS mount points (suspend will fail).
NOTE: The VMs must have hardcoded static MACs (see Basic setup).
Migrate physical machines:
rpower noderange off (required)
For each virtual machine type:
rmigrate vnode pnode
Where vnode is a VM defined in $XCATROOT/etc/vmware.tab and pnode is the physical node that this VM will run on, e.g.:
rmigrate vnode01 node01
rmigrate vnode02 node01
rmigrate vnode03 node02
rmigrate vnode04 node02
...
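A migration run like the one above can be scripted; a sketch that packs two VMs per physical node and echoes the rmigrate commands for review (the node names and the two-per-node packing are assumptions to adapt; remove the echo to actually migrate):

```shell
# Emit an rmigrate command for each of vnode01..vnode08, packing
# two VMs per physical node (node01..node04).
i=1
for vm in vnode01 vnode02 vnode03 vnode04 vnode05 vnode06 vnode07 vnode08; do
    pnode=$(printf "node%02d" $(( (i + 1) / 2 )))
    echo "rmigrate $vm $pnode"
    i=$((i + 1))
done
```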
NOTE: migration will remove the virtual machine from the management node. You can always migrate back if needed.
Test VMs on virtual node host, e.g.:
Open a VNC connection to the display defined as vmwdisplay in $XCATROOT/etc/site.tab. If not using VNC, monitor the physical display that the VM VGA consoles will be redirected to.
Power up the first node and watch:
rpower vnode01 on
Serial console redirection support. Since the VMs will be running on different physical nodes, a Conserver server must be set up on each node.
For each node create the required directories for the serial port sockets. E.g. /root/.vmserial.
mkdir -p /root/.vmserial
For each node setup conserver.cf. Type from each node (not the management node):
cp $XCATROOT/etc/conserver.cf /etc/conserver.cf
Edit conserver.cf
Remove physical node lines.
Define any missing VMs.
It is OK to set up entries for VMs not destined for this physical node. (It may actually be desirable and easier; all nodes can have the same conserver.cf.)
Change trusted: to IP of management node.
Example conserver.cf:
LOGDIR=/var/log/consoles
vnode01:|conserver.vmserial vnode01::&:
vnode02:|conserver.vmserial vnode02::&:
vnode03:|conserver.vmserial vnode03::&:
vnode04:|conserver.vmserial vnode04::&:
%%
trusted: 199.88.179.26
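Since all nodes can carry the same conserver.cf, the file is easy to generate; a sketch (adjust the VM count, LOGDIR, and trusted address for your site, and redirect the output to /etc/conserver.cf):

```shell
# Emit a conserver.cf covering vnode01..vnode08 that every virtual
# node host can share.
{
    echo "LOGDIR=/var/log/consoles"
    for i in 01 02 03 04 05 06 07 08; do
        echo "vnode$i:|conserver.vmserial vnode$i::&:"
    done
    echo "%%"
    echo "trusted: 199.88.179.26"
}
```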
Setup Conserver init scripts.
For RH, type:
cp $XCATROOT/rc.d/conserver /etc/rc.d/init.d
For SuSE type:
cp $XCATROOT/rc.d/conserver.suse /etc/init.d/conserver
cd /usr/sbin
ln -s -f /etc/init.d/conserver rcconserver
Edit the conserver init script (/etc/rc.d/init.d/conserver or /etc/init.d/conserver):
Change:
CONCONFIG=$CONSPREFIX/etc/conserver.cf
to
CONCONFIG=/etc/conserver.cf
Start Conserver:
RH:
service conserver start
chkconfig --level 345 conserver on
SuSE:
rcconserver start
chkconfig --level 345 conserver on
Test from management node:
console -Mpnode vnode
Where vnode is a VM defined in $XCATROOT/etc/vmware.tab and pnode is the physical node that this VM will run on. (Use ctrl-E c . to exit.)
From the management node edit $XCATROOT/etc/conserver.tab. For each remote virtual node (VM) change "localhost" to the physical node name hosting that VM, example conserver.tab:
node01 localhost,node01
node02 localhost,node02
node03 localhost,node03
node04 localhost,node04
node05 localhost,node05
vnode01 node01,vnode01
vnode02 node01,vnode02
vnode03 node02,vnode03
vnode04 node02,vnode04
vnode05 node03,vnode05
vnode06 node03,vnode06
vnode07 node04,vnode07
vnode08 node04,vnode08
For each VM node add/edit an entry in $XCATROOT/etc/nodehm.tab, e.g.:
vnode01 vmware,vmware,NA,NA,NA,conserver,NA,NA,rcons,pxe,pcnet32,vnc,N,NA,NA,57600
The only difference between Basic and Advanced is the MAC address collection method. It must be rcons, screen-scraped from the serial console; the vmware method is for local virtual nodes only.
Alternatively just edit $XCATROOT/etc/mac.tab manually since the MACs are located in human readable .vmx files, or collect MACs using the vmware method before migration.
At this point the VMs should behave like any other node (assuming that xCAT is set up correctly).
New functionality. You may use the rmigrate command to move any virtual node (VM) to any physical node, provided the physical node has been properly prepped to support VMware.
NOTE: The virtual node (VM) must be powered off first.
NOTE: To move running VMs read the Accelerated setup.
NOTE: To avoid problems with migration, edit $XCATROOT/etc/site.tab and change bufferedcons to no.
Accelerated
Accelerated setup is built on Basic and Advanced setups.
NOTE: This setup is to support the migration of active virtual machines.
Disable buffered console support. Roaming serial redirection for active VMs is not supported at this time. Edit $XCATROOT/etc/site.tab and change bufferedcons to no.
If you recently switched off bufferedcons, kill all wcons and rcons sessions, e.g.:
killall screen (careful)
Running VMs cannot be suspended and restarted on different processor types. At this time no checking is performed. E.g. a VM started on a PIII will fail to start on a P4.
At this time migration does not check that the target node has the proper resources to support the VM. E.g. memory and disk space.
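Since no processor-type checking is performed, a crude manual guard (my own sketch, not part of xCAT; assumes passwordless ssh to the nodes) is to compare CPU models before migrating a running VM:

```shell
# same_cpu NODE1 NODE2: succeed if both nodes report the same CPU
# model in /proc/cpuinfo.  A crude pre-migration guard for running
# VMs; assumes passwordless ssh to both nodes.
same_cpu() {
    a=$(ssh "$1" "grep -m1 'model name' /proc/cpuinfo")
    b=$(ssh "$2" "grep -m1 'model name' /proc/cpuinfo")
    [ "$a" = "$b" ]
}

# e.g.: same_cpu node01 node02 && rmigrate vnode01 node02
```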
Support
http://xcat.org
Egan Ford
egan@us.ibm.com
November 2005