Before you start, make sure you have the latest version of autocluster.

To download autocluster, do this:

  git clone git://git.samba.org/tridge/autocluster.git autocluster

To update it, run "git pull" in the autocluster directory.

To set up a virtual cluster for SoFS with autocluster, follow these steps:

1) Download and install the latest kvm-userspace and kvm tools from

     http://kvm.qumranet.com/kvmwiki/Code

   You need an x86_64 Linux box to run this on. I use an Ubuntu Hardy
   system. It also needs plenty of memory - at least 3G to run a SoFS
   cluster.

2) Install a squid proxy on your host. See host_setup/etc/squid/squid.conf
   for a sample config suitable for a virtual cluster. Make sure it caches
   large objects and has plenty of space. This is needed to keep
   downloading all the RPMs to each node sane.

3) Set up a DNS server on your host. See host_setup/etc/bind/ for a
   sample config that is suitable. It needs to redirect DNS queries for
   your SoFS virtual domain to your Windows domain controller.

4) Download a RHEL-5.2 install ISO. You can get it from
   fscc-install.mainz.de.ibm.com in
   /instgpfs/instsrv/dists/ISO/RHEL5.2-Server-20080430.0-x86_64-DVD.iso

5) Use ./create_base.sh to create the base install image. The install
   will take about 10 to 15 minutes, and you will see the packages
   installing in your terminal.

6) When that has finished, 'destroy' that machine (i.e. power it off)
   with:

     virsh destroy SoFS-1.5-base

   Then I recommend you mark that base image immutable, like this:

     chattr +i /virtual/SoFS-1.5-base.img

   That will ensure it won't change. This is a precaution, as the image
   will be used as a basis file for the per-node images, and if it
   changes your cluster will become corrupt.

7) Now run ./create_cluster, specifying a cluster name.
   For example:

     ./create_cluster c1

   That will create your cluster nodes and the TSM server node.

8) Now boot your cluster nodes, like this:

     virsh start c1n1
     virsh start c1n2
     virsh start c1n3
     virsh start c1n4
     virsh start c1tsm

   The most useful virsh commands are:

     start    : boot a node
     shutdown : graceful shutdown of a node
     destroy  : power off a node immediately

9) You can watch boot progress like this:

     tail -f /var/log/kvm/serial.c1*

   All the nodes have serial consoles, making it easier to capture
   kernel panic messages and to watch the nodes via ssh.

10) Now you can ssh into your nodes. You may like to look at the small
    set of scripts in root's home directory on the nodes. In particular:

      setup_tsm_server.sh : run this on the TSM node to set up the TSM server
      setup_tsm_client.sh : run this on the GPFS nodes to set up HSM
      mknsd.sh            : this sets up the local shared disks as GPFS NSDs
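The per-node boot commands in step 8 can be wrapped in a small helper so you do not have to type five virsh commands per cluster. This is just a sketch: the start_cluster function and its optional dry-run argument are my own invention, but the node names follow the c1 layout used above.

```shell
# Sketch of a helper that starts every node of a cluster created by
# ./create_cluster. The node-name pattern (<cluster>n1..n4 plus
# <cluster>tsm) matches the c1 example above; adjust it if your
# cluster uses a different layout.
start_cluster() {
    cluster="$1"
    runner="${2:-virsh}"   # pass "echo" as 2nd arg for a dry run
    for node in "${cluster}n1" "${cluster}n2" "${cluster}n3" \
                "${cluster}n4" "${cluster}tsm"; do
        "$runner" start "$node"
    done
}
```

Usage: "start_cluster c1" to boot all nodes, or "start_cluster c1 echo" to just print the start commands without touching virsh.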
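For quick reference, the base-image preparation in steps 5 and 6 boils down to a short shell session. The image name and path below are the defaults used in this guide and may differ on your setup; these commands must be run on the KVM host.

```shell
# Base-image preparation (steps 5 and 6). Assumes the default image
# name (SoFS-1.5-base) and path (/virtual) used above.
./create_base.sh                      # kickstart install, ~10-15 minutes
virsh destroy SoFS-1.5-base           # power the base VM off
chattr +i /virtual/SoFS-1.5-base.img  # make the base image immutable
# If you later need to modify the base image, clear the flag first:
#   chattr -i /virtual/SoFS-1.5-base.img
```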