+INTRODUCTION
+============
+
+Autocluster is a script for building virtual clusters to test
+clustered Samba.
+
+It uses Vagrant (with the libvirt plugin) and Ansible to build and
+configure a cluster.
+
+This software is freely distributable under the GNU General Public
+License, a copy of which you should have received with this software
+(in a file called COPYING).
+
+CONTENTS
+========
+
+* SUPPORTED PLATFORMS
+
+* INSTALLING AUTOCLUSTER
+
+* HOST MACHINE SETUP
+
+* CREATING A CLUSTER
+
+* DESTROYING A CLUSTER
+
+* DEVELOPMENT HINTS
+
+
+SUPPORTED PLATFORMS
+===================
+
+Tested host platforms:
+
+* CentOS 7
+
+Tested guest platforms:
+
+* CentOS 7
+
+Tested cluster filesystems:
+
+* GPFS
+
+INSTALLING AUTOCLUSTER
+======================
+
Before you start, make sure you have the latest version of
autocluster. To download autocluster do this:
- git clone git://git.samba.org/tridge/autocluster.git autocluster
+ git clone git://git.samba.org/autocluster.git
+
+You probably want to add the directory where autocluster is installed
+to your PATH, otherwise things may quickly become tedious.
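For example, assuming autocluster was cloned to ~/autocluster (a hypothetical location; adjust to suit), it can be added to PATH like this:

```shell
# Hypothetical clone location - adjust to wherever you put autocluster
AUTOCLUSTER_DIR="$HOME/autocluster"

# Prepend it to PATH for the current shell session
export PATH="$AUTOCLUSTER_DIR:$PATH"
```

Adding the export line to ~/.bashrc (or your shell's equivalent) makes it persistent.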
+
+Packages can also be built and installed.
+
+
+HOST MACHINE SETUP
+==================
+
+1. Install Ansible
+
+2. Run: autocluster host <platform> setup
+
+ Currently the only supported <platform> is "centos7"
+
+ This will
+
+ * Install and configure several packages, including Vagrant
+
+ * Assume you want to serve repositories to guests from /home/mediasets/.
+
+ * Create a libvirt storage pool at /virtual/autocluster/ for VM
+ images/files.
-Or to update it, run "git pull" in the autocluster directory
+ * Create an SSH key for autocluster
-To setup a virtual cluster for SoFS with autocluster follow these steps:
+ For speed, you may wish to mirror the guest distribution somewhere
+ under /home/mediasets/ or on another nearby machine.
- 1) download and install the latest kvm-userspace and kvm tools
- from http://kvm.qumranet.com/kvmwiki/Code
+Depending on how your host machine is set up, you may need to run
+autocluster commands as root.
- You need a x86_64 Linux box to run this on. I use a Ubuntu Hardy
- system. It also needs plenty of memory - at least 3G to run a SoFS
- cluster.
+CREATING A CLUSTER
+==================
- You may also find you need a newer version of libvirt. If you get
- an error when running create_base.sh about not handling a device
- named 'sda' then you need a newer libvirt. Get it like this:
+Configuration file
+------------------
- git clone git://git.et.redhat.com/libvirt.git
+The configuration file is a YAML file. If your cluster is to be
+called "foo" then the configuration file must be "foo.yml" in the
+current directory.
- When building it, you probably want to configure it like this:
+To see what options to set, try this:
- ./configure --without-xen --prefix=/usr
+ # autocluster cluster foo defaults
- 2) You need a cacheing web proxy on your local network. If you don't
- have one, then install a squid proxy on your host. See
- host_setup/etc/squid/squid.conf for a sample config suitable for a
- virtual cluster. Make sure it caches large objects and has plenty
- of space. This will be needed to make downloading all the RPMs to
- each client sane
+This will show the default configuration. This is the only cluster
+command that does not need a cluster configuration file.
- To test your squid setup, run a command like this:
+It may also be worth looking at the file defaults.yml, which
+includes some useful comments.
- http_proxy=http://10.0.0.1:3128/ wget http://9.155.61.11/mediasets/SoFS-daily/
+Add your updated settings to foo.yml. Try to set the minimum number
+of options, to keep the configuration file small. See example.yml.
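As an illustration, a minimal foo.yml overriding two settings might look like the sketch below. The option names are taken from the items described in this section; check them against the output of the defaults command above.

```yaml
# Illustrative foo.yml - a sketch, assuming these option names match
# those shown by "autocluster cluster foo defaults"
firstip: 30
node_list: [nas, nas, test]
```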
- 3) setup a DNS server on your host. See host_setup/etc/bind/ for a
- sample config that is suitable. It needs to redirect DNS queries
- for your SOFS virtual domain to your windows domain controller
+Most items are fairly obvious. However, here are some details:
- 4) download a RHEL-5.2 install ISO. You can get it from
- fscc-install.mainz.de.ibm.com in
- /instgpfs/instsrv/dists/ISO/RHEL5.2-Server-20080430.0-x86_64-DVD.iso
+* networks
- 5) create a 'config' file in the autocluster directory. I suggest you
- create it like this:
+ Default: 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24
- . config.sample
- MEM="what ever mem you want"
- KVM="path to your kvm"
-
- That way when you upgrade autocluster with "git pull" you will
- inherit the new addtions to config.sample
+ There should be at least 2 networks. The first network is a
+ private network, while the others can be used for CTDB public IP
+ addresses.
- Then look through config.sample and check for any config options
- you want to override. Add them to your config file.
+* firstip
- 6) use ./create_base.sh to create the base install image. The
- install will take about 10 to 15 minutes and you will see the
- packages installing in your terminal
+ Default: 20
- Before you start create_base.sh make sure your web proxy cache is
- authenticated with the Mainz BSO (eg. connect to
- https://9.155.61.11 with a web browser)
+ This is the final octet of the first IP address used on each network.
- 7) when that has finished, 'destroy' that machine (ie. power it off),
- with "virsh destroy SoFS-1.5-base"
+* node_list
- Then I recommend you mark that base image immutable like this:
+ Default: [nas, nas, nas, nas, test]
- chattr +i /virtual/SoFS-1.5-base.img
+ Each node is offset from firstip by its position in the list.
- That will ensure it won't change. This is a precaution as the
- image will be used as a basis file for the per-node images, and if
- it changes your cluster will become corrupt
+ The above default will result in 5 nodes.
- 8) now run ./create_cluster, specifying a cluster name. For example:
+ - The first 4 will be Clustered Samba NAS nodes (running CTDB,
+ Samba, NFS) with addresses on the first network from 10.0.0.20
+ to 10.0.0.23 (with similar static addresses on the other
+ networks).
- ./create_cluster c1
+ - The 5th node will be a minimally installed/configured test node
+ that can be used as a CTDB test client, with address 10.0.0.24.
- That will create your cluster nodes and the TSM server node
+ Valid node types are:
- 9) now boot your cluster nodes like this:
+ nas: Clustered Samba node with cluster filesystem, smbd, nfsd
+ ad: Samba Active Directory Domain Controller node
+  base: Base operating system node
+ build: Build node for CTDB packages
+ cbuild: Build node for Samba, with cluster filesystem installed
+ storage: Cluster filesystem node that doesn't directly provide NAS services
+ test: CTDB test node, with CTDB packages installed
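For instance, the node types above can be mixed in a node_list. The following is purely illustrative, describing a cluster with an AD DC alongside NAS nodes:

```yaml
# Illustrative node_list: an AD DC, 3 clustered Samba NAS nodes and
# a test client - 5 nodes, each offset from firstip by list position
node_list: [ad, nas, nas, nas, test]
```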
- ./vircmd start c1
+Cluster creation
+----------------
- The most useful vircmd commands are:
-
- start : boot a node
- shutdown : graceful shutdown of a node
- destroy : power off a node immediately
+In theory this is easy:
- 10) you can watch boot progress like this:
+ # autocluster cluster foo build
- tail -f /var/log/kvm/serial.c1*
+This runs several internal steps:
- All the nodes have serial consoles, making it easier to capture
- kernel panic messages and watch the nodes via ssh
+1. `destroy` - Destroy any existing cluster of the same name
+2. `generate` - Generate metadata (for Vagrant, Ansible, SSH) from the
+ configuration
+3. `create` - Create the cluster nodes (using Vagrant)
+4. `ssh_config` - Configure SSH to allow direct access to nodes as root
+5. `setup` - Set up each node according to its type (using Ansible)
- 11) now you can ssh into your nodes. You may like to look at the
- small set of scripts in roots home directory on the nodes for
- some scripts. In particular:
+DESTROYING A CLUSTER
+====================
- setup_tsm_server.sh: run this on the TSM node to setup the TSM server
- setup_tsm_client.sh: run this on the GPFS nodes to setup HSM
- mknsd.sh : this sets up the local shared disks as GPFS NSDs
+ # autocluster cluster foo destroy
+DEVELOPMENT HINTS
+=================
- 12) If using the SoFS GUI, then you may want to lower the memory it
- uses so that it fits easily on the first node. Just edit this
- file on the first node:
+The Ansible playbook for nodes has been structured in a way that
+should make it easy to add new platforms and cluster filesystems. Try
+to follow the pattern and keep task names as generic as possible.
- /opt/IBM/sofs/conf/overrides/sofs.javaopt
+To see facts about <node>:
- 13) For automating the SoFS GUI, you may wish to install the iMacros
- extension to firefox, and look at some sample macros I have put
- in the imacros/ directory of autocluster. They will need editing
- for your environment, but they should give you some hints on how
- to automate the final GUI stage of the installation of a SoFS
- cluster.
+ ansible -i <node>, all -m setup