-BASIC SETUP
-===========
+INTRODUCTION
+============
-Before you start, make sure you have the latest version of
-autocluster. To download autocluster do this:
+Autocluster is a script for building virtual clusters to test
+clustered Samba.
+
+It uses Vagrant (with the libvirt plugin) and Ansible to build and
+configure a cluster.
- git clone git://git.samba.org/tridge/autocluster.git autocluster
+This software is freely distributable under the GNU public license, a
+copy of which you should have received with this software (in a file
+called COPYING).
-Or to update it, run "git pull" in the autocluster directory
+CONTENTS
+========
-To setup a virtual cluster for SoFS with autocluster follow these steps:
+* SUPPORTED PLATFORMS
+* INSTALLING AUTOCLUSTER
- 1) download and install the latest kvm-userspace and kvm tools
- from http://kvm.qumranet.com/kvmwiki/Code
+* HOST MACHINE SETUP
- You need a x86_64 Linux box to run this on. I use a Ubuntu Hardy
- system. It also needs plenty of memory - at least 3G to run a SoFS
- cluster.
+* CREATING A CLUSTER
- You may also find you need a newer version of libvirt. If you get
- an error when running create_base.sh about not handling a device
- named 'sda' then you need a newer libvirt. Get it like this:
+* DESTROYING A CLUSTER
- git clone git://git.et.redhat.com/libvirt.git
+* DEVELOPMENT HINTS
- When building it, you probably want to configure it like this:
- ./configure --without-xen --prefix=/usr
+SUPPORTED PLATFORMS
+===================
- You will need to configure the right kvm networking setup. The
- files in host_setup/etc/libvirt/qemu/networks/ should help. This
- command will install the right networks for kvm:
+Tested host platforms:
- rsync -av --delete host_setup/etc/libvirt/qemu/networks/ /etc/libvirt/qemu/networks/
+* CentOS 7
- 2) You need a cacheing web proxy on your local network. If you don't
- have one, then install a squid proxy on your host. See
- host_setup/etc/squid/squid.conf for a sample config suitable for a
- virtual cluster. Make sure it caches large objects and has plenty
- of space. This will be needed to make downloading all the RPMs to
- each client sane
+Tested guest platforms:
- To test your squid setup, run a command like this:
+* CentOS 7
- http_proxy=http://10.0.0.1:3128/ wget http://9.155.61.11/mediasets/SoFS-daily/
+Tested cluster filesystems:
+* GPFS
- 3) setup a DNS server on your host. See host_setup/etc/bind/ for a
- sample config that is suitable. It needs to redirect DNS queries
- for your SOFS virtual domain to your windows domain controller
+INSTALLING AUTOCLUSTER
+======================
+
+Before you start, make sure you have the latest version of
+autocluster. To download autocluster do this:
+ git clone git://git.samba.org/autocluster.git
- 4) download a RHEL-5.2 install ISO. You can get it from the install
- server in Mainz. See the FSCC wiki page on autocluster for
- details.
+You probably want to add the directory where autocluster is installed
+to your PATH, otherwise things may quickly become tedious.
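+
+For example, assuming autocluster was cloned to ~/autocluster, a line
+like this in your shell profile is enough (the path is illustrative):
+
+ export PATH=$HOME/autocluster:$PATH
+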
- 5) create a 'config' file in the autocluster directory. See the
- "CONFIGURATION" section below for more details.
+Packages can also be built and installed.
- 6) use "./autocluster create base" to create the base install image.
- The install will take about 10 to 15 minutes and you will see the
- packages installing in your terminal
- Before you start create base make sure your web proxy cache is
- authenticated with the Mainz BSO (eg. connect to
- https://9.155.61.11 with a web browser)
+HOST MACHINE SETUP
+==================
+1. Install Ansible
- 7) when that has finished I recommend you mark that base image
- immutable like this:
+2. Run: autocluster host <platform> setup
- chattr +i /virtual/SoFS-1.5-base.img
+ Currently the only supported <platform> is "centos7"
- That will ensure it won't change. This is a precaution as the
- image will be used as a basis file for the per-node images, and if
- it changes your cluster will become corrupt
+ This will:
+ * Install and configure several packages, including Vagrant
- 8) now run "./autocluster create cluster" specifying a cluster
- name. For example:
+ * Assume you want to serve repositories to guests from /home/mediasets/.
- ./autocluster create cluster c1
+ * Create a libvirt storage pool at /virtual/autocluster/ for VM
+ images/files.
- That will create your cluster nodes and the TSM server node
+ * Create an SSH key for autocluster
+ For speed, you may wish to mirror the guest distribution somewhere
+ under /home/mediasets/ or on another nearby machine (see the sketch
+ below).
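+
+For example, to prepare a CentOS 7 host and, optionally, mirror the
+guest distribution locally (the mirror URL below is a placeholder,
+not a real mirror):
+
+ # autocluster host centos7 setup
+ # rsync -av rsync://mirror.example.org/centos/7/ /home/mediasets/centos7/
+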
- 9) now boot your cluster nodes like this:
+Depending on how your host machine is set up, you may need to run
+autocluster commands as root.
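+
+For example, if your user lacks the necessary libvirt and Vagrant
+permissions, prefixing commands with sudo may be enough:
+
+ sudo autocluster cluster foo build
+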
- ./vircmd start c1
+CREATING A CLUSTER
+==================
- The most useful vircmd commands are:
-
- start : boot a node
- shutdown : graceful shutdown of a node
- destroy : power off a node immediately
+Configuration file
+------------------
+The configuration file is a YAML file. If your cluster is to be
+called "foo" then the configuration file must be "foo.yml" in the
+current directory.
- 10) you can watch boot progress like this:
+To see what options to set, try this:
- tail -f /var/log/kvm/serial.c1*
+ # autocluster cluster foo defaults
- All the nodes have serial consoles, making it easier to capture
- kernel panic messages and watch the nodes via ssh
+This will show the default configuration. This is the only
+cluster command that doesn't need a cluster configuration.
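+
+If you want a local copy of the defaults to refer to while editing,
+you can redirect the output to a file of your choosing, for example:
+
+ # autocluster cluster foo defaults > foo-defaults.yml
+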
+It may also be worth looking at the file defaults.yml, which
+includes some useful comments.
- 11) now you can ssh into your nodes. You may like to look at the
- small set of scripts in /root/scripts on the nodes for
- some scripts. In particular:
+Add updated settings to foo.yml. Try to set the minimum number of
+options to keep the configuration file small. See example.yml and
+the sketch after the option details below.
- setup_tsm_server.sh: run this on the TSM node to setup the TSM server
- setup_tsm_client.sh: run this on the GPFS nodes to setup HSM
- mknsd.sh : this sets up the local shared disks as GPFS NSDs
- setup_gpfs.sh : this sets GPFS, creates a filesystem etc,
- byppassing the SoFS GUI. Useful for quick tests.
+Most items are fairly obvious. However, here are some details:
+* networks
- 12) If using the SoFS GUI, then you may want to lower the memory it
- uses so that it fits easily on the first node. Just edit this
- file on the first node:
+ Default: 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24
- /opt/IBM/sofs/conf/overrides/sofs.javaopt
+ There should be at least 2 networks. The first network is a
+ private network, while the others can be used for CTDB public IP
+ addresses.
+* firstip
- 13) For automating the SoFS GUI, you may wish to install the iMacros
- extension to firefox, and look at some sample macros I have put
- in the imacros/ directory of autocluster. They will need editing
- for your environment, but they should give you some hints on how
- to automate the final GUI stage of the installation of a SoFS
- cluster.
+ Default: 20
-CONFIGURATION
-=============
+ This is the final octet of the first IP address used on each network.
-* See config.sample for an example of a configuration file. Note that
- all items in the sample file are commented out by default
+* node_list
-* Configuration options are defined in config.d/*.defconf. All
- configuration options have an equivalent command-line option.
+ Default: [nas, nas, nas, nas, test]
-* Use "autocluster --help" to list all available command-line options
- - all the items listed under "configuration options:" are the
- equivalents of the settings for config files.
+ Each node is offset from firstip by its position in the list.
-* Run "autocluster --dump > config.foo" (or similar) to create a
- config file containing the default values for all options that you
- can set. You can then delete all options for which you wish to keep
- the default values and then modify the remaining ones, resulting in
- a relatively small config file.
+ The above default will result in 5 nodes.
-* Use the --with-release option on the command-line or the
- with_release function in a configuration file to get default values
- for building virtual clusters for releases of particular "products".
- Currently there are only release definitions for SoFS.
+ - The first 4 will be Clustered Samba NAS nodes (running CTDB,
+ Samba, NFS) with addresses on the first network from 10.0.0.20
+ to 10.0.0.23 (with similar static addresses on the other
+ networks).
- For example, you can setup default values for SoFS-1.5.3 by running:
+ - The 5th node will be a minimally installed/configured test node
+ that can be used as a CTDB test client, with address 10.0.0.24.
- autocluster --with-release=SoFS-1.5.3 ...
+ Valid node types are:
- Equivalently you can use the following syntax in a configuration
- file:
+ nas: Clustered Samba node with cluster filesystem, smbd, nfsd
+ ad: Samba Active Directory Domain Controller node
+   base: Base operating system node
+ build: Build node for CTDB packages
+ cbuild: Build node for Samba, with cluster filesystem installed
+ storage: Cluster filesystem node that doesn't directly provide NAS services
+ test: CTDB test node, with CTDB packages installed
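+
+Putting the above together, a small foo.yml might look something like
+this (values are illustrative; see defaults.yml and example.yml for
+the exact option names and syntax):
+
+ firstip: 30
+ node_list: [nas, nas, ad, test]
+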
- with_release "SoFS-1.5.3"
+Cluster creation
+----------------
- The release definitions are stored in releases/*.release. The
- available releases are listed in the output of "autocluster --help".
+In theory this is easy:
- NOTE: Occasionally you will need to consider the position of
- with_release in your configuration. If you want to override options
- handled by a release definition then you will obviously need to set
- them later in your configuration. This will be the case for most
- options you will want to set. However, some options will need to
- appear before with_release so that they can be used within a release
- definition - the most obvious one is the (rarely used) RHEL_ARCH
- option, which is used in the default ISO setting for each release.
+ # autocluster cluster foo build
+
+This runs several internal steps:
+
+1. `destroy` - Destroy any existing cluster of the same name
+2. `generate` - Generate metadata (for Vagrant, Ansible, SSH) from the
+ configuration
+3. `create` - Create the cluster nodes (using Vagrant)
+4. `ssh_config` - Configure SSH to allow direct access to nodes as
+   root (see the example below)
+5. `setup` - Setup each node according to its type (using Ansible)
+
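+Once the build completes, step 4 means you should be able to log in
+to any node directly as root, for example:
+
+ # ssh root@<node>
+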
+DESTROYING A CLUSTER
+====================
+
+ # autocluster cluster foo destroy
DEVELOPMENT HINTS
=================
-The -e option provides support for executing arbitrary bash code.
-This is useful for testing and debugging.
-
-One good use of this option is to test template substitution using the
-function substitute_vars(). For example:
+The Ansible playbook for nodes has been structured in a way that
+should make it easy to add new platforms and cluster filesystems. Try
+to follow the pattern and keep task names as generic as possible.
- ./autocluster --with-release=SoFS-1.5.3 -e 'CLUSTER=foo; DISK=foo.qcow2; UUID=abcdef; NAME=foon1; set_macaddrs; substitute_vars templates/node.xml'
+To see facts about <node>:
-This prints templates/node.xml with all appropriate substitutions
-done. Some internal variables (e.g. CLUSTER, DISK, UUID, NAME) are
-given fairly arbitrary values but the various MAC address strings are
-set using the function set_macaddrs().
+ ansible -i <node>, all -m setup