Autocluster is a script for building virtual clusters to test
clustered Samba.

It uses Vagrant (with the libvirt plugin) and Ansible to build and
configure clusters.

This software is freely distributable under the GNU General Public
License, a copy of which you should have received with this software
(in a file called COPYING).
* INSTALLING AUTOCLUSTER

* DESTROYING A CLUSTER
Tested host platforms:

Tested guest platforms:

Tested cluster filesystems:
INSTALLING AUTOCLUSTER
======================

Before you start, make sure you have the latest version of
autocluster. To download autocluster, do this:

  git clone git://git.samba.org/autocluster.git

You probably want to add the directory where autocluster is installed
to your PATH, otherwise things may quickly become tedious.
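For example, assuming autocluster was cloned to ~/autocluster (adjust
the path to match your checkout), you could add something like this to
your shell startup file:

```shell
# Hypothetical location: adjust to wherever you cloned autocluster
export PATH="$HOME/autocluster:$PATH"
```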
Packages can also be built and installed.
2. Run: autocluster host <platform> setup

   Currently the only supported <platform> is "centos7".
This will do several things:

* Install and configure several packages, including Vagrant

* Assume you want to serve repositories to guests from /home/mediasets/

* Create a libvirt storage pool at /virtual/autocluster/ for VM
  disk images

* Create an SSH key for autocluster
For speed, you may wish to mirror the guest distribution somewhere
under /home/mediasets/ or on another nearby machine.

Depending on how your host machine is set up, you may need to run
autocluster commands as root.
The configuration file is a YAML file. If your cluster is to be
called "foo" then the configuration file must be "foo.yml" in the
current directory.
To see what options can be set, try this:

  # autocluster cluster foo defaults

This will show the default configuration. This is the only
cluster command that doesn't need a cluster configuration.

It may also be worth looking at the file defaults.yml, which
includes some useful comments.
Add updated settings to foo.yml. Try to set the minimum number of
options, to keep the configuration file small. See example.yml.
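As a sketch, a minimal foo.yml that overrides only a couple of options
might look like the following. The option names here (firstip,
node_list) are assumptions based on the defaults described below;
check the output of the defaults command for the authoritative list.

```yaml
# foo.yml - minimal example configuration
# Option names are assumptions; verify against
# "autocluster cluster foo defaults"
firstip: 20
node_list: [nas, nas, test]
```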
Most items are fairly obvious. However, here are some details:
Default: 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24

There should be at least 2 networks. The first network is a
private network, while the others can be used for CTDB public IP
addresses.
This is the final octet of the first IP address used on each network.
Default: [nas, nas, nas, nas, test]

Each node is offset from firstip by its position in the list.

The above default will result in 5 nodes:

- The first 4 will be Clustered Samba NAS nodes (running CTDB,
  Samba, NFS) with addresses on the first network from 10.0.0.20
  to 10.0.0.23 (with similar static addresses on the other
  networks).

- The 5th node will be a minimally installed/configured test node
  that can be used as a CTDB test client, with address 10.0.0.24.
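The address calculation above can be sketched as follows. This is a
toy illustration, not autocluster code: each node's final octet on the
first network is firstip plus the node's index in the node list.

```shell
# Toy illustration of node address assignment (not autocluster code):
# each node's final octet is firstip plus its position in the list.
firstip=20
i=0
for type in nas nas nas nas test; do
    echo "node$i ($type): 10.0.0.$((firstip + i))"
    i=$((i + 1))
done
```

With the default node list this yields 10.0.0.20 through 10.0.0.24,
matching the addresses described above.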
Valid node types are:

  nas:     Clustered Samba node with cluster filesystem, smbd, nfsd
  ad:      Samba Active Directory Domain Controller node
  base:    Base operating system node
  build:   Build node for CTDB packages
  cbuild:  Build node for Samba, with cluster filesystem installed
  storage: Cluster filesystem node that doesn't directly provide NAS services
  test:    CTDB test node, with CTDB packages installed
BUILDING A CLUSTER
==================

In theory this is easy:

  # autocluster cluster foo build

This runs several internal steps:

1. `destroy` - Destroy any existing cluster of the same name
2. `generate` - Generate metadata (for Vagrant, Ansible, SSH) from the
   configuration
3. `create` - Create the cluster nodes (using Vagrant)
4. `ssh_config` - Configure SSH to allow direct access to nodes as root
5. `setup` - Set up each node according to its type (using Ansible)
DESTROYING A CLUSTER
====================

  # autocluster cluster foo destroy
The Ansible playbook for nodes has been structured in a way that
should make it easy to add new platforms and cluster filesystems. Try
to follow the pattern and keep task names as generic as possible.
To see facts about <node>:

  ansible -i <node>, all -m setup