Autocluster is a set of scripts for building virtual clusters to test
clustered Samba. It uses Linux's libvirt and KVM virtualisation.

Autocluster is a collection of scripts, templates and configuration
files that allow you to create a cluster of virtual nodes very
quickly. You can create a cluster from scratch in less than 30
minutes. Once you have a base image you can then recreate a cluster
or create new virtual clusters in minutes.

The current implementation creates virtual clusters of RHEL5 nodes.

* INSTALLING AUTOCLUSTER
* HOST MACHINE SETUP
* CREATING A CLUSTER
* BOOTING A CLUSTER
* POST-CREATION SETUP
* CONFIGURATION
* DEVELOPMENT HINTS

INSTALLING AUTOCLUSTER
======================

Before you start, make sure you have the latest version of
autocluster. To download autocluster do this:

  git clone git://git.samba.org/tridge/autocluster.git autocluster

Or to update it, run "git pull" in the autocluster directory.

You probably want to add the directory where autocluster is installed
to your PATH, otherwise things may quickly become tedious.
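
For example, if you cloned autocluster into your home directory
(adjust the path to wherever you actually installed it), you could add
something like this to your shell profile:

  export PATH="$PATH:$HOME/autocluster"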

HOST MACHINE SETUP
==================

This section explains how to set up a host machine to run virtual
clusters generated by autocluster.

1) Install kvm, libvirt, qemu-nbd, nbd-client and expect.

   Autocluster creates virtual machines that use libvirt to run under
   KVM. This means that you will need to install both KVM and
   libvirt on your host machine. You will also need the qemu-nbd and
   nbd-client programs, which autocluster uses to loopback-nbd-mount
   the disk images when configuring each node. Expect is used by the
   "waitfor" script and should be available for installation from
   your distribution.

   For RHEL5/CentOS5, useful packages for both kvm and libvirt can
   be found at:

     http://www.lfarkas.org/linux/packages/centos/5/x86_64/

   You will need to install a matching kmod-kvm package to get the
   kernel module.

   RHEL5.4 ships with KVM but it doesn't have the SCSI disk
   emulation that autocluster uses by default. There are also
   problems when autocluster uses virtio on RHEL5.4's KVM. You
   should use a version from lfarkas.org instead. Hopefully this
   will change in a future RHEL release.

   qemu-nbd is in the kvm package.

   Unless you can find an RPM for nbd-client, you will need to build
   it from the source available at:

     http://sourceforge.net/projects/nbd/

   Useful packages ship with Fedora 10 (Cambridge) and later.

   qemu-nbd is in the kvm package.

   nbd-client is in the nbd package.
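
   On Fedora, for example, something like the following should pull
   in the pieces mentioned above (exact package names can vary between
   releases):

     yum install kvm libvirt nbd expect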

   Useful packages ship with Ubuntu 8.10 (Intrepid Ibex) and later.

   qemu-nbd is in the kvm package but is called kvm-nbd, so you
   need to set the QEMU_NBD configuration variable (see the example
   below).

   nbd-client is in the nbd-client package.
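
   For example, you could add a line like this to your autocluster
   configuration file (a sketch - the exact name or path depends on
   your distro's packaging):

     QEMU_NBD="kvm-nbd"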

   For other distributions you'll have to backport distro sources or
   compile from upstream source as described below.

   * For KVM see the "Downloads" and "Code" sections at:

       http://www.linux-kvm.org/
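
   * For libvirt see:

       http://libvirt.org/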

   * As mentioned above, nbd can be found at:

       http://sourceforge.net/projects/nbd/

   You will need to add the autocluster directory to your PATH.

   You will need to configure the right kvm networking setup. The
   files in host_setup/etc/libvirt/qemu/networks/ should help. This
   command will install the right networks for kvm:

     rsync -av --delete host_setup/etc/libvirt/qemu/networks/ /etc/libvirt/qemu/networks/

   After this you might need to reload libvirt:

     /etc/init.d/libvirtd reload

   You might also need to set:

     VIRSH_DEFAULT_CONNECT_URI=qemu:///system

   in your environment so that virsh does KVM/QEMU things by default.
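
   For example, with a Bourne-style shell you could add this to your
   shell profile:

     export VIRSH_DEFAULT_CONNECT_URI=qemu:///system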

2) You need a caching web proxy on your local network. If you don't
   have one, then install a squid proxy on your host. See
   host_setup/etc/squid/squid.conf for a sample config suitable for a
   virtual cluster. Make sure it caches large objects and has plenty
   of space. This will be needed to make downloading all the RPMs to
   each node much faster.
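
   For example, squid.conf settings along these lines (the sizes here
   are only a suggestion - see the sample config mentioned above):

     cache_dir ufs /var/spool/squid 30000 16 256
     maximum_object_size 200 MB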

   To test your squid setup, run a command like this:

     http_proxy=http://10.0.0.1:3128/ wget <some-url>

   Check your firewall setup. If you have problems accessing the
   proxy from your nodes (including from kickstart postinstall) then
   check it again! Some distributions install nice "convenient"
   firewalls by default that might block access to the squid port
   from the nodes. On a current version of Fedora you may be
   able to run system-config-firewall-tui to reconfigure the
   firewall.
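
   For example, on an iptables-based firewall a rule roughly like
   this (assuming your virtual network is 10.0.0.0/24 and squid
   listens on port 3128) would let the nodes through:

     iptables -I INPUT -p tcp -s 10.0.0.0/24 --dport 3128 -j ACCEPT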

3) Set up a DNS server on your host. See host_setup/etc/bind/ for a
   sample config that is suitable. It needs to redirect DNS queries
   for your virtual domain to your Windows domain controller.
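
   For example, a forward zone in named.conf roughly like this (the
   domain name and address are placeholders for your own virtual
   domain and domain controller):

     zone "virtual.example.com" {
         type forward;
         forward only;
         forwarders { 10.0.0.10; };
     };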

4) Download a RHEL install ISO.
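
   For example, if you keep ISOs in /data/iso (a hypothetical path),
   you could later point autocluster at that directory with the
   ISO_DIR configuration variable; check "autocluster --help" for the
   exact ISO-related options:

     ISO_DIR="/data/iso"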

CREATING A CLUSTER
==================

A cluster comprises a single base disk image, a copy-on-write disk
image for each node and some XML files that tell libvirt about each
node's virtual hardware configuration. The copy-on-write disk images
save a lot of disk space on the host machine because they each use the
base disk image - without them the disk image for each cluster node
would need to contain the entire RHEL install.

The cluster creation process can be broken down into 2 main steps:

1) Creating the base disk image.

2) Creating the per-node disk images and corresponding XML files.

However, before you do this you will need to create a configuration
file. See the "CONFIGURATION" section below for more details.

Here are more details on the "create cluster" process. Note that
unless you have done something extra special then you'll need to run
all of this as root.

1) Create the base disk image using:

     ./autocluster create base

   The first thing this step does is to check that it can connect to
   the YUM server. If this fails make sure that there are no
   firewalls blocking your access to the server.

   The install will take about 10 to 15 minutes and you will see the
   packages installing in your terminal.

   The installation process uses kickstart. If your configuration
   uses a SoFS release then the last stage of the kickstart
   configuration will be a postinstall script that installs and
   configures packages related to SoFS. The choice of postinstall
   script is set using the POSTINSTALL_TEMPLATE variable, allowing you
   to adapt the installation process for different types of clusters.

   It makes sense to install packages that will be common to all
   nodes into the base image. This saves time later when you're
   setting up the cluster nodes. However, you don't have to do this
   - you can set POSTINSTALL_TEMPLATE to "" instead - but then you
   will lose the quick cluster creation/setup that is a major feature
   of autocluster.

   When that has finished you should mark that base image immutable,
   like this:

     chattr +i /virtual/ac-base.img

   That will ensure it won't change. This is a precaution as the
   image will be used as a basis file for the per-node images, and if
   it changes your cluster will become corrupt.

2) Now run "autocluster create cluster", specifying a cluster
   name. For example:

     autocluster create cluster c1

   This will create and install the XML node descriptions and the
   disk images for your cluster nodes, and any other nodes you have
   configured. Each disk image is initially created as an "empty"
   copy-on-write image, which is linked to the base image. Those
   images are then loopback-nbd-mounted and populated with system
   configuration files and other potentially useful things (such as
   scripts).

BOOTING A CLUSTER
=================

At this point the cluster has been created but isn't yet running.
Autocluster provides a command called "vircmd", which is a thin
wrapper around libvirt's virsh command. vircmd takes a cluster name
instead of a node/domain name and runs the requested command on all
nodes in the cluster.

1) Now boot your cluster nodes like this:

     vircmd start c1

   The most useful vircmd commands are:

     start    : boot a node
     shutdown : graceful shutdown of a node
     destroy  : power off a node immediately

2) You can watch boot progress like this:

     tail -f /var/log/kvm/serial.c1*

   All the nodes have serial consoles, making it easier to capture
   kernel panic messages and watch the nodes via ssh.

POST-CREATION SETUP
===================

Now you have a cluster of nodes, which might have a variety of
packages installed and configured in a common way. Now that the
cluster is up and running you might need to configure specialised
subsystems like GPFS or Samba. You can do this by hand or use the
sample scripts/configurations that are provided.

1) Now you can ssh into your nodes. You may like to look at the
   small set of scripts in /root/scripts on the nodes. In particular:

     mknsd.sh           : sets up the local shared disks as GPFS NSDs
     setup_gpfs.sh      : sets up GPFS, creates a filesystem etc
     setup_samba.sh     : sets up Samba and many other system components
     setup_tsm_server.sh: run this on the TSM node to setup the TSM server
     setup_tsm_client.sh: run this on the GPFS nodes to setup HSM

   To set up a SoFS system you will normally need to run
   setup_gpfs.sh and setup_samba.sh.
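
   For example, something like this on the appropriate node(s) (a
   sketch - check the scripts themselves for any arguments or
   node-specific steps they expect):

     /root/scripts/setup_gpfs.sh
     /root/scripts/setup_samba.sh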

2) If using the SoFS GUI, then you may want to lower the memory it
   uses so that it fits easily on the first node. Just edit this
   file on the first node:

     /opt/IBM/sofs/conf/overrides/sofs.javaopt

3) For automating the SoFS GUI, you may wish to install the iMacros
   extension for Firefox, and look at some sample macros I have put
   in the imacros/ directory of autocluster. They will need editing
   for your environment, but they should give you some hints on how
   to automate the final GUI stage of the installation of a SoFS
   cluster.

CONFIGURATION
=============

Autocluster uses configuration files containing Unix shell style
variables. For example,

  FIRSTIP=30

indicates that the last octet of the first IP address in the cluster
will be 30. If an option contains multiple words then they will be
separated by underscores ('_'), as in ISO_DIR.

All options have an equivalent command-line option, such as:

  --firstip=30

Command-line options are lowercase. Words are separated by dashes,
as in --iso-dir.

Normally you would use a configuration file with variables so that you
can repeat steps easily. The command-line equivalents are useful for
trying things out without resorting to an editor. You can specify a
configuration file to use on the autocluster command-line using the -c
option. For example:

  autocluster -c config-foo create base

If you don't provide a configuration file then autocluster will
look for a file called "config" in the current directory.

You can also use environment variables to override the default values
of configuration variables. However, both command-line options and
configuration file entries will override environment variables.

Potentially useful information:

* Use "autocluster --help" to list all available command-line options
  - all the items listed under "configuration options:" are the
  equivalents of the settings for config files. This output also
  shows descriptions of the options.

* You can use the --dump option to check the current value of
  configuration variables. This is most useful when used in
  combination with grep:

    autocluster --dump | grep ISO_DIR

  In the past we recommended using --dump to create an initial
  configuration file. Don't do this - it is a bad idea! There are a
  lot of options and you'll create a huge file that you don't
  understand and can't debug!

* Configuration options are defined in config.d/*.defconf. You
  shouldn't need to look in these files... but sometimes they contain
  comments about options that are too long to fit into help strings.

* I recommend that you aim for the smallest possible configuration
  file. Start with just the essentials (see the example below, which
  sets only a release and FIRSTIP) and move on from there.

* Use the --with-release option on the command-line or the
  with_release function in a configuration file to get default values
  for building virtual clusters for releases of particular "products".
  Currently there are only release definitions for SoFS.

  For example, you can set up default values for SoFS-1.5.3 by running:

    autocluster --with-release=SoFS-1.5.3 ...

  Equivalently you can use the following syntax in a configuration
  file:

    with_release "SoFS-1.5.3"

  So the smallest possible config file would have something like this
  as the first line and would then set FIRSTIP:

    with_release "SoFS-1.5.3"

    FIRSTIP=30

  Add other options as you need them.

  The release definitions are stored in releases/*.release. The
  available releases are listed in the output of "autocluster --help".

  NOTE: Occasionally you will need to consider the position of
  with_release in your configuration. If you want to override options
  handled by a release definition then you will obviously need to set
  them later in your configuration. This will be the case for most
  options you will want to set. However, some options will need to
  appear before with_release so that they can be used within a release
  definition - the most obvious one is the (rarely used) RHEL_ARCH
  option, which is used in the default ISO setting for each release.
  If things don't work as expected use --dump to confirm that
  configuration variables have the values that you expect.

* The NODES configuration variable controls the types of nodes that
  are created. At the time of writing, the default value is:

    NODES="rhel_base:0-3"

  This means that you get 4 nodes, at IP offsets 0, 1, 2, & 3 from
  FIRSTIP, all part of the CTDB cluster. That is, with standard
  settings and FIRSTIP=35, 4 nodes will be created in the IP range
  10.0.0.35 to 10.0.0.38.

  The SoFS releases use a default of:

    NODES="tsm_server:0 sofs_gui:1 sofs_front:2-4"

  which should produce a set of nodes the same as the old SoFS
  default. You can add extra rhel_base nodes if you need them for
  test clients or some other purpose:

    NODES="$NODES rhel_base:7,8"

  This produces an additional 2 base RHEL nodes at IP offsets 7 & 8
  from FIRSTIP. Since sofs_* nodes are present, these base nodes will
  not be part of the CTDB cluster - they're just extra.
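
  In other words, with the SoFS default above the resulting value is
  equivalent to:

    NODES="tsm_server:0 sofs_gui:1 sofs_front:2-4 rhel_base:7,8"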

  For many standard use cases the nodes specified by NODES can be
  modified by setting NUMNODES, WITH_SOFS_GUI and WITH_TSM_NODE.
  However, these options can't be used to create nodes without
  specifying IP offsets - except WITH_TSM_NODE, which checks to see if
  IP offset 0 is vacant. Therefore, for many uses you can ignore the
  NODES variable.

  However, NODES is the recommended mechanism for specifying the nodes
  that you want in your cluster. It is powerful, easy to read and
  centralises the information in a single line of your configuration
  file.
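
  Putting the pieces above together, a small configuration file for a
  SoFS cluster with a couple of extra base nodes might look something
  like this (the values are only examples):

    with_release "SoFS-1.5.3"

    FIRSTIP=35
    NODES="$NODES rhel_base:7,8"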

DEVELOPMENT HINTS
=================

The -e option provides support for executing arbitrary bash code.
This is useful for testing and debugging.

One good use of this option is to test template substitution using the
function substitute_vars(). For example:

  ./autocluster --with-release=SoFS-1.5.3 -e 'CLUSTER=foo; DISK=foo.qcow2; UUID=abcdef; NAME=foon1; set_macaddrs; substitute_vars templates/node.xml'

This prints templates/node.xml with all appropriate substitutions
done. Some internal variables (e.g. CLUSTER, DISK, UUID, NAME) are
given fairly arbitrary values but the various MAC address strings are
set using the function set_macaddrs().

The -e option is also useful when writing scripts that use
autocluster. Given the complexities of the configuration system you
probably don't want to parse configuration files yourself to determine
the current settings. Instead, you can ask autocluster to tell you
useful pieces of information. For example, say you want to script
creating a base disk image and you want to ensure the image is
marked immutable afterwards:

  base_image=$(autocluster -c $CONFIG -e 'echo $VIRTBASE/$BASENAME.img')
  chattr -V -i "$base_image"
  if autocluster -c $CONFIG create base ; then
      chattr -V +i "$base_image"
  fi

Note that the command that autocluster should run is enclosed in
single quotes. This means that $VIRTBASE and $BASENAME will be expanded
within autocluster after the configuration file has been loaded.
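
The same pattern works for other configuration variables. For
example, to find the first IP address that a cluster's configuration
will use:

  firstip=$(autocluster -c $CONFIG -e 'echo $FIRSTIP')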