1 <?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE refentry PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
4 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
8 <refentrytitle>ctdb</refentrytitle>
9 <manvolnum>7</manvolnum>
10 <refmiscinfo class="source">ctdb</refmiscinfo>
11 <refmiscinfo class="manual">CTDB - clustered TDB database</refmiscinfo>
16 <refname>ctdb</refname>
17 <refpurpose>Clustered TDB</refpurpose>
21 <title>DESCRIPTION</title>
24 CTDB is a clustered database component in clustered Samba that
25 provides a high-availability load-sharing CIFS server cluster.
29 The main functions of CTDB are:
35 Provide a clustered version of the TDB database with automatic
36 rebuild/recovery of the databases upon node failures.
42 Monitor nodes in the cluster and services running on each node.
48 Manage a pool of public IP addresses that are used to provide
49 services to clients. Alternatively, CTDB can be used with
Combined with a cluster filesystem, CTDB provides a full
high-availability (HA) environment for services such as clustered
Samba and NFS.
63 <title>ANATOMY OF A CTDB CLUSTER</title>
66 A CTDB cluster is a collection of nodes with 2 or more network
67 interfaces. All nodes provide network (usually file/NAS) services
68 to clients. Data served by file services is stored on shared
storage (usually a cluster filesystem) that is accessible by all nodes.
73 CTDB provides an "all active" cluster, where services are load
74 balanced across all nodes.
79 <title>Recovery Lock</title>
82 CTDB uses a <emphasis>recovery lock</emphasis> to avoid a
83 <emphasis>split brain</emphasis>, where a cluster becomes
84 partitioned and each partition attempts to operate
85 independently. Issues that can result from a split brain
86 include file data corruption, because file locking metadata may
87 not be tracked correctly.
91 CTDB uses a <emphasis>cluster leader and follower</emphasis>
92 model of cluster management. All nodes in a cluster elect one
93 node to be the leader. The leader node coordinates privileged
94 operations such as database recovery and IP address failover.
95 CTDB refers to the leader node as the <emphasis>recovery
96 master</emphasis>. This node takes and holds the recovery lock
97 to assert its privileged role in the cluster.
101 The recovery lock is implemented using a file residing in shared
102 storage (usually) on a cluster filesystem. To support a
recovery lock the cluster filesystem must support lock coordination.  See
105 <citerefentry><refentrytitle>ping_pong</refentrytitle>
106 <manvolnum>1</manvolnum></citerefentry> for more details.
110 If a cluster becomes partitioned (for example, due to a
111 communication failure) and a different recovery master is
112 elected by the nodes in each partition, then only one of these
113 recovery masters will be able to take the recovery lock. The
114 recovery master in the "losing" partition will not be able to
115 take the recovery lock and will be excluded from the cluster.
116 The nodes in the "losing" partition will elect each node in turn
117 as their recovery master so eventually all the nodes in that
118 partition will be excluded.
CTDB does sanity checks to ensure that the recovery lock is held as expected.
127 CTDB can run without a recovery lock but this is not recommended
128 as there will be no protection from split brains.
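The location of the recovery lock file is set with the
<varname>CTDB_RECOVERY_LOCK</varname> configuration variable (see
<citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
<manvolnum>5</manvolnum></citerefentry>).  A minimal sketch,
assuming the cluster filesystem is mounted at
<filename>/clusterfs</filename> on all nodes:
<screen format="linespecific">
CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/reclock
</screen>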
133 <title>Private vs Public addresses</title>
136 Each node in a CTDB cluster has multiple IP addresses assigned
A single private IP address that is used for communication with
other nodes in the cluster.
148 One or more public IP addresses that are used to provide
149 NAS or other services.
156 <title>Private address</title>
159 Each node is configured with a unique, permanently assigned
160 private address. This address is configured by the operating
161 system. This address uniquely identifies a physical node in
162 the cluster and is the address that CTDB daemons will use to
163 communicate with the CTDB daemons on other nodes.
166 Private addresses are listed in the file specified by the
167 <varname>CTDB_NODES</varname> configuration variable (see
168 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
169 <manvolnum>5</manvolnum></citerefentry>, default
170 <filename>/usr/local/etc/ctdb/nodes</filename>). This file contains the
171 list of private addresses for all nodes in the cluster, one
per line.  This file must be the same on all nodes in the cluster.
176 Private addresses should not be used by clients to connect to
177 services provided by the cluster.
180 It is strongly recommended that the private addresses are
181 configured on a private network that is separate from client
182 networks. This is because the CTDB protocol is both
183 unauthenticated and unencrypted. If clients share the private
184 network then steps need to be taken to stop injection of
185 packets to relevant ports on the private addresses. It is
186 also likely that CTDB protocol traffic between nodes could
187 leak sensitive information if it can be intercepted.
Example <filename>/usr/local/etc/ctdb/nodes</filename> for a four node cluster:
194 <screen format="linespecific">
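192.168.2.19
192.168.2.20
192.168.2.21
192.168.2.22
</screen>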
203 <title>Public addresses</title>
206 Public addresses are used to provide services to clients.
207 Public addresses are not configured at the operating system
208 level and are not permanently associated with a particular
209 node. Instead, they are managed by CTDB and are assigned to
210 interfaces on physical nodes at runtime.
213 The CTDB cluster will assign/reassign these public addresses
214 across the available healthy nodes in the cluster. When one
215 node fails, its public addresses will be taken over by one or
216 more other nodes in the cluster. This ensures that services
217 provided by all public addresses are always available to
clients, as long as there are nodes available that are capable of
hosting these addresses.
222 The public address configuration is stored in a file on each
223 node specified by the <varname>CTDB_PUBLIC_ADDRESSES</varname>
224 configuration variable (see
225 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
226 <manvolnum>5</manvolnum></citerefentry>, recommended
227 <filename>/usr/local/etc/ctdb/public_addresses</filename>). This file
228 contains a list of the public addresses that the node is
229 capable of hosting, one per line. Each entry also contains
the netmask and the interface to which the address should be assigned.
235 Example <filename>/usr/local/etc/ctdb/public_addresses</filename> for a
node that can host 4 public addresses, on 2 different interfaces:
239 <screen format="linespecific">
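10.1.1.1/24 eth1
10.1.1.2/24 eth1
10.1.2.1/24 eth2
10.1.2.2/24 eth2
</screen>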
247 In many cases the public addresses file will be the same on
248 all nodes. However, it is possible to use different public
249 address configurations on different nodes.
253 Example: 4 nodes partitioned into two subgroups:
255 <screen format="linespecific">
Node 0:/usr/local/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 1:/usr/local/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 2:/usr/local/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2

Node 3:/usr/local/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2
</screen>
273 In this example nodes 0 and 1 host two public addresses on the
274 10.1.1.x network while nodes 2 and 3 host two public addresses
275 for the 10.1.2.x network.
278 Public address 10.1.1.1 can be hosted by either of nodes 0 or
279 1 and will be available to clients as long as at least one of
these two nodes is available.
283 If both nodes 0 and 1 become unavailable then public address
284 10.1.1.1 also becomes unavailable. 10.1.1.1 can not be failed
over to nodes 2 or 3 since these nodes do not have this public
address configured.
289 The <command>ctdb ip</command> command can be used to view the
290 current assignment of public addresses to physical nodes.
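For example, on the four node cluster above, output along the
following lines might be seen (illustrative only; the exact format
varies between CTDB versions):
<screen format="linespecific">
Public IPs on node 0
10.1.1.1 0
10.1.1.2 1
10.1.2.1 2
10.1.2.2 3
</screen>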
297 <title>Node status</title>
300 The current status of each node in the cluster can be viewed by the
301 <command>ctdb status</command> command.
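For example, output similar to the following might be seen on a
healthy four node cluster (illustrative only; the exact fields
vary between CTDB versions):
<screen format="linespecific">
Number of nodes:4
pnn:0 192.168.2.19     OK (THIS NODE)
pnn:1 192.168.2.20     OK
pnn:2 192.168.2.21     OK
pnn:3 192.168.2.22     OK
Generation:1318025955
Recovery mode:NORMAL (0)
Recovery master:0
</screen>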
305 A node can be in one of the following states:
313 This node is healthy and fully functional. It hosts public
314 addresses to provide services.
320 <term>DISCONNECTED</term>
323 This node is not reachable by other nodes via the private
324 network. It is not currently participating in the cluster.
325 It <emphasis>does not</emphasis> host public addresses to
326 provide services. It might be shut down.
332 <term>DISABLED</term>
335 This node has been administratively disabled. This node is
336 partially functional and participates in the cluster.
337 However, it <emphasis>does not</emphasis> host public
338 addresses to provide services.
344 <term>UNHEALTHY</term>
347 A service provided by this node has failed a health check
348 and should be investigated. This node is partially
349 functional and participates in the cluster. However, it
350 <emphasis>does not</emphasis> host public addresses to
351 provide services. Unhealthy nodes should be investigated
352 and may require an administrative action to rectify.
<term>BANNED</term>
CTDB is not behaving as designed on this node.  For example,
362 it may have failed too many recovery attempts. Such nodes
363 are banned from participating in the cluster for a
364 configurable time period before they attempt to rejoin the
365 cluster. A banned node <emphasis>does not</emphasis> host
366 public addresses to provide services. All banned nodes
should be investigated and may require an administrative action
to rectify.
<term>STOPPED</term>
This node has been administratively excluded from the
cluster.  A stopped node does not participate in the cluster
379 and <emphasis>does not</emphasis> host public addresses to
380 provide services. This state can be used while performing
381 maintenance on a node.
387 <term>PARTIALLYONLINE</term>
390 A node that is partially online participates in a cluster
391 like a healthy (OK) node. Some interfaces to serve public
392 addresses are down, but at least one interface is up. See
393 also <command>ctdb ifaces</command>.
402 <title>CAPABILITIES</title>
405 Cluster nodes can have several different capabilities enabled.
406 These are listed below.
412 <term>RECMASTER</term>
415 Indicates that a node can become the CTDB cluster recovery
416 master. The current recovery master is decided via an
417 election held by all active nodes with this capability.
<term>LMASTER</term>
Indicates that a node can be the location master (LMASTER)
430 for database records. The LMASTER always knows which node
431 has the latest copy of a record in a volatile database.
<term>LVS</term>
Indicates that a node is configured in Linux Virtual Server
(LVS) mode.  In this mode the entire CTDB cluster uses one
single public address instead of using multiple public
addresses in failover mode.  This is an alternative to using
a load-balancing layer-4 switch.  See the
<citetitle>LVS</citetitle> section for more details.
457 The RECMASTER and LMASTER capabilities can be disabled when CTDB
is used to create a cluster spanning WAN links.  In this
459 case CTDB acts as a WAN accelerator.
<title>LVS</title>
LVS is a mode where CTDB presents one single IP address for the
entire cluster.  This is an alternative to using public IP
addresses and round-robin DNS to load balance clients across the
cluster.
This is similar to using a layer-4 load-balancing switch but with
some restrictions.
In this mode the cluster selects a set of nodes and load balances
all client access to the LVS address across this set.  The set
consists of all LVS-capable nodes that are HEALTHY, or, if no
HEALTHY nodes exist, all LVS-capable nodes regardless of health
status.  However, LVS will never load balance traffic to nodes
that are BANNED, STOPPED, DISABLED or DISCONNECTED.  The
<command>ctdb lvs</command> command is used to show which nodes
are currently load-balanced across.
One of these nodes is elected as the LVSMASTER.  This node
receives all traffic from clients coming in to the LVS address
and multiplexes it across the internal network to one of the
nodes that LVS is using.  When responding to the client, that
node sends the data back directly to the client, bypassing the
LVSMASTER node.  The command <command>ctdb lvsmaster</command>
will show which node is the current LVSMASTER.
502 The path used for a client I/O is:
506 Client sends request packet to LVSMASTER.
LVSMASTER passes the request on to one node across the internal
network.
517 Selected node processes the request.
522 Node responds back to client.
This means that all incoming traffic to the cluster will pass
through one physical node, which limits scalability.  You cannot
send more data to the LVS address than one physical node can
multiplex.  This means that you should not use LVS if your I/O
pattern is write-intensive, since you will be limited to the
network bandwidth that one node can handle.  LVS does work very
well for read-intensive workloads, where only smallish READ
requests go through the LVSMASTER bottleneck and the majority of
the traffic volume (the data in the read replies) goes straight
from the processing node back to the clients.  For read-intensive
I/O patterns you can achieve very high throughput.
544 Note: you can use LVS and public addresses at the same time.
548 If you use LVS, you must have a permanent address configured for
549 the public interface on each node. This address must be routable
and the cluster nodes must be configured so that all traffic
back to client hosts is routed through this interface.  This is
also required in order to allow samba/winbind on the node to
talk to the domain controller.  This LVS IP address cannot be
used to initiate outgoing traffic.
557 Make sure that the domain controller and the clients are
558 reachable from a node <emphasis>before</emphasis> you enable
559 LVS. Also ensure that outgoing traffic to these hosts is routed
560 out through the configured public interface.
564 <title>Configuration</title>
567 To activate LVS on a CTDB node you must specify the
568 <varname>CTDB_PUBLIC_INTERFACE</varname> and
569 <varname>CTDB_LVS_PUBLIC_IP</varname> configuration variables.
Setting the latter variable also enables the LVS capability on
the node.
576 <screen format="linespecific">
577 CTDB_PUBLIC_INTERFACE=eth1
578 CTDB_LVS_PUBLIC_IP=10.1.1.237
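Once the nodes have been restarted with this configuration, the
commands mentioned above can be used to check the LVS state, for
example (a sketch; output omitted):
<screen format="linespecific">
ctdb lvsmaster
ctdb lvs
</screen>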
586 <title>TRACKING AND RESETTING TCP CONNECTIONS</title>
589 CTDB tracks TCP connections from clients to public IP addresses,
590 on known ports. When an IP address moves from one node to
591 another, all existing TCP connections to that IP address are
592 reset. The node taking over this IP address will also send
593 gratuitous ARPs (for IPv4, or neighbour advertisement, for
594 IPv6). This allows clients to reconnect quickly, rather than
595 waiting for TCP timeouts, which can be very long.
599 It is important that established TCP connections do not survive
600 a release and take of a public IP address on the same node.
601 Such connections can get out of sync with sequence and ACK
602 numbers, potentially causing a disruptive ACK storm.
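The TCP connections currently being tracked for a public IP
address can be listed with the <command>ctdb gettickles</command>
command (see
<citerefentry><refentrytitle>ctdb</refentrytitle>
<manvolnum>1</manvolnum></citerefentry>), for example:
<screen format="linespecific">
ctdb gettickles 10.1.1.1
</screen>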
608 <title>NAT GATEWAY</title>
611 NAT gateway (NATGW) is an optional feature that is used to
612 configure fallback routing for nodes. This allows cluster nodes
613 to connect to external services (e.g. DNS, AD, NIS and LDAP)
when they do not host any public addresses (e.g. when they are
unhealthy).
618 This also applies to node startup because CTDB marks nodes as
619 UNHEALTHY until they have passed a "monitor" event. In this
620 context, NAT gateway helps to avoid a "chicken and egg"
situation where a node needs to access an external service to
become healthy.
625 Another way of solving this type of problem is to assign an
626 extra static IP address to a public interface on every node.
627 This is simpler but it uses an extra IP address per node, while
628 NAT gateway generally uses only one extra IP address.
632 <title>Operation</title>
635 One extra NATGW public address is assigned on the public
636 network to each NATGW group. Each NATGW group is a set of
637 nodes in the cluster that shares the same NATGW address to
638 talk to the outside world. Normally there would only be one
639 NATGW group spanning an entire cluster, but in situations
640 where one CTDB cluster spans multiple physical sites it might
641 be useful to have one NATGW group for each site.
644 There can be multiple NATGW groups in a cluster but each node
can only be a member of one NATGW group.
648 In each NATGW group, one of the nodes is selected by CTDB to
be the NATGW master and the other nodes are considered to be
650 NATGW slaves. NATGW slaves establish a fallback default route
651 to the NATGW master via the private network. When a NATGW
652 slave hosts no public IP addresses then it will use this route
653 for outbound connections. The NATGW master hosts the NATGW
654 public IP address and routes outgoing connections from
655 slave nodes via this IP address. It also establishes a
656 fallback default route.
661 <title>Configuration</title>
NATGW is usually configured similarly to the following example:
666 <screen format="linespecific">
667 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
668 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
669 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
670 CTDB_NATGW_PUBLIC_IFACE=eth0
671 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
675 Normally any node in a NATGW group can act as the NATGW
676 master. Some configurations may have special nodes that lack
677 connectivity to a public network. In such cases, those nodes
678 can be flagged with the "slave-only" option in the
679 <varname>CTDB_NATGW_NODES</varname> file to limit the NATGW
680 functionality of those nodes.
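The <varname>CTDB_NATGW_NODES</varname> file lists the private
addresses of the nodes in the NATGW group, one per line.  A
hypothetical example matching the configuration above, with the
third node restricted to being a slave:
<screen format="linespecific">
192.168.1.101
192.168.1.102
192.168.1.103 slave-only
</screen>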
684 See the <citetitle>NAT GATEWAY</citetitle> section in
685 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for more details of
NATGW configuration.
693 <title>Implementation details</title>
696 When the NATGW functionality is used, one of the nodes is
697 selected to act as a NAT gateway for all the other nodes in
698 the group when they need to communicate with the external
699 services. The NATGW master is selected to be a node that is
700 most likely to have usable networks.
704 The NATGW master hosts the NATGW public IP address
705 <varname>CTDB_NATGW_PUBLIC_IP</varname> on the configured public
interface <varname>CTDB_NATGW_PUBLIC_IFACE</varname> and acts as
707 a router, masquerading outgoing connections from slave nodes
708 via this IP address. If
709 <varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is set then it
also establishes a fallback default route to the configured
gateway with a metric of 10.  A metric 10 route is used so it
can co-exist with other default routes that may be available.
717 A NATGW slave establishes its fallback default route to the
718 NATGW master via the private network
<varname>CTDB_NATGW_PRIVATE_NETWORK</varname> with a metric of 10.
720 This route is used for outbound connections when no other
721 default route is available because the node hosts no public
addresses.  A metric 10 route is used so that it can co-exist
723 with other default routes that may be available when the node
724 is hosting public addresses.
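On a slave node that is not hosting any public addresses, the
resulting routing table shown by <command>ip route show</command>
might look something like this (illustrative; assuming the NATGW
master's private address is 192.168.1.101 and the private network
is on eth1):
<screen format="linespecific">
default via 192.168.1.101 dev eth1  metric 10
192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.102
</screen>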
728 <varname>CTDB_NATGW_STATIC_ROUTES</varname> can be used to
have NATGW create more specific routes instead of just default
routes.
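A hypothetical example, routing one extra network via the NATGW
(see
<citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for the authoritative
syntax):
<screen format="linespecific">
CTDB_NATGW_STATIC_ROUTES=10.9.0.0/24
</screen>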
734 This is implemented in the <filename>11.natgw</filename>
735 eventscript. Please see the eventscript file and the
736 <citetitle>NAT GATEWAY</citetitle> section in
737 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
738 <manvolnum>5</manvolnum></citerefentry> for more details.
745 <title>POLICY ROUTING</title>
748 Policy routing is an optional CTDB feature to support complex
749 network topologies. Public addresses may be spread across
750 several different networks (or VLANs) and it may not be possible
751 to route packets from these public addresses via the system's
752 default route. Therefore, CTDB has support for policy routing
753 via the <filename>13.per_ip_routing</filename> eventscript.
754 This allows routing to be specified for packets sourced from
755 each public address. The routes are added and removed as CTDB
756 moves public addresses between nodes.
760 <title>Configuration variables</title>
763 There are 4 configuration variables related to policy routing:
764 <varname>CTDB_PER_IP_ROUTING_CONF</varname>,
765 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname>,
766 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_LOW</varname>,
767 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_HIGH</varname>. See the
768 <citetitle>POLICY ROUTING</citetitle> section in
769 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
770 <manvolnum>5</manvolnum></citerefentry> for more details.
775 <title>Configuration</title>
778 The format of each line of
779 <varname>CTDB_PER_IP_ROUTING_CONF</varname> is:
783 <public_address> <network> [ <gateway> ]
787 Leading whitespace is ignored and arbitrary whitespace may be
788 used as a separator. Lines that have a "public address" item
789 that doesn't match an actual public address are ignored. This
790 means that comment lines can be added using a leading
character such as '#', since this will never match an IP address.
796 A line without a gateway indicates a link local route.
800 For example, consider the configuration line:
192.168.1.99 192.168.1.0/24
808 If the corresponding public_addresses line is:
812 192.168.1.99/24 eth2,eth3
816 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname> is 100, and
817 CTDB adds the address to eth2 then the following routing
818 information is added:
822 ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
823 ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99
This causes traffic from 192.168.1.99 to 192.168.1.0/24 to go via
eth2.
The <command>ip rule</command> command will show (something like
the following, depending on other public addresses and other
routes on the system):
838 0: from all lookup local
839 100: from 192.168.1.99 lookup ctdb.192.168.1.99
840 32766: from all lookup main
841 32767: from all lookup default
845 <command>ip route show table ctdb.192.168.1.99</command> will show:
849 192.168.1.0/24 dev eth2 scope link
853 The usual use for a line containing a gateway is to add a
854 default route corresponding to a particular source address.
855 Consider this line of configuration:
859 192.168.1.99 0.0.0.0/0 192.168.1.1
863 In the situation described above this will cause an extra
864 routing command to be executed:
868 ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99
872 With both configuration lines, <command>ip route show table
873 ctdb.192.168.1.99</command> will show:
877 192.168.1.0/24 dev eth2 scope link
878 default via 192.168.1.1 dev eth2
883 <title>Sample configuration</title>
886 Here is a more complete example configuration.
890 /usr/local/etc/ctdb/public_addresses:
892 192.168.1.98 eth2,eth3
893 192.168.1.99 eth2,eth3
895 /usr/local/etc/ctdb/policy_routing:
897 192.168.1.98 192.168.1.0/24
898 192.168.1.98 192.168.200.0/24 192.168.1.254
899 192.168.1.98 0.0.0.0/0 192.168.1.1
900 192.168.1.99 192.168.1.0/24
901 192.168.1.99 192.168.200.0/24 192.168.1.254
902 192.168.1.99 0.0.0.0/0 192.168.1.1
This routes local packets as expected; the default route is as
previously discussed, but packets to 192.168.200.0/24 are
routed via the alternate gateway 192.168.1.254.
915 <title>NOTIFICATION SCRIPT</title>
918 When certain state changes occur in CTDB, it can be configured
919 to perform arbitrary actions via a notification script. For
example, it can send SNMP traps or emails when a node becomes
unhealthy.
924 This is activated by setting the
925 <varname>CTDB_NOTIFY_SCRIPT</varname> configuration variable.
926 The specified script must be executable.
929 Use of the provided <filename>/usr/local/etc/ctdb/notify.sh</filename>
930 script is recommended. It executes files in
931 <filename>/usr/local/etc/ctdb/notify.d/</filename>.
CTDB currently generates notifications after CTDB changes to
these states:
939 <member>init</member>
940 <member>setup</member>
941 <member>startup</member>
942 <member>healthy</member>
943 <member>unhealthy</member>
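For example, a hypothetical hook placed in
<filename>/usr/local/etc/ctdb/notify.d/</filename> might send an
email when the node becomes unhealthy.  This is only a sketch; it
assumes that the event name is passed as the first argument and
that a working <command>mail</command> command is available:
<screen format="linespecific">
#!/bin/sh
# /usr/local/etc/ctdb/notify.d/90-mail-admin (illustrative)
# notify.sh runs each executable file here with the event name as $1.
event="$1"
case "$event" in
unhealthy|healthy)
    echo "CTDB node $(hostname) is now ${event}" | \
        mail -s "CTDB: $(hostname) ${event}" admin@example.com
    ;;
esac
exit 0
</screen>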
949 <title>DEBUG LEVELS</title>
952 Valid values for DEBUGLEVEL are:
956 <member>ERR (0)</member>
957 <member>WARNING (1)</member>
958 <member>NOTICE (2)</member>
959 <member>INFO (3)</member>
960 <member>DEBUG (4)</member>
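The debug level of a running node can be inspected and changed at
runtime with the <command>ctdb getdebug</command> and
<command>ctdb setdebug</command> commands, for example:
<screen format="linespecific">
ctdb getdebug
ctdb setdebug NOTICE
</screen>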
966 <title>REMOTE CLUSTER NODES</title>
It is possible to have a CTDB cluster that spans a WAN link.
For example, you might have a CTDB cluster in your datacentre but
also want one additional CTDB node located at a remote branch site.
971 This is similar to how a WAN accelerator works but with the difference
972 that while a WAN-accelerator often acts as a Proxy or a MitM, in
973 the ctdb remote cluster node configuration the Samba instance at the remote site
974 IS the genuine server, not a proxy and not a MitM, and thus provides 100%
975 correct CIFS semantics to clients.
Think of the cluster as a single multihomed Samba server where one
of the NICs (the remote node) is very far away.
984 NOTE: This does require that the cluster filesystem you use can cope
985 with WAN-link latencies. Not all cluster filesystems can handle
WAN-link latencies!  Whether this will provide very good
WAN-accelerator performance or perform very poorly depends
entirely on how well your cluster filesystem handles high latency
for data and metadata operations.
To configure a node as a remote cluster node, set the following
two parameters in /etc/sysconfig/ctdb on the remote node:
995 <screen format="linespecific">
996 CTDB_CAPABILITY_LMASTER=no
997 CTDB_CAPABILITY_RECMASTER=no
Verify with the command "ctdb getcapabilities" that the node no longer
has the recmaster or the lmaster capabilities.
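On such a node the relevant part of the output might look
something like this (illustrative; the exact format depends on
the CTDB version):
<screen format="linespecific">
RECMASTER: NO
LMASTER: NO
</screen>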
1010 <title>SEE ALSO</title>
1013 <citerefentry><refentrytitle>ctdb</refentrytitle>
1014 <manvolnum>1</manvolnum></citerefentry>,
1016 <citerefentry><refentrytitle>ctdbd</refentrytitle>
1017 <manvolnum>1</manvolnum></citerefentry>,
1019 <citerefentry><refentrytitle>ctdbd_wrapper</refentrytitle>
1020 <manvolnum>1</manvolnum></citerefentry>,
1022 <citerefentry><refentrytitle>ltdbtool</refentrytitle>
1023 <manvolnum>1</manvolnum></citerefentry>,
1025 <citerefentry><refentrytitle>onnode</refentrytitle>
1026 <manvolnum>1</manvolnum></citerefentry>,
1028 <citerefentry><refentrytitle>ping_pong</refentrytitle>
1029 <manvolnum>1</manvolnum></citerefentry>,
1031 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
1032 <manvolnum>5</manvolnum></citerefentry>,
1034 <citerefentry><refentrytitle>ctdb-statistics</refentrytitle>
1035 <manvolnum>7</manvolnum></citerefentry>,
1037 <citerefentry><refentrytitle>ctdb-tunables</refentrytitle>
1038 <manvolnum>7</manvolnum></citerefentry>,
1040 <ulink url="http://ctdb.samba.org/"/>
1047 This documentation was written by
1056 <holder>Andrew Tridgell</holder>
1057 <holder>Ronnie Sahlberg</holder>
1061 This program is free software; you can redistribute it and/or
1062 modify it under the terms of the GNU General Public License as
1063 published by the Free Software Foundation; either version 3 of
1064 the License, or (at your option) any later version.
1067 This program is distributed in the hope that it will be
1068 useful, but WITHOUT ANY WARRANTY; without even the implied
1069 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
1070 PURPOSE. See the GNU General Public License for more details.
1073 You should have received a copy of the GNU General Public
1074 License along with this program; if not, see
1075 <ulink url="http://www.gnu.org/licenses"/>.