<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdb</title><meta name="generator" content="DocBook XSL Stylesheets V1.79.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry"><a name="ctdb.7"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdb — Clustered TDB</p></div><div class="refsect1"><a name="idm10"></a><h2>DESCRIPTION</h2><p>
CTDB is a clustered database component in clustered Samba that
provides a high-availability load-sharing CIFS server cluster.
</p><p>
The main functions of CTDB are:
</p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>
Provide a clustered version of the TDB database with automatic
rebuild/recovery of the databases upon node failures.
</p></li><li class="listitem"><p>
Monitor nodes in the cluster and services running on each node.
</p></li><li class="listitem"><p>
Manage a pool of public IP addresses that are used to provide
services to clients. Alternatively, CTDB can be used with LVS.
</p></li></ul></div><p>
Combined with a cluster filesystem CTDB provides a full
high-availability (HA) environment for services such as clustered
Samba and NFS.
</p></div><div class="refsect1"><a name="idm22"></a><h2>ANATOMY OF A CTDB CLUSTER</h2><p>
A CTDB cluster is a collection of nodes with 2 or more network
interfaces. All nodes provide network (usually file/NAS) services
to clients. Data served by file services is stored on shared
storage (usually a cluster filesystem) that is accessible by all
nodes.
</p><p>
CTDB provides an "all active" cluster, where services are load
balanced across all nodes.
</p></div><div class="refsect1"><a name="idm26"></a><h2>Recovery Lock</h2><p>
CTDB uses a <span class="emphasis"><em>recovery lock</em></span> to avoid a
<span class="emphasis"><em>split brain</em></span>, where a cluster becomes
partitioned and each partition attempts to operate
independently. Issues that can result from a split brain
include file data corruption, because file locking metadata may
not be tracked correctly.
</p><p>
CTDB uses a <span class="emphasis"><em>cluster leader and follower</em></span>
model of cluster management. All nodes in a cluster elect one
node to be the leader. The leader node coordinates privileged
operations such as database recovery and IP address failover.
CTDB refers to the leader node as the <span class="emphasis"><em>recovery
master</em></span>. This node takes and holds the recovery lock
to assert its privileged role in the cluster.
</p><p>
By default, the recovery lock is implemented using a file
(specified by <em class="parameter"><code>CTDB_RECOVERY_LOCK</code></em>)
residing in shared storage, usually on a cluster filesystem.
To support a recovery lock the cluster filesystem must support
lock coherence. See
<span class="citerefentry"><span class="refentrytitle">ping_pong</span>(1)</span> for more details.
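</p><p>
For example, a typical configuration points the recovery lock at a
file in the cluster filesystem (the path here is illustrative):
</p><pre class="screen">
CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/recovery.lock
</pre><p>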
The recovery lock can also be implemented using an arbitrary
cluster mutex call-out by using an exclamation point ('!') as
the first character of
<em class="parameter"><code>CTDB_RECOVERY_LOCK</code></em>. For example, a value
of <span class="command"><strong>!/usr/local/bin/myhelper recovery</strong></span> would
run the given helper with the specified arguments. See the
source code relating to cluster mutexes for clues about writing
call-outs.
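</p><p>
As a rough, hypothetical sketch only: assuming the call-out signals
that the mutex is held by writing "0" to stdout ("1" on contention)
and holds the lock until terminated, a helper built on
<span class="command"><strong>flock</strong></span>(1) might look like
the following. Check the cluster mutex code in the CTDB source tree
for the actual protocol before relying on this.
</p><pre class="screen">
#!/bin/sh
# myhelper: illustrative recovery lock call-out sketch.
# The lock file path is hypothetical.
LOCKFILE="/clusterfs/.ctdb/recovery.lock"
if flock --nonblock 9; then
    printf '0'                       # assumed "lock held" status
    while :; do sleep 3600; done     # hold until CTDB terminates us
else
    printf '1'                       # assumed "lock contended" status
fi 9>"$LOCKFILE"
</pre><p>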
If a cluster becomes partitioned (for example, due to a
communication failure) and a different recovery master is
elected by the nodes in each partition, then only one of these
recovery masters will be able to take the recovery lock. The
recovery master in the "losing" partition will not be able to
take the recovery lock and will be excluded from the cluster.
The nodes in the "losing" partition will elect each node in turn
as their recovery master, so eventually all the nodes in that
partition will be excluded.
</p><p>
CTDB does sanity checks to ensure that the recovery lock is held
as expected.
</p><p>
CTDB can run without a recovery lock but this is not recommended
as there will be no protection from split brains.
</p></div><div class="refsect1"><a name="idm45"></a><h2>Private vs Public addresses</h2><p>
Each node in a CTDB cluster has multiple IP addresses assigned
to it:
</p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>
A single private IP address that is used for communication
between nodes.
</p></li><li class="listitem"><p>
One or more public IP addresses that are used to provide
NAS or other services.
</p></li></ul></div><p>
</p><div class="refsect2"><a name="idm53"></a><h3>Private address</h3><p>
Each node is configured with a unique, permanently assigned
private address. This address is configured by the operating
system. This address uniquely identifies a physical node in
the cluster and is the address that CTDB daemons will use to
communicate with the CTDB daemons on other nodes.
</p><p>
Private addresses are listed in the file specified by the
<code class="varname">CTDB_NODES</code> configuration variable (see
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span>, default
<code class="filename">/usr/local/etc/ctdb/nodes</code>). This file contains the
list of private addresses for all nodes in the cluster, one
per line. This file must be the same on all nodes in the
cluster.
</p><p>
Private addresses should not be used by clients to connect to
services provided by the cluster.
</p><p>
It is strongly recommended that the private addresses are
configured on a private network that is separate from client
networks. This is because the CTDB protocol is both
unauthenticated and unencrypted. If clients share the private
network then steps need to be taken to stop injection of
packets to relevant ports on the private addresses. It is
also likely that CTDB protocol traffic between nodes could
leak sensitive information if it can be intercepted.
</p><p>
Example <code class="filename">/usr/local/etc/ctdb/nodes</code> for a four node
cluster:
</p><pre class="screen">
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4
</pre></div><div class="refsect2"><a name="idm67"></a><h3>Public addresses</h3><p>
Public addresses are used to provide services to clients.
Public addresses are not configured at the operating system
level and are not permanently associated with a particular
node. Instead, they are managed by CTDB and are assigned to
interfaces on physical nodes at runtime.
</p><p>
The CTDB cluster will assign/reassign these public addresses
across the available healthy nodes in the cluster. When one
node fails, its public addresses will be taken over by one or
more other nodes in the cluster. This ensures that services
provided by all public addresses are always available to
clients, as long as there are nodes available that are capable
of hosting these addresses.
</p><p>
The public address configuration is stored in a file on each
node specified by the <code class="varname">CTDB_PUBLIC_ADDRESSES</code>
configuration variable (see
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span>, recommended
<code class="filename">/usr/local/etc/ctdb/public_addresses</code>). This file
contains a list of the public addresses that the node is
capable of hosting, one per line. Each entry also contains
the netmask and the interface to which the address should be
assigned.
</p><p>
Example <code class="filename">/usr/local/etc/ctdb/public_addresses</code> for a
node that can host 4 public addresses, on 2 different
interfaces:
</p><pre class="screen">
10.1.1.1/24 eth1
10.1.1.2/24 eth1
10.1.2.1/24 eth2
10.1.2.2/24 eth2
</pre><p>
In many cases the public addresses file will be the same on
all nodes. However, it is possible to use different public
address configurations on different nodes.
</p><p>
Example: 4 nodes partitioned into two subgroups:
</p><pre class="screen">
Node 0:/usr/local/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 1:/usr/local/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 2:/usr/local/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2

Node 3:/usr/local/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2
</pre><p>
In this example nodes 0 and 1 host two public addresses on the
10.1.1.x network while nodes 2 and 3 host two public addresses
for the 10.1.2.x network.
</p><p>
Public address 10.1.1.1 can be hosted by either of nodes 0 or
1 and will be available to clients as long as at least one of
these two nodes is available.
</p><p>
If both nodes 0 and 1 become unavailable then public address
10.1.1.1 also becomes unavailable. 10.1.1.1 cannot be failed
over to nodes 2 or 3 since these nodes do not have this public
address configured.
</p><p>
The <span class="command"><strong>ctdb ip</strong></span> command can be used to view the
current assignment of public addresses to physical nodes.
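</p><p>
For the example above, on a healthy cluster the output might look
something like this (the exact format varies between CTDB
versions):
</p><pre class="screen">
Public IPs on node 0
10.1.1.1 0
10.1.1.2 1
10.1.2.1 2
10.1.2.2 3
</pre><p>
Each line shows a public address and the node currently hosting it.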
</p></div></div><div class="refsect1"><a name="idm88"></a><h2>Node status</h2><p>
The current status of each node in the cluster can be viewed by the
<span class="command"><strong>ctdb status</strong></span> command.
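</p><p>
For example, on a healthy four node cluster the node list in the
output might look something like this (fields vary between CTDB
versions):
</p><pre class="screen">
Number of nodes:4
pnn:0 192.168.1.1     OK (THIS NODE)
pnn:1 192.168.1.2     OK
pnn:2 192.168.1.3     OK
pnn:3 192.168.1.4     OK
</pre><p>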
A node can be in one of the following states:
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term">OK</span></dt><dd><p>
This node is healthy and fully functional. It hosts public
addresses to provide services.
</p></dd><dt><span class="term">DISCONNECTED</span></dt><dd><p>
This node is not reachable by other nodes via the private
network. It is not currently participating in the cluster.
It <span class="emphasis"><em>does not</em></span> host public addresses to
provide services. It might be shut down.
</p></dd><dt><span class="term">DISABLED</span></dt><dd><p>
This node has been administratively disabled. This node is
partially functional and participates in the cluster.
However, it <span class="emphasis"><em>does not</em></span> host public
addresses to provide services.
</p></dd><dt><span class="term">UNHEALTHY</span></dt><dd><p>
A service provided by this node has failed a health check
and should be investigated. This node is partially
functional and participates in the cluster. However, it
<span class="emphasis"><em>does not</em></span> host public addresses to
provide services. Unhealthy nodes should be investigated
and may require an administrative action to rectify.
</p></dd><dt><span class="term">BANNED</span></dt><dd><p>
CTDB is not behaving as designed on this node. For example,
it may have failed too many recovery attempts. Such nodes
are banned from participating in the cluster for a
configurable time period before they attempt to rejoin the
cluster. A banned node <span class="emphasis"><em>does not</em></span> host
public addresses to provide services. All banned nodes
should be investigated and may require an administrative
action to rectify.
</p></dd><dt><span class="term">STOPPED</span></dt><dd><p>
This node has been administratively excluded from the
cluster. A stopped node does not participate in the cluster
and <span class="emphasis"><em>does not</em></span> host public addresses to
provide services. This state can be used while performing
maintenance on a node.
</p></dd><dt><span class="term">PARTIALLYONLINE</span></dt><dd><p>
A node that is partially online participates in a cluster
like a healthy (OK) node. Some interfaces to serve public
addresses are down, but at least one interface is up. See
also <span class="command"><strong>ctdb ifaces</strong></span>.
</p></dd></dl></div></div><div class="refsect1"><a name="idm128"></a><h2>CAPABILITIES</h2><p>
Cluster nodes can have several different capabilities enabled.
These are listed below.
</p><div class="variablelist"><dl class="variablelist"><dt><span class="term">RECMASTER</span></dt><dd><p>
Indicates that a node can become the CTDB cluster recovery
master. The current recovery master is decided via an
election held by all active nodes with this capability.
Default is YES.
</p></dd><dt><span class="term">LMASTER</span></dt><dd><p>
Indicates that a node can be the location master (LMASTER)
for database records. The LMASTER always knows which node
has the latest copy of a record in a volatile database.
Default is YES.
</p></dd></dl></div><p>
The RECMASTER and LMASTER capabilities can be disabled when CTDB
is used to create a cluster spanning across WAN links. In this
case CTDB acts as a WAN accelerator.
</p></div><div class="refsect1"><a name="idm143"></a><h2>LVS</h2><p>
LVS is a mode where CTDB presents one single IP address for the
entire cluster. This is an alternative to using public IP
addresses and round-robin DNS to load balance clients across the
cluster.
</p><p>
This is similar to using a layer-4 load balancing switch but with
some limitations.
</p><p>
One extra LVS public address is assigned on the public network
to each LVS group. Each LVS group is a set of nodes in the
cluster that presents the same LVS public address to the
outside world. Normally there would only be one LVS group
spanning an entire cluster, but in situations where one CTDB
cluster spans multiple physical sites it might be useful to have
one LVS group for each site. There can be multiple LVS groups
in a cluster but each node can only be a member of one LVS group.
</p><p>
Client access to the cluster is load-balanced across the HEALTHY
nodes in an LVS group. If no HEALTHY nodes exist then all
nodes in the group are used, regardless of health status. CTDB
will, however, never load-balance LVS traffic to nodes that are
BANNED, STOPPED, DISABLED or DISCONNECTED. The <span class="command"><strong>ctdb
lvs</strong></span> command is used to show which nodes are currently
load-balanced across.
</p><p>
In each LVS group, one of the nodes is selected by CTDB to be
the LVS master. This node receives all traffic from clients
coming in to the LVS public address and multiplexes it across
the internal network to one of the nodes that LVS is using.
When responding to the client, that node will send the data back
directly to the client, bypassing the LVS master node. The
command <span class="command"><strong>ctdb lvs master</strong></span> will show which node
is the current LVS master.
</p><p>
The path used for a client I/O is:
</p><div class="orderedlist"><ol class="orderedlist" type="1"><li class="listitem"><p>
Client sends request packet to LVSMASTER.
</p></li><li class="listitem"><p>
LVSMASTER passes the request on to one node across the
internal network.
</p></li><li class="listitem"><p>
Selected node processes the request.
</p></li><li class="listitem"><p>
Node responds back to client.
</p></li></ol></div><p>
This means that all incoming traffic to the cluster will pass
through one physical node, which limits scalability. You cannot
send more data to the LVS address than one physical node can
multiplex. This means that you should not use LVS if your I/O
pattern is write-intensive, since you will be limited in the
available network bandwidth that node can handle. LVS does work
very well for read-intensive workloads where only smallish READ
requests are going through the LVSMASTER bottleneck and the
majority of the traffic volume (the data in the read replies)
goes straight from the processing node back to the clients. For
read-intensive I/O patterns you can achieve very high throughput
rates in this mode.
</p><p>
Note: you can use LVS and public addresses at the same time.
</p><p>
If you use LVS, you must have a permanent address configured for
the public interface on each node. This address must be routable
and the cluster nodes must be configured so that all traffic
back to client hosts is routed through this interface. This is
also required in order to allow samba/winbind on the node to
talk to the domain controller. This LVS IP address can not be
used to initiate outgoing traffic.
</p><p>
Make sure that the domain controller and the clients are
reachable from a node <span class="emphasis"><em>before</em></span> you enable
LVS. Also ensure that outgoing traffic to these hosts is routed
out through the configured public interface.
</p><div class="refsect2"><a name="idm167"></a><h3>Configuration</h3><p>
To activate LVS on a CTDB node you must specify the
<code class="varname">CTDB_LVS_PUBLIC_IFACE</code>,
<code class="varname">CTDB_LVS_PUBLIC_IP</code> and
<code class="varname">CTDB_LVS_NODES</code> configuration variables.
<code class="varname">CTDB_LVS_NODES</code> specifies a file containing
the private address of all nodes in the current node's LVS
group.
</p><p>
Example:
</p><pre class="screen">
CTDB_LVS_PUBLIC_IFACE=eth1
CTDB_LVS_PUBLIC_IP=10.1.1.237
CTDB_LVS_NODES=/usr/local/etc/ctdb/lvs_nodes
</pre><p>
Example <code class="filename">/usr/local/etc/ctdb/lvs_nodes</code>:
</p><pre class="screen">
192.168.1.2
192.168.1.3
192.168.1.4
</pre><p>
Normally any node in an LVS group can act as the LVS master.
Nodes that are highly loaded due to other demands may be
flagged with the "slave-only" option in the
<code class="varname">CTDB_LVS_NODES</code> file to limit the LVS
functionality of those nodes.
</p><p>
An LVS nodes file that excludes 192.168.1.4 from being
the LVS master node:
</p><pre class="screen">
192.168.1.2
192.168.1.3
192.168.1.4 slave-only
</pre></div></div><div class="refsect1"><a name="idm183"></a><h2>TRACKING AND RESETTING TCP CONNECTIONS</h2><p>
CTDB tracks TCP connections from clients to public IP addresses,
on known ports. When an IP address moves from one node to
another, all existing TCP connections to that IP address are
reset. The node taking over this IP address will also send
gratuitous ARPs (for IPv4, or neighbour advertisements, for
IPv6). This allows clients to reconnect quickly, rather than
waiting for TCP timeouts, which can be very long.
</p><p>
It is important that established TCP connections do not survive
a release and take of a public IP address on the same node.
Such connections can get out of sync with sequence and ACK
numbers, potentially causing a disruptive ACK storm.
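</p><p>
The connections tracked for a particular public IP address can be
listed with the <span class="command"><strong>ctdb gettickles</strong></span>
command, for example:
</p><pre class="screen">
ctdb gettickles 10.1.1.1
</pre><p>
This is mainly useful when debugging failover behaviour.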
</p></div><div class="refsect1"><a name="idm187"></a><h2>NAT GATEWAY</h2><p>
NAT gateway (NATGW) is an optional feature that is used to
configure fallback routing for nodes. This allows cluster nodes
to connect to external services (e.g. DNS, AD, NIS and LDAP)
when they do not host any public addresses (e.g. when they are
unhealthy).
</p><p>
This also applies to node startup because CTDB marks nodes as
UNHEALTHY until they have passed a "monitor" event. In this
context, NAT gateway helps to avoid a "chicken and egg"
situation where a node needs to access an external service to
become healthy.
</p><p>
Another way of solving this type of problem is to assign an
extra static IP address to a public interface on every node.
This is simpler but it uses an extra IP address per node, while
NAT gateway generally uses only one extra IP address.
</p><div class="refsect2"><a name="idm192"></a><h3>Operation</h3><p>
One extra NATGW public address is assigned on the public
network to each NATGW group. Each NATGW group is a set of
nodes in the cluster that shares the same NATGW address to
talk to the outside world. Normally there would only be one
NATGW group spanning an entire cluster, but in situations
where one CTDB cluster spans multiple physical sites it might
be useful to have one NATGW group for each site.
</p><p>
There can be multiple NATGW groups in a cluster but each node
can only be a member of one NATGW group.
</p><p>
In each NATGW group, one of the nodes is selected by CTDB to
be the NATGW master and the other nodes are considered to be
NATGW slaves. NATGW slaves establish a fallback default route
to the NATGW master via the private network. When a NATGW
slave hosts no public IP addresses then it will use this route
for outbound connections. The NATGW master hosts the NATGW
public IP address and routes outgoing connections from
slave nodes via this IP address. It also establishes a
fallback default route.
</p></div><div class="refsect2"><a name="idm197"></a><h3>Configuration</h3><p>
NATGW is usually configured similarly to the following example:
</p><pre class="screen">
CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
CTDB_NATGW_PUBLIC_IFACE=eth0
CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
</pre><p>
Normally any node in a NATGW group can act as the NATGW
master. Some configurations may have special nodes that lack
connectivity to a public network. In such cases, those nodes
can be flagged with the "slave-only" option in the
<code class="varname">CTDB_NATGW_NODES</code> file to limit the NATGW
functionality of those nodes.
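</p><p>
The <code class="varname">CTDB_NATGW_NODES</code> file lists the
private addresses of the nodes in the NATGW group, one per line.
A hypothetical example, reusing the private addresses from the
earlier examples and flagging a node that lacks public network
connectivity:
</p><pre class="screen">
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4 slave-only
</pre><p>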
See the <em class="citetitle">NAT GATEWAY</em> section in
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span> for more details of
NATGW configuration.
</p></div><div class="refsect2"><a name="idm208"></a><h3>Implementation details</h3><p>
When the NATGW functionality is used, one of the nodes is
selected to act as a NAT gateway for all the other nodes in
the group when they need to communicate with the external
services. The NATGW master is selected to be a node that is
most likely to have usable networks.
</p><p>
The NATGW master hosts the NATGW public IP address
<code class="varname">CTDB_NATGW_PUBLIC_IP</code> on the configured public
interface <code class="varname">CTDB_NATGW_PUBLIC_IFACE</code> and acts as
a router, masquerading outgoing connections from slave nodes
via this IP address. If
<code class="varname">CTDB_NATGW_DEFAULT_GATEWAY</code> is set then it
also establishes a fallback default route to the configured
gateway with a metric of 10. A metric 10 route is used
so it can co-exist with other default routes that may be
available.
</p><p>
A NATGW slave establishes its fallback default route to the
NATGW master via the private network
<code class="varname">CTDB_NATGW_PRIVATE_NETWORK</code> with a metric of 10.
This route is used for outbound connections when no other
default route is available because the node hosts no public
addresses. A metric 10 route is used so that it can co-exist
with other default routes that may be available when the node
is hosting public addresses.
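</p><p>
Expressed as plain <span class="command"><strong>ip route</strong></span>
commands, these fallback routes are roughly equivalent to the
following (illustrative only, using the example configuration above
and assuming the NATGW master's private address is 192.168.1.1):
</p><pre class="screen">
# On the NATGW master:
ip route add default via 10.0.0.1 metric 10

# On a NATGW slave:
ip route add default via 192.168.1.1 metric 10
</pre><p>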
<code class="varname">CTDB_NATGW_STATIC_ROUTES</code> can be used to
have NATGW create more specific routes instead of just default
routes.
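</p><p>
A hypothetical example (see
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span>
for the exact syntax), routing one network via the default NATGW
gateway and another via an explicit gateway:
</p><pre class="screen">
CTDB_NATGW_STATIC_ROUTES=10.1.1.0/24 10.1.2.0/24@10.1.2.1
</pre><p>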
This is implemented in the <code class="filename">11.natgw</code>
eventscript. Please see the eventscript file and the
<em class="citetitle">NAT GATEWAY</em> section in
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span> for more details.
</p></div></div><div class="refsect1"><a name="idm225"></a><h2>POLICY ROUTING</h2><p>
Policy routing is an optional CTDB feature to support complex
network topologies. Public addresses may be spread across
several different networks (or VLANs) and it may not be possible
to route packets from these public addresses via the system's
default route. Therefore, CTDB has support for policy routing
via the <code class="filename">13.per_ip_routing</code> eventscript.
This allows routing to be specified for packets sourced from
each public address. The routes are added and removed as CTDB
moves public addresses between nodes.
</p><div class="refsect2"><a name="idm229"></a><h3>Configuration variables</h3><p>
There are 4 configuration variables related to policy routing:
<code class="varname">CTDB_PER_IP_ROUTING_CONF</code>,
<code class="varname">CTDB_PER_IP_ROUTING_RULE_PREF</code>,
<code class="varname">CTDB_PER_IP_ROUTING_TABLE_ID_LOW</code> and
<code class="varname">CTDB_PER_IP_ROUTING_TABLE_ID_HIGH</code>. See the
<em class="citetitle">POLICY ROUTING</em> section in
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span> for more details.
</p></div><div class="refsect2"><a name="idm240"></a><h3>Configuration</h3><p>
The format of each line of
<code class="varname">CTDB_PER_IP_ROUTING_CONF</code> is:
</p><pre class="screen">
&lt;public_address&gt; &lt;network&gt; [ &lt;gateway&gt; ]
</pre><p>
Leading whitespace is ignored and arbitrary whitespace may be
used as a separator. Lines that have a "public address" item
that doesn't match an actual public address are ignored. This
means that comment lines can be added using a leading
character such as '#', since this will never match an IP
address.
</p><p>
A line without a gateway indicates a link local route.
</p><p>
For example, consider the configuration line:
</p><pre class="screen">
192.168.1.99 192.168.1.1/24
</pre><p>
If the corresponding public_addresses line is:
</p><pre class="screen">
192.168.1.99/24 eth2,eth3
</pre><p>
<code class="varname">CTDB_PER_IP_ROUTING_RULE_PREF</code> is 100, and
CTDB adds the address to eth2, then the following routing
information is added:
</p><pre class="screen">
ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99
</pre><p>
This causes traffic from 192.168.1.99 to 192.168.1.0/24 to go via
eth2.
</p><p>
The <span class="command"><strong>ip rule</strong></span> command will show (something
like - depending on other public addresses and other routes on
the system):
</p><pre class="screen">
0: from all lookup local
100: from 192.168.1.99 lookup ctdb.192.168.1.99
32766: from all lookup main
32767: from all lookup default
</pre><p>
<span class="command"><strong>ip route show table ctdb.192.168.1.99</strong></span> will show:
</p><pre class="screen">
192.168.1.0/24 dev eth2 scope link
</pre><p>
The usual use for a line containing a gateway is to add a
default route corresponding to a particular source address.
Consider this line of configuration:
</p><pre class="screen">
192.168.1.99 0.0.0.0/0 192.168.1.1
</pre><p>
In the situation described above this will cause an extra
routing command to be executed:
</p><pre class="screen">
ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99
</pre><p>
With both configuration lines, <span class="command"><strong>ip route show table
ctdb.192.168.1.99</strong></span> will show:
</p><pre class="screen">
192.168.1.0/24 dev eth2 scope link
default via 192.168.1.1 dev eth2
</pre></div><div class="refsect2"><a name="idm268"></a><h3>Sample configuration</h3><p>
Here is a more complete example configuration.
</p><pre class="screen">
/usr/local/etc/ctdb/public_addresses:

192.168.1.98 eth2,eth3
192.168.1.99 eth2,eth3

/usr/local/etc/ctdb/policy_routing:

192.168.1.98 192.168.1.0/24
192.168.1.98 192.168.200.0/24 192.168.1.254
192.168.1.98 0.0.0.0/0 192.168.1.1
192.168.1.99 192.168.1.0/24
192.168.1.99 192.168.200.0/24 192.168.1.254
192.168.1.99 0.0.0.0/0 192.168.1.1
</pre><p>
This routes local packets as expected, the default route is as
previously discussed, and packets to 192.168.200.0/24 are
routed via the alternate gateway 192.168.1.254.
</p></div></div><div class="refsect1"><a name="idm273"></a><h2>NOTIFICATION SCRIPT</h2><p>
When certain state changes occur in CTDB, it can be configured
to perform arbitrary actions via a notification script. For
example, the script could send an SNMP trap or an email when a
node becomes unhealthy.
</p><p>
This is activated by setting the
<code class="varname">CTDB_NOTIFY_SCRIPT</code> configuration variable.
The specified script must be executable.
</p><p>
Use of the provided <code class="filename">/usr/local/etc/ctdb/notify.sh</code>
script is recommended. It executes files in
<code class="filename">/usr/local/etc/ctdb/notify.d/</code>.
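</p><p>
As a hypothetical sketch, a script dropped into
<code class="filename">notify.d/</code> might send an email on the
"unhealthy" event, assuming the event name is passed as the first
argument:
</p><pre class="screen">
#!/bin/sh
# Illustrative notify.d script; the recipient address is made up.
event="$1"
case "$event" in
unhealthy)
    echo "CTDB node $(hostname) became unhealthy" | \
        mail -s "CTDB alert" admin@example.com
    ;;
esac
</pre><p>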
CTDB currently generates notifications after CTDB changes to
these states:
</p><table border="0" summary="Simple list" class="simplelist"><tr><td>init</td></tr><tr><td>setup</td></tr><tr><td>startup</td></tr><tr><td>healthy</td></tr><tr><td>unhealthy</td></tr></table></div><div class="refsect1"><a name="idm288"></a><h2>DEBUG LEVELS</h2><p>
Valid values for DEBUGLEVEL are:
</p><table border="0" summary="Simple list" class="simplelist"><tr><td>ERR</td></tr><tr><td>WARNING</td></tr><tr><td>NOTICE</td></tr><tr><td>INFO</td></tr><tr><td>DEBUG</td></tr></table></div><div class="refsect1"><a name="idm297"></a><h2>REMOTE CLUSTER NODES</h2><p>
It is possible to have a CTDB cluster that spans across a WAN link.
For example, where you have a CTDB cluster in your datacentre but you also
want to have one additional CTDB node located at a remote branch site.
This is similar to how a WAN accelerator works, but with the difference
that while a WAN accelerator often acts as a proxy or a MitM, in
the CTDB remote cluster node configuration the Samba instance at the remote site
IS the genuine server, not a proxy and not a MitM, and thus provides 100%
correct CIFS semantics to clients.
</p><p>
Think of the cluster as one single multihomed Samba server where one of
the NICs (the remote node) is very far away.
</p><p>
NOTE: This does require that the cluster filesystem you use can cope
with WAN-link latencies. Not all cluster filesystems can handle
WAN-link latencies! Whether this will provide very good WAN-accelerator
performance or will perform very poorly depends entirely
on how well your cluster filesystem handles high latency
for data and metadata operations.
</p><p>
To activate a node as a remote cluster node you need to set
the following two parameters in /etc/sysconfig/ctdb for the remote node:
</p><pre class="screen">
CTDB_CAPABILITY_LMASTER=no
CTDB_CAPABILITY_RECMASTER=no
</pre><p>
Verify with the command <span class="command"><strong>ctdb getcapabilities</strong></span> that the node no longer
has the recmaster or the lmaster capabilities.
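</p><p>
On such a node, the output might look something like this (the
format varies between CTDB versions):
</p><pre class="screen">
RECMASTER: NO
LMASTER: NO
</pre><p>
Nodes that retain these capabilities will report YES instead.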
</p></div><div class="refsect1"><a name="idm305"></a><h2>SEE ALSO</h2><p>
<span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdbd</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdbd_wrapper</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdb_diagnostics</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ltdbtool</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">onnode</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ping_pong</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdb-statistics</span>(7)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdb-tunables</span>(7)</span>,
<a class="ulink" href="http://ctdb.samba.org/" target="_top">http://ctdb.samba.org/</a>
</p></div></div></body></html>