# $OpenLDAP$
# Copyright 1999-2011 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Replication

Replicated directories are a fundamental requirement for delivering a resilient enterprise deployment.

{{PRD:OpenLDAP}} has various configuration options for creating a replicated directory. In previous releases, replication was discussed in terms of a {{master}} server and some number of {{slave}} servers. A master accepted directory updates from other clients, and a slave only accepted updates from a (single) master. The replication structure was rigidly defined and any particular database could only fulfill a single role, either master or slave.

As OpenLDAP now supports a wide variety of replication topologies, these terms have been deprecated in favor of {{provider}} and {{consumer}}: a provider replicates directory updates to consumers; consumers receive replication updates from providers. Unlike the rigidly defined master/slave relationships, provider/consumer roles are quite fluid: replication updates received by a consumer can be further propagated by that consumer to other servers, so a consumer can also act simultaneously as a provider. Also, a consumer need not be an actual LDAP server; it may be just an LDAP client.

The following sections describe the replication technology and discuss the various replication options that are available.

H2: Replication Technology

H3: LDAP Sync Replication

The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for short, is a consumer-side replication engine that enables the consumer {{TERM:LDAP}} server to maintain a shadow copy of a {{TERM:DIT}} fragment. A syncrepl engine resides at the consumer and executes as one of the {{slapd}}(8) threads. It creates and maintains a consumer replica by connecting to the replication provider to perform the initial DIT content load, followed either by periodic content polling or by timely updates upon content changes.

Syncrepl uses the LDAP Content Synchronization protocol (or LDAP Sync for short) as the replica synchronization protocol. LDAP Sync provides stateful replication which supports both pull-based and push-based synchronization and does not mandate the use of a history store. In pull-based replication the consumer periodically polls the provider for updates. In push-based replication the consumer listens for updates that are sent by the provider in real time. Since the protocol does not require a history store, the provider does not need to maintain any log of the updates it has received. (Note that the syncrepl engine is extensible and additional replication protocols may be supported in the future.)

Syncrepl keeps track of the status of the replication content by maintaining and exchanging synchronization cookies. Because the syncrepl consumer and provider maintain their content status, the consumer can poll the provider content to perform incremental synchronization by asking for the entries required to make the consumer replica up-to-date with the provider content. Syncrepl also enables convenient management of replicas by maintaining replica status. The consumer replica can be constructed from a consumer-side or a provider-side backup at any synchronization status. Syncrepl can automatically bring the consumer replica up-to-date with the current provider content.
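
For orientation, replication of a database is ultimately driven by a single {{EX:syncrepl}} directive in the consumer's database section. The following is only a minimal sketch using placeholder hostnames and identities; the directive is covered in detail in the {{SECT:Configuring the different replication types}} section below:

> # Pull-based consumer sketch: poll the provider once a day
> syncrepl rid=001
>         provider=ldap://provider.example.com
>         type=refreshOnly
>         interval=01:00:00:00
>         searchbase="dc=example,dc=com"
>         bindmethod=simple
>         binddn="cn=syncuser,dc=example,dc=com"
>         credentials=secret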

Syncrepl supports both pull-based and push-based synchronization. In its basic refreshOnly synchronization mode, the provider uses pull-based synchronization where the consumer servers need not be tracked and no history information is maintained. The information required for the provider to process periodic polling requests is contained in the synchronization cookie of the request itself. To optimize the pull-based synchronization, syncrepl utilizes the present phase of the LDAP Sync protocol as well as its delete phase, instead of falling back on frequent full reloads. To further optimize the pull-based synchronization, the provider can maintain a per-scope session log as a history store. In its refreshAndPersist mode of synchronization, the provider uses push-based synchronization. The provider keeps track of the consumer servers that have requested a persistent search and sends them the necessary updates as the provider replication content gets modified.

With syncrepl, a consumer server can create a replica without changing the provider's configuration and without restarting the provider server, if the consumer server has appropriate access privileges for the DIT fragment to be replicated. The consumer server can also stop the replication without the need for provider-side changes or a restart.

Syncrepl supports partial, sparse, and fractional replication. The shadow DIT fragment is defined by general search criteria consisting of base, scope, filter, and attribute list. The replica content is also subject to the access privileges of the bind identity of the syncrepl replication connection.


H4: The LDAP Content Synchronization Protocol

The LDAP Sync protocol allows a client to maintain a synchronized copy of a DIT fragment. The LDAP Sync operation is defined as a set of controls and other protocol elements which extend the LDAP search operation. This section introduces the LDAP Content Sync protocol only briefly. For more information, refer to {{REF:RFC4533}}.

The LDAP Sync protocol supports both polling and listening for changes by defining two respective synchronization operations: {{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented by the {{refreshOnly}} operation. The consumer polls the provider using an LDAP Search request with an LDAP Sync control attached. The consumer copy is synchronized to the provider copy at the time of polling using the information returned in the search. The provider finishes the search operation by returning {{SearchResultDone}} at the end of the search operation, as in a normal search. Listening is implemented by the {{refreshAndPersist}} operation. As the name implies, it begins with a search, like refreshOnly. But instead of finishing the search after returning all entries currently matching the search criteria, the synchronization search remains persistent in the provider. Subsequent updates to the synchronization content in the provider cause additional entry updates to be sent to the consumer.

The {{refreshOnly}} operation and the refresh stage of the {{refreshAndPersist}} operation can be performed with a present phase or a delete phase.

In the present phase, the provider sends the consumer the entries updated within the search scope since the last synchronization. The provider sends all requested attributes, be they changed or not, of the updated entries.
For each unchanged entry which remains in the scope, the provider sends a present message consisting only of the name of the entry and the synchronization control representing state present. The present message does not contain any attributes of the entry. After the consumer receives all update and present entries, it can reliably determine the new consumer copy by adding the entries added to the provider, by replacing the entries modified at the provider, and by deleting entries in the consumer copy which have been neither updated nor specified as being present at the provider.

The transmission of the updated entries in the delete phase is the same as in the present phase. The provider sends all the requested attributes of the entries updated within the search scope since the last synchronization to the consumer. In the delete phase, however, the provider sends a delete message for each entry deleted from the search scope, instead of sending present messages. The delete message consists only of the name of the entry and the synchronization control representing state delete. The new consumer copy can be determined by adding, modifying, and removing entries according to the synchronization control attached to the {{SearchResultEntry}} message.

If the LDAP Sync provider maintains a history store and can determine which entries are scoped out of the consumer copy since the last synchronization time, the provider can use the delete phase. If the provider does not maintain any history store, cannot determine the scoped-out entries from the history store, or the history store does not cover the outdated synchronization state of the consumer, the provider should use the present phase. The use of the present phase is much more efficient than a full content reload in terms of the synchronization traffic. To reduce the synchronization traffic further, the LDAP Sync protocol also provides several optimizations such as the transmission of the normalized {{EX:entryUUID}}s and the transmission of multiple {{EX:entryUUID}}s in a single {{syncIdSet}} message.

At the end of the {{refreshOnly}} synchronization, the provider sends a synchronization cookie to the consumer as a state indicator of the consumer copy after the synchronization is completed. The consumer will present the received cookie when it requests the next incremental synchronization from the provider.

When {{refreshAndPersist}} synchronization is used, the provider sends a synchronization cookie at the end of the refresh stage by sending a Sync Info message with refreshDone=TRUE. It also sends a synchronization cookie by attaching it to {{SearchResultEntry}} messages generated in the persist stage of the synchronization search. During the persist stage, the provider can also send a Sync Info message containing the synchronization cookie at any time the provider wants to update the consumer-side state indicator.

In the LDAP Sync protocol, entries are uniquely identified by the {{EX:entryUUID}} attribute value, which can function as a reliable identifier of the entry. The DN of the entry, on the other hand, can change over time and hence cannot be considered a reliable identifier. The {{EX:entryUUID}} is attached to each {{SearchResultEntry}} or {{SearchResultReference}} as a part of the synchronization control.
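
Because LDAP Sync is defined as an extension of the search operation, it can be exercised directly with {{ldapsearch}}(1) via its {{EX:-E}} option, which can be handy for observing the protocol. A minimal illustration (the host and suffix are hypothetical, and the server must have the syncprov overlay configured, as described later in this chapter):

> # Issue a refreshOnly Sync search; a cookie from a previous run
> # can be supplied as sync=ro/<cookie> to request only changes.
> ldapsearch -x -H ldap://provider.example.com \
>         -b "dc=example,dc=com" -E sync=ro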

H4: Syncrepl Details

The syncrepl engine utilizes both the {{refreshOnly}} and the {{refreshAndPersist}} operations of the LDAP Sync protocol. If a syncrepl specification is included in a database definition, {{slapd}}(8) launches a syncrepl engine as a {{slapd}}(8) thread and schedules its execution. If the {{refreshOnly}} operation is specified, the syncrepl engine will be rescheduled at the interval time after a synchronization operation is completed. If the {{refreshAndPersist}} operation is specified, the engine will remain active and process the persistent synchronization messages from the provider.

The syncrepl engine utilizes both the present phase and the delete phase of the refresh synchronization. It is possible to configure a session log in the provider which stores the {{EX:entryUUID}}s of a finite number of entries deleted from a database. Multiple replicas share the same session log. The syncrepl engine uses the delete phase if the session log is present and the state of the consumer server is recent enough that no session log entries are truncated after the last synchronization of the client. The syncrepl engine uses the present phase if no session log is configured for the replication content or if the consumer replica is too outdated to be covered by the session log. The current design of the session log store is memory based, so the information contained in the session log is not persistent over multiple provider invocations. Accessing the session log store via LDAP operations is not currently supported, nor is imposing access control on the session log.

As a further optimization, even when the synchronization search is not associated with any session log, no entries will be transmitted to the consumer server when there has been no update in the replication context.

The syncrepl engine, which is a consumer-side replication engine, can work with any backend. The LDAP Sync provider can be configured as an overlay on any backend, but works best with the {{back-bdb}} or {{back-hdb}} backend.

The LDAP Sync provider maintains a {{EX:contextCSN}} for each database as the current synchronization state indicator of the provider content. It is the largest {{EX:entryCSN}} in the provider context such that no transaction for an entry having a smaller {{EX:entryCSN}} value remains outstanding. The {{EX:contextCSN}} cannot simply be set to the largest issued {{EX:entryCSN}} because {{EX:entryCSN}} is obtained before a transaction starts and transactions are not committed in the issue order.

The provider stores the {{EX:contextCSN}} of a context in the {{EX:contextCSN}} attribute of the context suffix entry. The attribute is not written to the database after every update operation though; instead it is maintained primarily in memory. At database start time the provider reads the last saved {{EX:contextCSN}} into memory and uses the in-memory copy exclusively thereafter. By default, changes to the {{EX:contextCSN}} as a result of database updates will not be written to the database until the server is cleanly shut down. A checkpoint facility exists to cause the {{EX:contextCSN}} to be written out more frequently if desired.
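
Because the {{EX:contextCSN}} is an operational attribute of the suffix entry, the current state indicator of a provider (or a consumer, as described below) can be inspected with an ordinary base-scope search. A small illustration, assuming a hypothetical server URL and suffix:

> # contextCSN is operational, so it must be requested by name
> ldapsearch -x -H ldap://provider.example.com \
>         -s base -b "dc=example,dc=com" contextCSN

Comparing this value between the provider and its consumers is a quick, if informal, way to check whether replication has caught up.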

Note that at startup time, if the provider is unable to read a {{EX:contextCSN}} from the suffix entry, it will scan the entire database to determine the value, and this scan may take quite a long time on a large database. When a {{EX:contextCSN}} value is read, the database will still be scanned for any {{EX:entryCSN}} values greater than it, to make sure the {{EX:contextCSN}} value truly reflects the greatest committed {{EX:entryCSN}} in the database. On databases which support inequality indexing, setting an eq index on the {{EX:entryCSN}} attribute and configuring {{EX:contextCSN}} checkpoints will greatly speed up this scanning step.

If no {{EX:contextCSN}} can be determined by reading and scanning the database, a new value will be generated. Also, if scanning the database yielded a greater {{EX:entryCSN}} than was previously recorded in the suffix entry's {{EX:contextCSN}} attribute, a checkpoint will be immediately written with the new value.

The consumer also stores its replica state, which is the provider's {{EX:contextCSN}} received as a synchronization cookie, in the {{EX:contextCSN}} attribute of the suffix entry. The replica state maintained by a consumer server is used as the synchronization state indicator when it performs subsequent incremental synchronization with the provider server. It is also used as a provider-side synchronization state indicator when it functions as a secondary provider server in a cascading replication configuration. Since the consumer and provider state information are maintained in the same location within their respective databases, any consumer can be promoted to a provider (and vice versa) without any special actions.

Because a general search filter can be used in the syncrepl specification, some entries in the context may be omitted from the synchronization content. The syncrepl engine creates a glue entry to fill in the holes in the replica context if any part of the replica content is subordinate to the holes. The glue entries will not be returned in the search result unless the {{ManageDsaIT}} control is provided.

Also as a consequence of the search filter used in the syncrepl specification, it is possible for a modification to remove an entry from the replication scope even though the entry has not been deleted on the provider. Logically the entry must be deleted on the consumer, but in {{refreshOnly}} mode the provider cannot detect and propagate this change without the use of the session log on the provider.

For configuration, please see the {{SECT:Syncrepl}} section.


H2: Deployment Alternatives

While the LDAP Sync specification only defines a narrow scope for replication, the OpenLDAP implementation is extremely flexible and supports a variety of operating modes to handle other scenarios not explicitly addressed in the spec.


H3: Delta-syncrepl replication

* Disadvantages of LDAP Sync replication:

LDAP Sync replication is an object-based replication mechanism. When any attribute value in a replicated object is changed on the provider, each consumer fetches and processes the complete changed object, including {{B:both the changed and unchanged attribute values}} during replication.
One advantage of this approach is that when multiple changes occur to a single object, the precise sequence of those changes need not be preserved; only the final state of the entry is significant. But this approach may have drawbacks when the usage pattern involves single changes to multiple objects.

For example, suppose you have a database consisting of 102,400 objects of 1 KB each. Further, suppose you routinely run a batch job to change the value of a single two-byte attribute value that appears in each of the 102,400 objects on the master. Not counting LDAP and TCP/IP protocol overhead, each time you run this job each consumer will transfer and process {{B:100 MB}} of data to process {{B:200 KB of changes!}}

99.8% of the data that is transmitted and processed in a case like this will be redundant, since it represents values that did not change. This is a waste of valuable transmission and processing bandwidth and can cause an unacceptable replication backlog to develop. While this situation is extreme, it serves to demonstrate a very real problem that is encountered in some LDAP deployments.


* Where Delta-syncrepl comes in:

Delta-syncrepl, a changelog-based variant of syncrepl, is designed to address situations like the one described above. Delta-syncrepl works by maintaining a changelog of a selectable depth on the provider. The replication consumer checks the changelog for the changes it needs and, as long as the changelog contains the needed changes, the consumer fetches the changes from the changelog and applies them to its database. If, however, a replica is too far out of sync (or completely empty), conventional syncrepl is used to bring it up to date and replication then switches back to the delta-syncrepl mode.

For configuration, please see the {{SECT:Delta-syncrepl}} section.


H3: N-Way Multi-Master replication

Multi-Master replication is a replication technique using Syncrepl to replicate data to multiple provider ("Master") Directory servers.

H4: Valid Arguments for Multi-Master replication

* If any provider fails, other providers will continue to accept updates
* Avoids a single point of failure
* Providers can be located in several physical sites, i.e. distributed across the network/globe
* Good for automatic failover/high availability

H4: Invalid Arguments for Multi-Master replication

(These are often claimed to be advantages of Multi-Master replication but those claims are false):

* It has {{B:NOTHING}} to do with load balancing
* Providers {{B:must}} propagate writes to {{B:all}} the other servers, which means the network traffic and write load spread across all of the servers the same as for single-master
* Server utilization and performance are at best identical for Multi-Master and Single-Master replication; at worst Single-Master is superior because indexing can be tuned differently to optimize for the different usage patterns between the provider and the consumers

H4: Arguments against Multi-Master replication

* Breaks the data consistency guarantees of the directory model
* {{URL:http://www.openldap.org/faq/data/cache/1240.html}}
* If connectivity with a provider is lost because of a network partition, then "automatic failover" can just compound the problem
* Typically, a particular machine cannot distinguish between losing contact with a peer because that peer crashed and losing contact because the network link has failed
* If a network is partitioned and multiple clients start writing to each of the "masters" then reconciliation will be a pain; it may be best to simply deny writes to the clients that are partitioned from the single provider

For configuration, please see the {{SECT:N-Way Multi-Master}} section below.

H3: MirrorMode replication

MirrorMode is a hybrid configuration that provides all of the consistency guarantees of single-master replication, while also providing the high availability of multi-master. In MirrorMode two providers are set up to replicate from each other (as a multi-master configuration), but an external frontend is employed to direct all writes to only one of the two servers. The second provider will only be used for writes if the first provider crashes, at which point the frontend will switch to directing all writes to the second provider. When a crashed provider is repaired and restarted it will automatically catch up to any changes on the running provider and resync.

H4: Arguments for MirrorMode

* Provides a high-availability (HA) solution for directory writes (replicas handle reads)
* As long as one provider is operational, writes can safely be accepted
* Provider nodes replicate from each other, so they are always up to date and can be ready to take over (hot standby)
* Syncrepl also allows the provider nodes to re-synchronize after any downtime


H4: Arguments against MirrorMode

* MirrorMode is not what is termed a Multi-Master solution, because writes have to go to just one of the mirror nodes at a time
* MirrorMode can be termed Active-Active Hot-Standby, therefore an external server (slapd in proxy mode) or device (hardware load balancer) is needed to manage which provider is currently active
* Backups are managed slightly differently
- If backing up the Berkeley database itself and periodically backing up the transaction log files, then the same member of the mirror pair needs to be used to collect logfiles until the next database backup is taken
* Delta-Syncrepl is not yet supported

For configuration, please see the {{SECT:MirrorMode}} section below.


H3: Syncrepl Proxy Mode

While the LDAP Sync protocol supports both pull- and push-based replication, the push mode (refreshAndPersist) must still be initiated from the consumer before the provider can begin pushing changes. In some network configurations, particularly where firewalls restrict the direction in which connections can be made, a provider-initiated push mode may be needed.

This mode can be configured with the aid of the LDAP Backend ({{SECT:Backends}} and {{slapd-ldap}}(8)). Instead of running the syncrepl engine on the actual consumer, a slapd-ldap proxy is set up near (or collocated with) the provider that points to the consumer, and the syncrepl engine runs on the proxy.
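
In outline, the proxy database combines three pieces: a {{slapd-ldap}}(8) database whose URI points at the consumer, a syncrepl directive that pulls from the provider, and the syncprov overlay to push the changes onward. The following is only a sketch with hypothetical hostnames and credentials; complete worked examples appear later in this chapter:

> database ldap
> # ignore conflicts with other databases; we push to the same suffix
> hidden on
> suffix "dc=example,dc=com"
> rootdn "cn=slapd-ldap"
> uri ldap://consumer.example.com/
> lastmod on
> restrict all
>
> acl-bind bindmethod=simple
>         binddn="cn=replicator,dc=example,dc=com"
>         credentials=secret
>
> syncrepl rid=001
>         provider=ldap://provider.example.com/
>         binddn="cn=replicator,dc=example,dc=com"
>         bindmethod=simple
>         credentials=secret
>         searchbase="dc=example,dc=com"
>         type=refreshAndPersist
>         retry="60 +"
>
> overlay syncprov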

For configuration, please see the {{SECT:Syncrepl Proxy}} section.

H4: Replacing Slurpd

The old {{slurpd}} mechanism only operated in provider-initiated push mode. Slurpd replication was deprecated in favor of Syncrepl replication and has been completely removed from OpenLDAP 2.4.

The slurpd daemon was the original replication mechanism inherited from UMich's LDAP and operated in push mode: the master pushed changes to the slaves. It was replaced for many reasons, in brief:

 * It was not reliable
 ** It was extremely sensitive to the ordering of records in the replog
 ** It could easily go out of sync, at which point manual intervention was required to resync the slave database with the master directory
 ** It wasn't very tolerant of unavailable servers. If a slave went down for a long time, the replog could grow to a size that was too large for slurpd to process
 * It only worked in push mode
 * It required stopping and restarting the master to add new slaves
 * It only supported single master replication

Syncrepl has none of those weaknesses:

 * Syncrepl is self-synchronizing; you can start with a consumer database in any state, from totally empty to fully synced, and it will automatically do the right thing to achieve and maintain synchronization
 ** It is completely insensitive to the order in which changes occur
 ** It guarantees convergence between the consumer and the provider content without manual intervention
 ** It can resynchronize regardless of how long a consumer stays out of contact with the provider
 * Syncrepl can operate in either direction
 * Consumers can be added at any time without touching anything on the provider
 * Multi-master replication is supported


H2: Configuring the different replication types

H3: Syncrepl

H4: Syncrepl configuration

Because syncrepl is a consumer-side replication engine, the syncrepl specification is defined in {{slapd.conf}}(5) of the consumer server, not in the provider server's configuration file. The initial loading of the replica content can be performed either by starting the syncrepl engine with no synchronization cookie or by populating the consumer replica by loading an {{TERM:LDIF}} file dumped as a backup at the provider.

When loading from a backup, it is not necessary to perform the initial load from an up-to-date backup of the provider content. The syncrepl engine will automatically synchronize the initial consumer replica to the current provider content. As a result, it is not necessary to stop the provider server in order to avoid replica inconsistency caused by updates to the provider content during the content backup and loading process.

When replicating a large-scale directory, especially in a bandwidth-constrained environment, it is advisable to load the consumer replica from a backup instead of performing a full initial load using syncrepl.


H4: Set up the provider slapd

The provider is implemented as an overlay, so the overlay itself must first be configured in {{slapd.conf}}(5) before it can be used. The provider has only two configuration directives: one for setting checkpoints on the {{EX:contextCSN}} and one for configuring the session log.
Because the LDAP Sync search is subject to access control, proper access control privileges should be set up for the replicated content.

The {{EX:contextCSN}} checkpoint is configured by the

> syncprov-checkpoint <ops> <minutes>

directive. Checkpoints are only tested after successful write operations. If {{<ops>}} operations or more than {{<minutes>}} time has passed since the last checkpoint, a new checkpoint is performed.

The session log is configured by the

> syncprov-sessionlog <size>

directive, where {{<size>}} is the maximum number of session log entries the session log can record. When a session log is configured, it is automatically used for all LDAP Sync searches within the database.

Note that using the session log requires searching on the {{EX:entryUUID}} attribute. Setting an eq index on this attribute will greatly benefit the performance of the session log on the provider.

A more complete example of the {{slapd.conf}}(5) content is thus:

> database bdb
> suffix dc=Example,dc=com
> rootdn dc=Example,dc=com
> directory /var/ldap/db
> index objectclass,entryCSN,entryUUID eq
>
> overlay syncprov
> syncprov-checkpoint 100 10
> syncprov-sessionlog 100


H4: Set up the consumer slapd

The syncrepl replication is specified in the database section of {{slapd.conf}}(5) for the replica context. The syncrepl engine is backend independent and the directive can be defined with any database type.

> database hdb
> suffix dc=Example,dc=com
> rootdn dc=Example,dc=com
> directory /var/ldap/db
> index objectclass,entryCSN,entryUUID eq
>
> syncrepl rid=123
>         provider=ldap://provider.example.com:389
>         type=refreshOnly
>         interval=01:00:00:00
>         searchbase="dc=example,dc=com"
>         filter="(objectClass=organizationalPerson)"
>         scope=sub
>         attrs="cn,sn,ou,telephoneNumber,title,l"
>         schemachecking=off
>         bindmethod=simple
>         binddn="cn=syncuser,dc=example,dc=com"
>         credentials=secret

In this example, the consumer will connect to the provider {{slapd}}(8) at {{EX:ldap://provider.example.com}}, port 389, to perform a polling ({{refreshOnly}}) mode of synchronization once a day. It will bind as {{EX:cn=syncuser,dc=example,dc=com}} using simple authentication with password "secret". Note that the access control privileges of {{EX:cn=syncuser,dc=example,dc=com}} should be set appropriately in the provider to retrieve the desired replication content. Also the search limits must be high enough on the provider to allow the syncuser to retrieve a complete copy of the requested content. The consumer uses the rootdn to write to its database, so it always has full permissions to write all content.

The synchronization search in the above example will search for entries whose objectClass is organizationalPerson in the entire subtree rooted at {{EX:dc=example,dc=com}}. The requested attributes are {{EX:cn}}, {{EX:sn}}, {{EX:ou}}, {{EX:telephoneNumber}}, {{EX:title}}, and {{EX:l}}. The schema checking is turned off, so that the consumer {{slapd}}(8) will not enforce entry schema checking when it processes updates from the provider {{slapd}}(8).

For more detailed information on the syncrepl directive, see the {{SECT:syncrepl}} section of {{SECT:The slapd Configuration File}} chapter of this admin guide.
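
As a sketch of the provider-side privileges described above (to be merged with the provider's existing ACLs; see {{slapd.access}}(5)), one might grant the example identity read access and unlimited search limits:

> # Allow the syncrepl identity to read the replicated content;
> # "by * break" lets later ACLs apply to other identities.
> access to dn.subtree="dc=example,dc=com"
>         by dn.exact="cn=syncuser,dc=example,dc=com" read
>         by * break
>
> # Lift the provider's search limits for the syncrepl identity
> limits dn.exact="cn=syncuser,dc=example,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited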

H4: Start the provider and the consumer slapd

The provider {{slapd}}(8) is not required to be restarted. {{EX:contextCSN}} is automatically generated as needed: it might be originally contained in the {{TERM:LDIF}} file, generated by {{slapadd}}(8), generated upon changes in the context, or generated when the first LDAP Sync search arrives at the provider. If an LDIF file is being loaded which did not previously contain the {{EX:contextCSN}}, the {{-w}} option should be used with {{slapadd}}(8) to cause it to be generated. This will allow the server to start up a little quicker the first time it runs.

When starting a consumer {{slapd}}(8), it is possible to provide a synchronization cookie as the {{-c cookie}} command line option in order to start the synchronization from a specific state. The cookie is a comma-separated list of name=value pairs. Currently supported syncrepl cookie fields are {{csn=<csn>}} and {{rid=<rid>}}. {{<csn>}} represents the current synchronization state of the consumer replica. {{<rid>}} identifies a consumer replica locally within the consumer server. It is used to relate the cookie to the syncrepl definition in {{slapd.conf}}(5) which has the matching replica identifier. The {{<rid>}} must have no more than 3 decimal digits. The command line cookie overrides the synchronization cookie stored in the consumer replica database.


H3: Delta-syncrepl

H4: Delta-syncrepl Provider configuration

Setting up delta-syncrepl requires configuration changes on both the master and replica servers:

> # Give the replica DN unlimited read access. This ACL needs to be
> # merged with other ACL statements, and/or moved within the scope
> # of a database. The "by * break" portion causes evaluation of
> # subsequent rules. See slapd.access(5) for details.
> access to *
>         by dn.base="cn=replicator,dc=symas,dc=com" read
>         by * break
>
> # Set the module path location
> modulepath /opt/symas/lib/openldap
>
> # Load the hdb backend
> moduleload back_hdb.la
>
> # Load the accesslog overlay
> moduleload accesslog.la
>
> # Load the syncprov overlay
> moduleload syncprov.la
>
> # Accesslog database definitions
> database hdb
> suffix cn=accesslog
> directory /db/accesslog
> rootdn cn=accesslog
> index default eq
> index entryCSN,objectClass,reqEnd,reqResult,reqStart
>
> overlay syncprov
> syncprov-nopresent TRUE
> syncprov-reloadhint TRUE
>
> # Let the replica DN have limitless searches
> limits dn.exact="cn=replicator,dc=symas,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
>
> # Primary database definitions
> database hdb
> suffix "dc=symas,dc=com"
> rootdn "cn=manager,dc=symas,dc=com"
>
> ## Whatever other configuration options are desired
>
> # syncprov specific indexing
> index entryCSN eq
> index entryUUID eq
>
> # syncrepl Provider for primary db
> overlay syncprov
> syncprov-checkpoint 1000 60
>
> # accesslog overlay definitions for primary db
> overlay accesslog
> logdb cn=accesslog
> logops writes
> logsuccess TRUE
> # scan the accesslog DB every day, and purge entries older than 7 days
> logpurge 07+00:00 01+00:00
>
> # Let the replica DN have limitless searches
> limits dn.exact="cn=replicator,dc=symas,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited

For more information, always consult the relevant man pages ({{slapo-accesslog}}(5) and {{slapd.conf}}(5)).


H4: Delta-syncrepl Consumer configuration

> # Replica database configuration
> database hdb
> suffix "dc=symas,dc=com"
> rootdn "cn=manager,dc=symas,dc=com"
>
> ## Whatever other configuration bits for the replica, like indexing
> ## that you want
>
> # syncrepl specific indices
> index entryUUID eq
>
> # syncrepl directives
> syncrepl rid=0
>         provider=ldap://ldapmaster.symas.com:389
>         bindmethod=simple
>         binddn="cn=replicator,dc=symas,dc=com"
>         credentials=secret
>         searchbase="dc=symas,dc=com"
>         logbase="cn=accesslog"
>         logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
>         schemachecking=on
>         type=refreshAndPersist
>         retry="60 +"
>         syncdata=accesslog
>
> # Refer updates to the master
> updateref ldap://ldapmaster.symas.com


The above configuration assumes that you have a replicator identity defined in your database that can be used to bind to the provider. In addition, all of the databases (primary, replica, and the accesslog storage database) should also have properly tuned {{DB_CONFIG}} files that meet your needs.


H3: N-Way Multi-Master

For the following example we will be using 3 Master nodes.
Keeping in line with {{B:test050-syncrepl-multimaster}} of the OpenLDAP test suite, we will be configuring {{slapd}}(8) via {{B:cn=config}}.

This sets up the config database:

> dn: cn=config
> objectClass: olcGlobal
> cn: config
> olcServerID: 1
>
> dn: olcDatabase={0}config,cn=config
> objectClass: olcDatabaseConfig
> olcDatabase: {0}config
> olcRootPW: secret

The second and third servers will of course have a different olcServerID:

> dn: cn=config
> objectClass: olcGlobal
> cn: config
> olcServerID: 2
>
> dn: olcDatabase={0}config,cn=config
> objectClass: olcDatabaseConfig
> olcDatabase: {0}config
> olcRootPW: secret

This sets up syncrepl as a provider (since these are all masters):

> dn: cn=module,cn=config
> objectClass: olcModuleList
> cn: module
> olcModulePath: /usr/local/libexec/openldap
> olcModuleLoad: syncprov.la

Now we set up the first Master node (replace $URI1, $URI2, $URI3, etc. with your actual LDAP URLs):

> dn: cn=config
> changetype: modify
> replace: olcServerID
> olcServerID: 1 $URI1
> olcServerID: 2 $URI2
> olcServerID: 3 $URI3
>
> dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
> changetype: add
> objectClass: olcOverlayConfig
> objectClass: olcSyncProvConfig
> olcOverlay: syncprov
>
> dn: olcDatabase={0}config,cn=config
> changetype: modify
> add: olcSyncRepl
> olcSyncRepl: rid=001 provider=$URI1 binddn="cn=config" bindmethod=simple
>   credentials=secret searchbase="cn=config" type=refreshAndPersist
>   retry="5 5 300 5" timeout=1
> olcSyncRepl: rid=002 provider=$URI2 binddn="cn=config" bindmethod=simple
>   credentials=secret searchbase="cn=config" type=refreshAndPersist
>   retry="5 5 300 5" timeout=1
> olcSyncRepl: rid=003 provider=$URI3 binddn="cn=config" bindmethod=simple
>   credentials=secret searchbase="cn=config" type=refreshAndPersist
>   retry="5 5 300 5" timeout=1
> -
> add: olcMirrorMode
> olcMirrorMode: TRUE

Now start up the Master and the consumer(s), and add the above LDIF to the first consumer, second consumer, etc. It will then replicate {{B:cn=config}}. You now have N-Way Multi-Master on the config database.

We still have to replicate the actual data, not just the config, so add the following to the master (all active and configured consumers/masters will pull down this config, as they are all syncing).
Also, replace all {{${}}} variables with whatever is applicable to your setup:

> dn: olcDatabase={1}$BACKEND,cn=config
> objectClass: olcDatabaseConfig
> objectClass: olc${BACKEND}Config
> olcDatabase: {1}$BACKEND
> olcSuffix: $BASEDN
> olcDbDirectory: ./db
> olcRootDN: $MANAGERDN
> olcRootPW: $PASSWD
> olcLimits: dn.exact="$MANAGERDN" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
> olcSyncRepl: rid=004 provider=$URI1 binddn="$MANAGERDN" bindmethod=simple
>   credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
>   interval=00:00:00:10 retry="5 5 300 5" timeout=1
> olcSyncRepl: rid=005 provider=$URI2 binddn="$MANAGERDN" bindmethod=simple
>   credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
>   interval=00:00:00:10 retry="5 5 300 5" timeout=1
> olcSyncRepl: rid=006 provider=$URI3 binddn="$MANAGERDN" bindmethod=simple
>   credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
>   interval=00:00:00:10 retry="5 5 300 5" timeout=1
> olcMirrorMode: TRUE
>
> dn: olcOverlay=syncprov,olcDatabase={1}${BACKEND},cn=config
> changetype: add
> objectClass: olcOverlayConfig
> objectClass: olcSyncProvConfig
> olcOverlay: syncprov

Note: All of your servers' clocks must be tightly synchronized using e.g. NTP ({{URL:http://www.ntp.org/}}), an atomic clock, or some other reliable time reference.

Note: As stated in {{slapd-config}}(5), URLs specified in {{olcSyncRepl}} directives are the URLs of the servers from which to replicate. These must exactly match the URLs {{slapd}} listens on ({{-h}} in {{SECT:Command-Line Options}}). Otherwise slapd may attempt to replicate from itself, causing a loop.

H3: MirrorMode

MirrorMode configuration is actually very easy. If you have ever set up a normal slapd syncrepl provider, then the only change is the following two directives:

> mirrormode on
> serverID 1

Note: You need to make sure that the {{serverID}} of each mirror node is different and add it as a global configuration option.

H4: Mirror Node Configuration

The first step is to configure the syncrepl provider the same as in the {{SECT:Set up the provider slapd}} section.

Note: Delta-syncrepl is not yet supported with MirrorMode.

Here's a specific cut-down example using {{SECT:LDAP Sync Replication}} in {{refreshAndPersist}} mode:

MirrorMode node 1:

> # Global section
> serverID 1
> # database section
>
> # syncrepl directive
> syncrepl rid=001
>         provider=ldap://ldap-sid2.example.com
>         bindmethod=simple
>         binddn="cn=mirrormode,dc=example,dc=com"
>         credentials=mirrormode
>         searchbase="dc=example,dc=com"
>         schemachecking=on
>         type=refreshAndPersist
>         retry="60 +"
>
> mirrormode on

MirrorMode node 2:

> # Global section
> serverID 2
> # database section
>
> # syncrepl directive
> syncrepl rid=001
>         provider=ldap://ldap-sid1.example.com
>         bindmethod=simple
>         binddn="cn=mirrormode,dc=example,dc=com"
>         credentials=mirrormode
>         searchbase="dc=example,dc=com"
>         schemachecking=on
>         type=refreshAndPersist
>         retry="60 +"
>
> mirrormode on

It's really quite simple: each MirrorMode node is set up {{B:exactly}} the same, except that the {{serverID}} is unique, and each consumer is pointed at the other server.

H5: Failover Configuration

There are generally two choices for this:
1) hardware proxies/load balancing or dedicated proxy software, or 2) using a Back-LDAP proxy as a syncrepl provider.

A typical enterprise example might be:

!import "dual_dc.png"; align="center"; title="MirrorMode Enterprise Configuration"
FT[align="Center"] Figure X.Y: MirrorMode in a Dual Data Center Configuration

H5: Normal Consumer Configuration

This is exactly the same as the {{SECT:Set up the consumer slapd}} section. It can be set up either in normal {{SECT:syncrepl replication}} mode or in {{SECT:delta-syncrepl replication}} mode.

H4: MirrorMode Summary

You will now have a directory architecture that provides all of the consistency guarantees of single-master replication, while also providing the high availability of multi-master replication.


H3: Syncrepl Proxy

!import "push-based-complete.png"; align="center"; title="Syncrepl Proxy Mode"
FT[align="Center"] Figure X.Y: Replacing slurpd

The following example is for a self-contained push-based replication solution:

> #######################################################################
> # Standard OpenLDAP Master/Provider
> #######################################################################
>
> include /usr/local/etc/openldap/schema/core.schema
> include /usr/local/etc/openldap/schema/cosine.schema
> include /usr/local/etc/openldap/schema/nis.schema
> include /usr/local/etc/openldap/schema/inetorgperson.schema
>
> include /usr/local/etc/openldap/slapd.acl
>
> modulepath /usr/local/libexec/openldap
> moduleload back_hdb.la
> moduleload syncprov.la
> moduleload back_monitor.la
> moduleload back_ldap.la
>
> pidfile /usr/local/var/slapd.pid
> argsfile /usr/local/var/slapd.args
>
> loglevel sync stats
>
> database hdb
> suffix "dc=suretecsystems,dc=com"
> directory /usr/local/var/openldap-data
>
> checkpoint 1024 5
> cachesize 10000
> idlcachesize 10000
>
> index objectClass eq
> # rest of indexes
> index default sub
>
> rootdn "cn=admin,dc=suretecsystems,dc=com"
> rootpw testing
>
> # syncprov specific indexing
> index entryCSN eq
> index entryUUID eq
>
> # syncrepl Provider for primary db
> overlay syncprov
> syncprov-checkpoint 1000 60
>
> # Let the replica DN have limitless searches
> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
>
> database monitor
>
> database config
> rootpw testing
>
> ##############################################################################
> # Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
> ##############################################################################
>
> database ldap
> # ignore conflicts with other databases, as we need to push out to same suffix
> hidden on
> suffix "dc=suretecsystems,dc=com"
> rootdn "cn=slapd-ldap"
> uri ldap://localhost:9012/
>
> lastmod on
>
> # We don't need any access to this DSA
> restrict all
>
> acl-bind bindmethod=simple
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         credentials=testing
>
> syncrepl rid=001
>         provider=ldap://localhost:9011/
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         bindmethod=simple
>         credentials=testing
>         searchbase="dc=suretecsystems,dc=com"
>         type=refreshAndPersist
>         retry="5 5 300 5"
>
> overlay syncprov

A replica configuration for this type of setup could be:

> #######################################################################
> # Standard OpenLDAP Slave without Syncrepl
> #######################################################################
>
> include /usr/local/etc/openldap/schema/core.schema
> include /usr/local/etc/openldap/schema/cosine.schema
> include /usr/local/etc/openldap/schema/nis.schema
> include /usr/local/etc/openldap/schema/inetorgperson.schema
>
> include /usr/local/etc/openldap/slapd.acl
>
> modulepath /usr/local/libexec/openldap
> moduleload back_hdb.la
> moduleload syncprov.la
> moduleload back_monitor.la
> moduleload back_ldap.la
>
> pidfile /usr/local/var/slapd.pid
> argsfile /usr/local/var/slapd.args
>
> loglevel sync stats
>
> database hdb
> suffix "dc=suretecsystems,dc=com"
> directory /usr/local/var/openldap-slave/data
>
> checkpoint 1024 5
> cachesize 10000
> idlcachesize 10000
>
> index objectClass eq
> # rest of indexes
> index default sub
>
> rootdn "cn=admin,dc=suretecsystems,dc=com"
> rootpw testing
>
> # Let the replica DN have limitless searches
> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
>
> updatedn "cn=replicator,dc=suretecsystems,dc=com"
>
> # Refer updates to the master
> updateref ldap://localhost:9011
>
> database monitor
>
> database config
> rootpw testing

You can see that we use the {{updatedn}} directive here. Example ACLs ({{F:/usr/local/etc/openldap/slapd.acl}}) for this could be:

> # Give the replica DN unlimited read access. This ACL may need to be
> # merged with other ACL statements.
>
> access to *
>      by dn.base="cn=replicator,dc=suretecsystems,dc=com" write
>      by * break
>
> access to dn.base=""
>      by * read
>
> access to dn.base="cn=Subschema"
>      by * read
>
> access to dn.subtree="cn=Monitor"
>      by dn.exact="uid=admin,dc=suretecsystems,dc=com" write
>      by users read
>      by * none
>
> access to *
>      by self write
>      by * read

In order to support more replicas, just add more {{database ldap}} sections and increment the {{syncrepl rid}} number accordingly.
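
For instance, a second replica (listening on a hypothetical port 9013) could be fed by appending another proxy section that differs only in its target URI and rid:

> database ldap
> hidden on
> suffix "dc=suretecsystems,dc=com"
> rootdn "cn=slapd-ldap"
> uri ldap://localhost:9013/
>
> lastmod on
> restrict all
>
> acl-bind bindmethod=simple
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         credentials=testing
>
> syncrepl rid=002
>         provider=ldap://localhost:9011/
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         bindmethod=simple
>         credentials=testing
>         searchbase="dc=suretecsystems,dc=com"
>         type=refreshAndPersist
>         retry="5 5 300 5"
>
> overlay syncprov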

Note: You must populate the Master and Slave directories with the same data, unlike when using normal Syncrepl.

If you do not have access to modify the master directory configuration you can configure a standalone LDAP proxy, which might look like:

!import "push-based-standalone.png"; align="center"; title="Syncrepl Standalone Proxy Mode"
FT[align="Center"] Figure X.Y: Replacing slurpd with a standalone version

The following configuration is an example of a standalone LDAP Proxy:

> include /usr/local/etc/openldap/schema/core.schema
> include /usr/local/etc/openldap/schema/cosine.schema
> include /usr/local/etc/openldap/schema/nis.schema
> include /usr/local/etc/openldap/schema/inetorgperson.schema
>
> include /usr/local/etc/openldap/slapd.acl
>
> modulepath /usr/local/libexec/openldap
> moduleload syncprov.la
> moduleload back_ldap.la
>
> ##############################################################################
> # Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
> ##############################################################################
>
> database ldap
> # ignore conflicts with other databases, as we need to push out to same suffix
> hidden on
> suffix "dc=suretecsystems,dc=com"
> rootdn "cn=slapd-ldap"
> uri ldap://localhost:9012/
>
> lastmod on
>
> # We don't need any access to this DSA
> restrict all
>
> acl-bind bindmethod=simple
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         credentials=testing
>
> syncrepl rid=001
>         provider=ldap://localhost:9011/
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         bindmethod=simple
>         credentials=testing
>         searchbase="dc=suretecsystems,dc=com"
>         type=refreshAndPersist
>         retry="5 5 300 5"
>
> overlay syncprov

As you can see, you can let your imagination go wild using Syncrepl and {{slapd-ldap}}(8), tailoring your replication to fit your specific network topology.