Thursday 22 December 2011

Eventual Consistency in MySQL Cluster - implementation part 3




As promised, this is the final post in a series looking at eventual consistency with MySQL Cluster asynchronous replication. This time I'll describe the transaction dependency tracking used with NDB$EPOCH_TRANS and review some of the implementation properties.

Transaction based conflict handling with NDB$EPOCH_TRANS

NDB$EPOCH_TRANS is almost exactly the same as NDB$EPOCH, except that when a conflict is detected on a row, the whole user transaction which made the conflicting row change is marked as conflicting, along with any dependent transactions. All of these rejected row operations are then handled using inserts to an exceptions table and realignment operations. This helps avoid the row-shear problems described here.

Including user transaction ids in the Binlog

Ndb Binlog epoch transactions contain row events from all the user transactions which committed in an epoch. However, there is no information in the Binlog indicating which user transaction caused each row event. To allow detected conflicts to 'rollback' the other rows modified in the same user transaction, the Slave applying an epoch transaction needs to know which user transaction was responsible for each of the row events in the epoch transaction. This information can now be recorded in the Binlog by using the --ndb-log-transaction-id MySQLD option. Logging Ndb user transaction ids against rows in turn requires a v2 format RBR Binlog, enabled with the --log-bin-use-v1-row-events=0 option. The mysqlbinlog --verbose tool can be used to see per-row transaction information in the Binlog.

User transaction ids in the Binlog are useful for NDB$EPOCH_TRANS and more. One interesting possibility is to use the user transaction ids and same-row operation dependencies to sort the row events inside an epoch into a partial order. This could enable recovery to a consistent point other than an epoch boundary. A project for a rainy day perhaps?

NDB$EPOCH_TRANS multiple slave passes

Initially, NDB$EPOCH_TRANS proceeds in the same way as NDB$EPOCH, attempting to apply replicated row changes, with interpreted code attached to detect conflicts. If no row conflicts are detected, the epoch transaction is committed as normal with the same minimal overhead as NDB$EPOCH. However if a row conflict is detected, the epoch transaction is rolled back, and reapplied. This is where NDB$EPOCH_TRANS starts to diverge from NDB$EPOCH.

In this second pass, the user transaction ids of rows with detected conflicts are tracked, along with any inter-transaction dependencies detectable from the Binlog. At the end of the second pass, prior to commit, the set of conflicting user transactions is combined with the user transaction dependency data to get a complete set of conflicting user transactions. The epoch transaction initiated in the second pass is then rolled-back and a third pass begins.

In the third pass, only row events for non-conflicting transactions are applied, though these are still applied with conflict detecting interpreted programs attached in case a further conflict has arisen since the second pass. Conflict handling for row events belonging to conflicting transactions is performed in the same way as NDB$EPOCH. Prior to commit, the applied row events are checked for further conflicts. If further conflicts have occurred then the epoch transaction is rolled back again and we return to the second pass. If no further conflicts have occurred then the epoch transaction is committed.

These three passes, and the associated rollbacks, are only externally visible via new counters added to the MySQLD server. From an external observer's point of view, only non-conflicting transactions are committed, and all row events associated with conflicting transactions are handled as conflicts. As an optimisation, when transactional conflicts have been detected, further epochs are handled with just two passes (second and third) to improve efficiency. Once an epoch transaction with no conflicts has been applied, further epochs are initially handled with the more optimistic and efficient first pass.

Dependency tracking implementation

To build the set of inter-transaction dependencies and conflicts, two hash tables are used. The first is a unique hashmap mapping row event tables and primary keys to transaction ids. If two events for the same table and primary key are found in a single epoch transaction then there is a dependency between those events, specifically the second event depends on the first. If the events belong to different user transactions then there is a dependency between the transactions.

Transaction dependency detection hash :
{Table, Primary keys} -> {Transaction id}

The second hash table is a hashmap of transaction id to an in-conflict marker and a list of dependent user transactions. When transaction dependencies are discovered using the first dependency detection hash, the second hash is modified to reflect the dependency. By the end of processing the epoch transaction, all dependencies detectable from the Binlog are described.

Transaction dependency tracking and conflict marking hash :
{Transaction id} -> {in_conflict, List}

As epoch operations are applied and row conflicts are detected, the operation's user transaction id is marked in the dependency hash as in-conflict. When marking a transaction as in-conflict, all of its dependent transactions must also be transitively marked as in-conflict. This is done by a traverse through the dependency tree of the in-conflict transaction. Due to slave batching, the addition of new dependencies and the marking of conflicting transactions is interleaved, so adding a dependency can result in a sub-tree being marked as in-conflict.

After the second pass is complete, the transaction dependency hash is used as a simple hash for looking up whether a particular transaction id is in conflict or not :

Transaction in-conflict lookup hash :
{Transaction id} -> {in_conflict}

This is used in the third pass to determine whether to apply each row event, or to proceed straight to conflict handling.
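
To make the two hashes concrete, here is a small self-contained Python sketch of the dependency tracking and transitive in-conflict marking described above. It is a model for illustration only, not the MySQLD implementation; the table names, keys and transaction ids are invented.

  # Toy model of the second-pass tracking : a dependency detection hash
  # ({table, primary key} -> last writing transaction id) and a transaction
  # hash (transaction id -> in-conflict flag + dependent transactions).
  from collections import defaultdict

  class TransactionDependencyTracker:
      def __init__(self):
          self.last_writer = {}                  # (table, pk) -> trans_id
          self.dependents = defaultdict(set)     # trans_id -> {dependent trans_ids}
          self.in_conflict = set()               # transactions marked in-conflict

      def add_row_event(self, table, pk, trans_id):
          prev = self.last_writer.get((table, pk))
          if prev is not None and prev != trans_id:
              # The later event on the same key depends on the earlier one
              self.dependents[prev].add(trans_id)
              if prev in self.in_conflict:
                  # Adding a dependency on an already-conflicting transaction
                  # drags the new sub-tree into conflict immediately
                  self.mark_in_conflict(trans_id)
          self.last_writer[(table, pk)] = trans_id

      def mark_in_conflict(self, trans_id):
          # Traverse the dependency tree, marking all transitive dependents
          stack = [trans_id]
          while stack:
              t = stack.pop()
              if t not in self.in_conflict:
                  self.in_conflict.add(t)
                  stack.extend(self.dependents[t])

  # Usage sketch : a conflict detected on T1 implicates T2 and then T3
  tracker = TransactionDependencyTracker()
  tracker.add_row_event("t1", 1, "T1")
  tracker.add_row_event("t1", 1, "T2")   # T2 depends on T1
  tracker.add_row_event("t2", 7, "T2")
  tracker.mark_in_conflict("T1")         # row conflict detected for T1's operation
  tracker.add_row_event("t2", 7, "T3")   # T3 depends on T2, now also in conflict
  print(sorted(tracker.in_conflict))     # ['T1', 'T2', 'T3']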

The size of these hashes, and the complexity of the dependency graph is bounded by the size of the epoch transaction. There is no need to track dependencies across the boundary of two epoch transactions, as any dependencies will be discovered via conflicts on the data committed by the first epoch transaction when attempting to apply the second epoch transaction.

Event counters

Like the existing conflict detection functions, NDB$EPOCH_TRANS has a row-conflict detection counter called ndb_conflict_epoch_trans.

Additional counters have been added which specifically track the different events associated with transactional conflict detection. These can be seen with the usual SHOW GLOBAL STATUS LIKE syntax, or via the INFORMATION_SCHEMA tables.

  • ndb_conflict_trans_row_conflict_count
    This is essentially the same as ndb_conflict_epoch_trans - the number of row events with conflict detected.
  • ndb_conflict_trans_row_reject_count
    The number of row events which were handled as in-conflict. It will be at least as large as ndb_conflict_trans_row_conflict_count, and will be higher if other rows are implicated by being in a conflicting transaction, or being dependent on a row in a conflicting transaction.
    A separate ndb_conflict_trans_row_implicated_count could be constructed as ndb_conflict_trans_row_reject_count - ndb_conflict_trans_row_conflict_count, as illustrated in the sketch after this list.
  • ndb_conflict_trans_reject_count
    The number of discrete user transactions detected as in-conflict.
  • ndb_conflict_trans_conflict_commit_count
    The number of epoch transactions which had transactional conflicts detected during application.
  • ndb_conflict_trans_detect_iter_count
    The number of iterations of the three-pass algorithm that have occurred. Each set of passes counts as one. Normally this would be the same as ndb_conflict_trans_conflict_commit_count. Where further conflicts are found on the third pass, another iteration may be required, which would increase this count. So if this count is larger than ndb_conflict_trans_conflict_commit_count then there have been some conflicts generated concurrently with conflict detection, perhaps suggesting a high conflict rate.
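
The counters can also be polled programmatically. Here is a small illustrative sketch in Python, assuming the mysql-connector-python package and placeholder connection details, which reads the transactional conflict counters and derives the implicated-row count mentioned in the list above.

  # Illustrative monitoring sketch : poll the ndb_conflict_trans_* status
  # variables from a slave MySQLD. Connection parameters are placeholders.
  import mysql.connector

  conn = mysql.connector.connect(host="127.0.0.1", user="monitor", password="secret")
  cur = conn.cursor()
  cur.execute("SHOW GLOBAL STATUS LIKE 'ndb_conflict_trans%'")
  status = {name.lower(): int(value) for name, value in cur.fetchall()}

  # Rows rejected purely by implication (same transaction, or a dependency on one)
  implicated_rows = (status["ndb_conflict_trans_row_reject_count"]
                     - status["ndb_conflict_trans_row_conflict_count"])
  print(status)
  print("implicated rows:", implicated_rows)

  cur.close()
  conn.close()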


Performance properties of NDB$EPOCH and NDB$EPOCH_TRANS

I have tried to avoid getting involved in an explanation of Ndb replication in general, which would probably fill a terabyte of posts. Comparing replication using NDB$EPOCH and NDB$EPOCH_TRANS relative to Ndb replication with no conflict detection, what can we say?

  • Conflict detection logic is pushed down to data nodes for execution
    Minimising extra data transfer + locking
  • Slave operation batching is preserved
    Multiple row events are applied together, saving MySQLD <-> data node round trips, using data node parallelism
    For both algorithms, one extra MySQLD <-> data node round-trip is required in the no-conflicts case (best case)
  • NDB$EPOCH : One extra MySQLD <-> data node round-trip is required per *batch* in the all-conflicts case (worst case)
  • NDB$EPOCH : Minimal impact to Binlog sizes - one extra row event per epoch.
  • NDB$EPOCH : Minimal overhead to Slave SQL CPU consumption
  • NDB$EPOCH_TRANS : One extra MySQLD <-> data node round-trip is required per *batch* per *pass* in the all-conflicts case (worst case)
  • NDB$EPOCH_TRANS : One round of two passes is required for each conflict newly created since the previous pass.
  • NDB$EPOCH_TRANS : Small impact to Binlog sizes - one extra row event per epoch plus one user transaction id per row event.
  • NDB$EPOCH_TRANS : Small overhead to Slave SQL CPU consumption in no-conflict case

Current and intrinsic limitations

These functions support automatic conflict detection and handling without schema or application changes, but there are a number of limitations. Some limitations are due to the current implementation, some are just intrinsic in the asynchronous distributed consistency problem itself.

Intrinsic limitations
  • Reads from the Secondary are tentative
    Data committed on the secondary may later be rolled back. The window of potential rollback is limited, after which Secondary data can be considered stable. This is described in more detail here.
  • Writes to the Secondary may be rolled back
    If this occurs, the fact will be recorded on the Primary. Once a committed write is stable it will not be rolled back.
  • Out-of-band dependencies between transactions are out-of-scope
    For example direct communication between two clients creating a dependency between their committed transactions, not observable from their database footprints.

Current implementation limitations

  • Detected transaction dependencies are limited to dependencies between binlogged writes (Insert, Update, Delete)
    Reads are not currently included.
  • Delete vs Delete+Insert conflicts risk data divergence
    Delete vs Delete conflicts are detected, but currently do not result in conflict handling, so that Delete vs Delete + Insert can result in data divergence.
  • With NDB$EPOCH_TRANS, unplanned Primary outages may require manual steps to restore Secondary consistency
    If multiple, time-spaced, non-overlapping transactional conflicts are pending when an unexpected failure occurs, some Binlog processing may be needed to ensure Secondary consistency.

Want to try it out?

Andrew Morgan has written a great post showing how to setup NDB$EPOCH_TRANS. He's even included non-ascii art. This is probably the easiest way to get started. NDB$EPOCH is slightly easier to get started with as the --ndb-log-transaction-id (and Binlog v2) options are not required.

Edit 23/12/11 : Added index

Monday 19 December 2011

Eventual consistency in MySQL Cluster - implementation part 2




In previous posts I described how row conflicts are detected using epochs. In this post I describe how they are handled.

Row based conflict handling with NDB$EPOCH


Once a row conflict is detected, as well as rejecting the row change, row based conflict handling in the Slave will :
  • Increment conflict counters
  • Optionally insert a row into an exceptions table
For NDB$EPOCH, conflict detection and handling operates on one Cluster in an Active-Active pair designated as the Primary. When a Slave MySQLD attached to the Primary Cluster detects a conflict between data stored in the Primary and a replicated event from the Secondary, it needs to realign the Secondary to store the same values for the conflicting data. Realignment involves injecting an event into the Primary Cluster's Binlog which, when applied idempotently on the Secondary Cluster, will force the row on the Secondary Cluster to take the supplied values. This requires either a WRITE_ROW event, with all columns, or a DELETE_ROW event with just the primary key columns. These events can be thought of as compensating events used to revert the original effect of the rejected events.

Conflicts are detected by a Slave MySQLD attached to the Primary Cluster, and realignment events must appear in Binlogs recorded by the same MySQLD and/or other Binlogging MySQLDs attached to the Primary Cluster. This is achieved using a new NdbApi primary key operation type called refreshTuple.

When a refreshTuple operation is executed it will :
  1. Lock the affected row/primary key until transaction commit time, even if it does not exist (much as an Insert would).
  2. Set the affected row's author metacolumn to 0
    The refresh is logically a local change
  3. On commit
    - Row exists case : Set the row's last committed epoch to the current epoch
    - Cause a WRITE_ROW (row exists case) or DELETE_ROW (no row exists) event to be generated by attached Binlogging MySQLDs.

Locking the row as part of refreshTuple serialises the conflicting epoch transaction with other potentially conflicting local transactions. Updating the stored epoch and author metacolumns results in the conflicting row conflicting with any further replicated changes occurring while the realignment event is 'in flight'. The compensating row events are effectively new row changes originating at the Primary cluster which need to be monitored for conflicts in the same way as normal row changes.

It is important that the Slave running at the Secondary Cluster where the realignment events will be applied, is running in idempotent mode, so that it can handle the realignment events correctly. If this is not the case then WRITE_ROW realignment events may hit 'Row already exists' errors, and DELETE_ROW realignment events may hit 'Row does not exist' errors.

Observations on conflict windows and consistency

When a conflict is detected, the refresh process results in the row's epoch and author metacolumns being modified so that the window of potential conflict is extended, until the epoch in which the refresh operation was recorded has itself been reflected. If ongoing updates at both clusters continually conflict then refresh operations will continue to be generated, and the conflict window will remain open until a refresh operation manages to propagate with no further conflicts occurring. As with any eventually consistent system, consistency is only guaranteed when the system (or at least the data of interest) is quiescent for a period.

From the Primary cluster's point of view, the conflict window length is the time between committing a local transaction in epoch n, and the attached Slave committing a replicated epoch transaction indicating that epoch n has been applied at the Secondary. Any Secondary-sourced overlapping change applied in this time is in-conflict.

This Cluster conflict window length consists of :

  • Time between commit of transaction, and next Primary Cluster epoch boundary
    (Worst = 1 * TimeBetweenEpochs, Best = 0, Avg = 0.5 * TimeBetweenEpochs)
  • Time required to log event in Primary Cluster's Binlogging MySQLDs Binlog (~negligible)
  • Time required for Secondary Slave MySQLD IO thread to
    - Minimum : Detect new Binlog data - negligible
    - Maximum : Consume queued Binlog prior to the new data - unbounded
    - Pull new epoch transaction
    - Record in Relay log
  • Time required for Secondary Slave MySQLD SQL thread to
    - Minimum : Detect new events in relay log
    - Maximum : Consume queued Relay log prior to new data - unbounded
    - Read and apply events
    - Potentially multiple batches.
    - Commit epoch transaction at Secondary
  • Time between commit of replicated epoch transaction and next Secondary Cluster epoch boundary
    (Worst = 1 * TimeBetweenEpochs, Best = 0, Avg = 0.5 * TimeBetweenEpochs)
  • After this point a Secondary-local commit on the data is possible without conflict
  • Time required to log event in Secondary Cluster's Binlogging MySQLDs Binlog (~negligible)
  • Time required for Primary Slave MySQLD IO thread to
    - Minimum : Detect new Binlog data
    - Maximum : Consume queued Binlog data prior to the new data - unbounded
    - Pull new epoch transaction
    - Record in Relay log
  • Time required for Primary Slave MySQLD SQL thread to
    - Minimum : Detect new events in relay log
    - Maximum : Consume queued Relay log prior to new data - unbounded
    - Read and apply events
    - Potentially multiple batches.
    - For NDB$EPOCH_TRANS, potentially multiple passes
    - Commit epoch transaction
    - Update max replicated epoch to reflect new maximum.
  • Further Secondary sourced modifications to the rows are now considered not-in-conflict

From the point of view of an external client with access to both Primary and Secondary clusters, the conflict window only extends from the time transaction commit occurs at the Primary to the time the replicated operations are applied at the Secondary and the Secondary epoch in which they commit ends. Changes committed at the Secondary after this will clearly appear to the Primary to have occurred after its epoch was applied on the Secondary and therefore are not in-conflict.

Assuming that both Clusters have the same TimeBetweenEpochs, we can simplify the Cluster conflict window to :
  Cluster_conflict_window_length = EpochDelay +
                                   P_Binlog_lag +
                                   S_Relay_lag +
                                   S_Binlog_lag +
                                   P_Relay_lag

  Where
    EpochDelay minimum is 0
    EpochDelay avg     is TimeBetweenEpochs
    EpochDelay maximum is 2 * TimeBetweenEpochs


Substituting the default TimeBetweenEpochs value of 100 millis, we get :

    EpochDelay minimum is 0
    EpochDelay avg     is 100 millis
    EpochDelay maximum is 200 millis


Note that TimeBetweenEpochs is an epoch-increment trigger delay. The actual experienced time between epochs can be longer depending on system load. The various Binlog and Relay log delays can vary from close to zero up to infinity. Infinity occurs when replication stops in either direction.

The Cluster conflict window length can be thought of as both
  • The time taken to detect a conflict with a Primary transaction
  • The time taken for a committed Secondary transaction to become stable or be reverted

We can define a Client conflict window length as either :

  Primary->Secondary

    Client_conflict_window_length = EpochDelay +
                                    P_Binlog_lag +
                                    S_Relay_lag +
                                    EpochDelay

  or

  Secondary->Primary

    Client_conflict_window_length = EpochDelay +
                                    S_Binlog_lag +
                                    P_Relay_lag

  Where EpochDelay is defined as above.


These definitions are asymmetric. They represent the time taken by the system to determine that a particular change at one cluster definitely happened-before another change at the other cluster. The asymmetry is due to the need for the Secondary part of a Primary->Secondary conflict to be recorded in a different Secondary epoch. The first definition considers an initial change at the Primary cluster, and a following change at the Secondary. The second definition is for the inverse case.

An interesting observation is that for a single pair of near-concurrent updates at different clusters, happened-before depends only on latencies in one direction. For example, an update to the Primary at time Ta, followed by an update to the Secondary at time Tb will not be considered in conflict if:

 Tb - Ta > Client_conflict_window_length(Primary->Secondary)


Client_conflict_window_length(Primary->Secondary) depends on the EpochDelay, the P_Binlog_lag and S_Relay_lag, but not on the S_Binlog_lag or P_Relay_lag. This can mean that high replication latency, or a complete outage in one direction does not always result in increased conflict rates. However, in the case of multiple sequences of near-concurrent updates at different sites, it probably will.
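
The arithmetic above is simple enough to play with directly. The Python sketch below computes the Cluster and Client conflict window lengths from the formulas above and checks the Tb - Ta condition; all of the component latency values are made-up examples, in milliseconds.

  # Conflict window arithmetic from the formulas above. Latency values are
  # illustrative assumptions, expressed in milliseconds.
  TIME_BETWEEN_EPOCHS = 100      # default TimeBetweenEpochs

  def epoch_delay(kind="avg"):
      # min = 0, avg = TimeBetweenEpochs, max = 2 * TimeBetweenEpochs
      return {"min": 0, "avg": TIME_BETWEEN_EPOCHS, "max": 2 * TIME_BETWEEN_EPOCHS}[kind]

  def cluster_conflict_window(p_binlog, s_relay, s_binlog, p_relay, kind="avg"):
      return epoch_delay(kind) + p_binlog + s_relay + s_binlog + p_relay

  def client_window_p_to_s(p_binlog, s_relay, kind="avg"):
      return epoch_delay(kind) + p_binlog + s_relay + epoch_delay(kind)

  def client_window_s_to_p(s_binlog, p_relay, kind="avg"):
      return epoch_delay(kind) + s_binlog + p_relay

  # Example lags (assumed) : 20ms Binlog lag and 30ms Relay lag in each direction
  print(cluster_conflict_window(20, 30, 20, 30))   # 200 : average cluster window
  print(client_window_p_to_s(20, 30))              # 250 : average client window P->S

  # A Primary update at Ta and a Secondary update at Tb to the same row are not
  # considered in conflict if Tb - Ta exceeds the Primary->Secondary client window
  Ta, Tb = 0, 400
  print(Tb - Ta > client_window_p_to_s(20, 30))    # True : not in conflict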

A general property of the NDB$EPOCH family is that the conflict rate has some dependency on the replication latency. Whether two updates to the same row at times Ta and Tb are considered to be in conflict depends on the relationship between those times and the current system replication latencies. This can remove the need for highly synchronised real-time clocks as recommended for NDB$MAX, but can mean that the observed conflict rate increases when the system is lagging. This also implies that more work is required to catch up, which could further affect lag. NDB$MAX requires manual timestamp maintenance, and will not detect incorrect behaviour, but the basic decision on whether two updates are in-conflict is decided at commit time and is independent of the system replication latency.

In summary :
  • The Client_conflict_window_length in either direction will on average not be less than the EpochDelay (100 millis by default)
  • Clients racing against replication to update both clusters need only beat the current Client_conflict_window_length to cause a conflict
  • Replication latencies in either direction are potentially independent
  • Detected conflict rates partly depend on replication latencies

Stability of reads from the Primary Cluster

In the case of a conflict, the rows at the Primary Cluster will tentatively have replicated operations applied against them by a Slave MySQLD. These conflicting operations will fail prior to commit, as their interpreted precondition checks will fail, so the conflicting rows will not be modified on the Primary. One effect of this is that a read from the Primary Cluster only ever returns stable data, as conflicting changes are never committed there. In contrast, a read from the Secondary Cluster returns data which has been committed, but may be subject to later 'rollback' via refresh operations from the Primary Cluster.

The same stability of reads observation applies to a row change event stream on the Primary Cluster - events received for a single key will be received in the order they were committed, and no later-to-be-rolled-back events will be observed in the stream.

Stability of reads from the Secondary Cluster

If the Secondary Cluster is also receiving reflected applied epoch information back from the Primary then it will know when its epoch x has been applied successfully at the Primary. Therefore a read of some row y on the Secondary can be considered tentative while Max_Replicated_Epoch(Secondary) < row_epoch(y), but once Max_Replicated_Epoch(Secondary) >= row_epoch(y) then the read can be considered stable. This is because if the Primary were going to detect a conflict with a Secondary change committed in epoch x, then the refresh events associated with the conflict would be recorded in the same Primary epoch as the notification of the application of epoch x. So if the Secondary observes the notification of epoch x (and updates Max_Replicated_Epoch accordingly), and row y is not modified in the same epoch transaction, then it is stable. The time taken to reach stability after a Secondary Cluster commit will be the Cluster conflict window length.

Perhaps some applications can make better use of the potentially transiently inconsistent Secondary data by categorising their reads from the Secondary as either potentially-inconsistent or stable. To do this, they need to maintain Max_Replicated_Epoch(Secondary) (by listening to row change events on the mysql.ndb_apply_status table) and read the NDB$GCI64 metacolumn when reading row data. A read from the Secondary is stable if all the NDB$GCI64 values for all rows read are <= the Secondary's Max_Replicated_Epoch.
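
The stability check itself is trivial, as the sketch below shows. It assumes the application has already fetched each row's NDB$GCI64 value with the row data and is tracking the Secondary's Max_Replicated_Epoch; how those values are obtained (NdbApi reads, listening to ndb_apply_status events) is outside the sketch, and the epoch numbers are invented.

  # Classify a Secondary read as stable or tentative, given the NDB$GCI64 value
  # read with each row and the Secondary's current Max_Replicated_Epoch.

  def read_is_stable(row_gci64_values, max_replicated_epoch):
      # Stable only if every row involved was last modified in an epoch that is
      # already known to have been applied (and conflict checked) at the Primary
      return all(gci64 <= max_replicated_epoch for gci64 in row_gci64_values)

  max_replicated_epoch = (7000 << 32) | 3      # 64 bit epoch, GCI in the top 32 bits
  stable_rows = [(7000 << 32) | 2, (6999 << 32) | 5]
  recent_rows = [(7001 << 32) | 1]
  print(read_is_stable(stable_rows, max_replicated_epoch))   # True  : stable
  print(read_is_stable(recent_rows, max_replicated_epoch))   # False : still tentative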

In the next post (final post I promise!) I will describe the implementation of the transaction dependency tracking in NDB$EPOCH_TRANS, and review the implementation of both NDB$EPOCH and NDB$EPOCH_TRANS.

Edit 23/12/11 : Added index

Thursday 8 December 2011

Eventual consistency in MySQL Cluster - implementation part 1




The last post described MySQL Cluster epochs and why they provide a good basis for conflict detection, with a few enhancements required. This post describes the enhancements.

The following four mechanisms are required to implement conflict detection via epochs :
  1. Slaves should 'reflect' information about replicated epochs they have applied
    Applied epoch numbers should be included in the Slave Binlog events returning to the originating cluster, in a Binlog position corresponding to the commit time of the replicated epoch transaction relative to Slave local transactions.
  2. Masters should maintain a maximum replicated epoch
    A cluster should use the reflected epoch information to track which of its epochs has been applied by a Slave cluster. This will be the maximum of all epochs applied by the Slave.
  3. Masters should track commit-time epoch per row
    To allow per-row detection of conflicts
  4. Masters should track commit-authorship per row
    To differentiate rows whose recent epoch is due to applied replication from rows modified by potentially conflicting local activity.

'Reflecting' epoch information and maintaining the maximum replicated epoch

Every epoch transaction in the Binlog contains a special WRITE_ROW event on the mysql.ndb_apply_status table which carries the epoch transaction's epoch number. This is designed to give an atomically consistent way to determine a Slave cluster's position relative to a Master cluster. Normally these WRITE_ROW events are applied by the Slave but not logged in the Slave's Binlog, even when --log-slave-updates is ON. A new MySQLD option, --ndb-log-apply-status causes WRITE_ROW events applied to the mysql.ndb_apply_status table to be binlogged at a Slave, even when --log-slave-updates is OFF. These events are logged with the ServerId of the Slave MySQLD, so that they can be applied on the Master, but will not loop infinitely.

Allowing this applied epoch information to propagate through a Slave Cluster has the following effects :
  1. Downstream Clusters become aware of their position relative to all upstream Master clusters, not just their immediate Master cluster.
    They gain extra mysql.ndb_apply_status entries for all upstream Masters.
  2. Circularly replicating clusters become aware of which of their epochs, and epoch transactions, have been applied to all clusters in the circle.
    They gain extra mysql.ndb_apply_status entries for all Binlogging MySQLDs in the loop

Effect 1 is useful for replication failover with more than two replication-chained clusters where an intermediate cluster is being routed-around (A->B->C) -> (A->C). Cluster C knows the correct Binlog file and position to resume from on A, without consulting B.

Effect 2 could be used to allow clients to wait until their writes have been fully replicated and are globally visible, a kind of synchronous replication. More relevantly, effect 2 allows us to maintain a maximum replicated epoch value for detecting conflicts.

The visible result of using --ndb-log-apply-status on a Slave is that the mysql.ndb_apply_status table on the Master contains extra entries for the Binlogging MySQLDs attached to its Cluster. The maximum replicated epoch is the maximum of these epoch values.

      Cluster 1 Epoch transactions in flight in
             a circular configuration
            (Ignoring Cluster 2 epochs)

                  39    38    37
            ->---->----->----->----->--
           /                           \   (Queued epochs 36-26)
      Cluster 1                       Cluster 2
           \   (Queued epochs 23,24)   /
            -<---<------<----<----<----
                  25    26    27

      Current epoch = 40
      Max replicated epoch = 22


A MySQLD acting as a conflict detecting Slave for a cluster needs to know the attached cluster's maximum replicated epoch for conflict detection. On Slave start, before the Slave starts applying replicated changes to the Ndb storage engine, it scans the mysql.ndb_apply_status table to find the highest reflected epoch value. Rows in mysql.ndb_apply_status whose server ids are in the CHANGE MASTER TO IGNORE_SERVER_IDS list, or match the Slave's own server id, are considered to belong to local servers, and the maximum replicated epoch is the maximum epoch value from these rows.

  @ Slave start

    max_replicated_epoch = SELECT MAX(epoch)
                           FROM mysql.ndb_apply_status
                           WHERE server_id IN @@IGNORE_SERVER_IDS;



Once the Max_replicated_epoch has been initialised at slave start, it is updated as each reflected epoch event (WRITE_ROW event to mysql.ndb_apply_status) arrives and is processed by the Slave SQL thread. The current Max_replicated_epoch can be seen by issuing the command SHOW STATUS LIKE 'Ndb_slave_max_replicated_epoch';. Note that this is really just a cached copy of the current result of the SELECT MAX(epoch) query from above. One subtlety is that the max_replicated_epoch is only changed when the Slave commits an epoch transaction, as it is only at this point that we know for sure that any event committed on the other cluster before the replicated epoch was applied has been handled.
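
A minimal Python model of this bookkeeping is below: initialise from the highest epoch reflected by 'local' server ids, then only publish a new maximum when the epoch transaction carrying the reflected event commits. This is an illustrative sketch rather than slave code, and the server ids and epoch numbers are invented.

  # Toy model of Max_Replicated_Epoch maintenance by a conflict detecting slave.
  # apply_status_rows stands in for mysql.ndb_apply_status.

  class MaxReplicatedEpoch:
      def __init__(self, apply_status_rows, local_server_ids):
          # @ slave start : equivalent of SELECT MAX(epoch) ... WHERE server_id IN (...)
          self.value = max((epoch for server_id, epoch in apply_status_rows
                            if server_id in local_server_ids), default=0)
          self.local_server_ids = local_server_ids
          self.pending = self.value        # highest reflected epoch seen so far

      def on_reflected_write_row(self, server_id, epoch):
          # WRITE_ROW on ndb_apply_status seen while applying an epoch transaction
          if server_id in self.local_server_ids:
              self.pending = max(self.pending, epoch)

      def on_epoch_transaction_commit(self):
          # Only at commit do we know the reflected epoch has been fully handled
          self.value = self.pending

  mre = MaxReplicatedEpoch([(4, 6998), (7, 120)], local_server_ids={4})
  mre.on_reflected_write_row(4, 6999)
  print(mre.value)                         # 6998 : epoch transaction not yet committed
  mre.on_epoch_transaction_commit()
  print(mre.value)                         # 6999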

Per row last-modified epoch storage

Each row stored in Ndb has a built-in hidden metadata column called NDB$GCI64. This column stores the epoch number at which the row was last modified. For normal system recovery purposes, only the top 32 bits of the 64 bit epoch, called the Global Checkpoint Index or GCI, are used. NDB$EPOCH needs further bits to be stored per-row. Epoch values only use a few of the bits in the bottom 32 bits of the epoch, so by default 6 extra bits per row are used to enable a full 64 bit epoch to be stored for each row. The actual number of bits used can be controlled by a parameter to NDB$EPOCH. Where some epoch is not fully expressible in the number of bits available, the bottom 32 bits are saturated, which again errs on the side of safety, potentially causing false conflicts, but ensuring no real conflicts are missed. The ndb_select_all tool has a --gci64 option which shows each row's stored epoch value.
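
The saturation behaviour can be illustrated with a little bit arithmetic. The sketch below is an assumed model of the idea rather than the actual Ndb row format (which is internal): the GCI lives in the top 32 bits, a small number of extra bits hold the low part of the epoch, and a low part that does not fit is saturated, so any approximation only makes conflict detection more conservative.

  # Assumed illustration of per-row epoch storage with a limited number of low
  # bits. The real Ndb row format is internal; this only models the documented
  # saturation behaviour.

  def store_row_epoch(epoch64, extra_bits=6):
      gci = epoch64 >> 32                    # top 32 bits : Global Checkpoint Index
      low = epoch64 & 0xFFFFFFFF             # low 32 bits of the epoch
      max_low = (1 << extra_bits) - 1
      return gci, min(low, max_low)          # low part saturates if it does not fit

  def effective_row_epoch(gci, stored_low, extra_bits=6):
      max_low = (1 << extra_bits) - 1
      if stored_low == max_low:
          # Saturated : assume the latest possible epoch within this GCI, which
          # can only cause false conflicts, never missed ones
          stored_low = 0xFFFFFFFF
      return (gci << 32) | stored_low

  print(effective_row_epoch(*store_row_epoch((7000 << 32) | 5)))    # stored exactly
  print(effective_row_epoch(*store_row_epoch((7000 << 32) | 200)))  # saturated upwards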

A conflict detecting slave detects conflicts between transactions already committed, whose rows have their commit-time epoch numbers, and incoming operations in an epoch transaction, which are considered to have been committed at the epoch given by the current Maximum Replicated Epoch. An incoming operation is considered to be in-conflict if the row it affects has a last-committed epoch that is greater than the current Maximum Replicated Epoch.

  in_conflict = (ndb_gci64 > max_replicated_epoch)


In other words, at the time the change was committed on the other Cluster, that other Cluster was only aware of our changes as-of our epoch (max_replicated_epoch). Therefore it was unaware of any changes committed in more recent epochs. If the row being changed has been locally modified since that epoch then there have been concurrent modifications and a conflict has been discovered.

Note that this mechanism is purely based on monitoring serialisation of updates to rows. No semantic understanding of row data, or the meaning of applied changes is attempted. Even if both clusters update some row to contain exactly the same value it will be considered to be a conflict, as the updates were not serialised with respect to each other.

Per row hidden Author metacolumn

One advantage of reusing the row's last-modified epoch number for conflict detection is that it is automatically set on every commit. However the downside is that when a replicated modification is found to not be in conflict, and is applied, the row's epoch is automatically set to the current value at commit time as normal. By definition, the current epoch value is always greater than the maximum replicated epoch, and so if a further replicated modification to the same row were to arrive, it would find the row's epoch to be higher than the current maximum replicated epoch, and detect a false conflict.

In theory we could consider the current maximum replicated epoch to be the row's commit time epoch, but as the per-row epoch is used for other more critical DB recovery purposes it's not safe to abuse it in this way. Instead we use the observation that if we found a previous row update from some other cluster to be not-in-conflict, then further updates from it are also not-in-conflict.

To detect this, a new hidden metadata column is introduced called NDB$AUTHOR. This column is set to zero when a row is modified by any unmodified NdbApi client, including MySQLD, but when a row is modified by the MySQLD Slave SQL thread, it is set to one. More generally, NDB$AUTHOR could be set to a non-zero identifier of which other cluster sourced an accepted change. Just setting to one limits us to having one other cluster originating potentially conflicting changes. The ndb_select_all tool has a --author option which shows each row's stored Author value.

By extending the conflict detecting function to examine the NDB$AUTHOR value, we avoid the problem of falsely detecting conflicts when applying consecutive replicated changes.
  in_conflict = (ndb$author != change_author) && (ndb_gci64 > max_replicated_epoch)


We are currently just using 1 to mean 'other author', so this simplifies to :
  in_conflict = (ndb$author != 1) && (ndb_gci64 > max_replicated_epoch)
              = (ndb$author == 0) && (ndb_gci64 > max_replicated_epoch)


This conflict detection function is encoded in an Ndb interpreted program and attached to the replicated DELETE and UPDATE NdbApi operations so that it can be quickly and atomically executed at the Ndb data nodes as a predicate prior to applying the operation.
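
The predicate itself is small. The Python function below is a model of the check performed before applying a replicated UPDATE or DELETE; in the real system it runs inside the data nodes as an Ndb interpreted program, so this is purely illustrative, and the epoch values are invented.

  # Model of the conflict detecting predicate attached to replicated
  # UPDATE/DELETE operations (really an Ndb interpreted program).

  def in_conflict(row_author, row_gci64, max_replicated_epoch):
      # author == 1 : row last written by the slave applying replicated changes,
      #               so a further replicated change is not a conflict
      # author == 0 : locally written row; conflict if committed in an epoch the
      #               other cluster had not yet seen
      return row_author == 0 and row_gci64 > max_replicated_epoch

  max_replicated_epoch = 7000 << 32
  print(in_conflict(0, 7001 << 32, max_replicated_epoch))   # True  : concurrent local write
  print(in_conflict(1, 7001 << 32, max_replicated_epoch))   # False : row last set by slave
  print(in_conflict(0, 6999 << 32, max_replicated_epoch))   # False : already seen remotely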

Ndb binlog row event ordering and false conflicts

The happened-before relationship between reflected epoch events (WRITE_ROW to mysql.ndb_apply_status) and incoming row events is used to determine whether a conflict has occurred. As described in the last post, Ndb offers limited ordering guarantees on the row events within an epoch transaction. The only guarantee is that multiple changes to the same row will be recorded in the order they committed. This implies that the relative ordering of the reflected epoch WRITE_ROW event, on some row in mysql.ndb_apply_status, and other row events on other tables sharing the same epoch transaction is meaningless. The only ordering guarantees between different rows exist at epoch boundaries.

This means that if we see a reflected epoch WRITE_ROW event somewhere in replicated epoch j, then we can only safely assume that this happened before incoming row events in epoch j+1 and later. The row events appearing before and after the reflected epoch WRITE_ROW event in epoch j may have committed before or after the reflected epoch event.

The relaxed relative ordering gives us reduced precision in determining happened-before, and to be safe, we must err on the side of assuming that a conflict exists rather than that it does not. Consider a Master committing a change to row X, recorded in epoch N. This is then applied on the Slave in Slave epoch S. If the Slave then commits a local change affecting the same row X in the same epoch S, this will be returned to the Master in the same Slave epoch transaction, and the Master will be unable to determine whether it occurred before or after its original write to X, so must assume that it occurred before and is therefore in conflict. If the Slave had committed its change in epoch S+1 or later, the happened-before relationship would be clear and the change would not be considered in conflict.

These potential false conflicts are the price paid here for the lack of fine grained event ordering in the Ndb Binlog.

I'm lost

There's been a lot of information, or at least a lot of words. Let's summarise how NDB$EPOCH and NDB$EPOCH_TRANS detect row conflicts by following the flow of changes around the replication loop :

  • @Cluster A
    Transactions modify rows, automatically setting their hidden NDB$GCI64 column to the current epoch and their NDB$AUTHOR column to 0

    Binlogging MySQLDs record modified rows in epoch transactions in their Binlogs, together with MySQLD generated mysql.ndb_apply_status WRITE_ROW events

  • @Cluster B
    Slave MySQLDs apply replicated epoch transactions along with their generated mysql.ndb_apply_status WRITE_ROW events

    Other clients of Cluster B commit transactions against the same data.

    Binlogging MySQLDs 'reflect' the applied-replicated epoch information by recording the mysql.ndb_apply_status WRITE_ROW events in their Binlogs as a result of --ndb-log-apply-status.

    Binlogging MySQLDs also record the row changes made by local clients.

  • @Cluster A
    Slave MySQLDs track the incoming reflected epoch mysql.ndb_apply_status WRITE_ROW events to maintain their ndb_slave_max_replicated_epoch variables

    Slave MySQLDs attach NdbApi interpreted programs to UPDATE and DELETE operations as they are applied to the database, comparing the row's stored NDB$GCI64 and NDB$AUTHOR columns with constant values supplied in the program.

    If there are no conflicts, the UPDATE and DELETE operations are applied, and the row's NDB$AUTHOR columns are set to one indicating a successful Slave modification

    If there are conflicts then conflict handling for the conflicting rows begins.

Now does that make any sense? Assuming it does, then next we look at how detected conflicts are handled.

Once again, another wordy endurance test and we're not finished. Surely the end must be near?

Edit 23/12/11 : Added index

Wednesday 7 December 2011

Eventual Consistency in MySQL Cluster - using epochs




Before getting to the details of how eventual consistency is implemented, we need to look at epochs. Ndb Cluster maintains an internal distributed logical clock known as the epoch, represented as a 64 bit number. This epoch serves a number of internal functions, and is atomically advanced across all data nodes.

Epochs and consistent distributed state

Ndb is a parallel database, with multiple internal transaction coordinator components starting, executing and committing transactions against rows stored in different data nodes. Concurrent transactions only interact where they attempt to lock the same row. This design minimises unnecessary system-wide synchronisation, enabling linear scalability of reads and writes.

The stream of changes made to rows stored at a data node are written to a local Redo log for node and system recovery. The change stream is also published to NdbApi event listeners, including MySQLD servers recording Binlogs. Each node's change stream contains the row changes it was involved in, as committed by multiple transactions, and coordinated by multiple independent transaction coordinators, interleaved in a partial order.

       Incoming independent transactions
            affecting multiple rows

       T3        T4        T7
       T1        T2        T5

        |         |         |
        V         V         V

    --------  --------  --------
    |  1   |  |  2   |  |  3   |
    |  TC  |  |  TC  |  |  TC  |     Data nodes with multiple
    |      |--|      |--|      |     transaction coordinators
    |------|  |------|  |------|     acting on data stored in
    |      |  |      |  |      |     different nodes
    | DATA |  | DATA |  | DATA |
    --------  --------  --------

        |         |         |
        V         V         V

       t4        t4        t3
       t1        t7        t2
       t2        t1        t7
       t5

       Outgoing row change event
          streams by causing
              transaction


These row event streams are generated independently by each data node in a cluster, but to be useful they need to be correlated together. For system recovery from a crash, the data nodes need to recover to a cluster-wide consistent state. A state which contains only whole transactions, and a state which, logically at least, existed at some point in time. This correlation could be done by an analysis of the transaction ids and row dependencies of each recorded row change to determine a valid order for the merged event streams, but this would add significant overhead. Instead, the Cluster uses a distributed logical clock known as the epoch to group large sets of committed transactions together.

Each epoch contains zero or more committed transactions. Each committed transaction is in only one epoch. The epoch clock advances periodically, every 100 milliseconds by default. When it is time for a new epoch to start, a distributed protocol known as the Global Commit Protocol (GCP) results in all of the transaction coordinators in the Cluster agreeing on a point of time in the flow of committing transactions at which to change epoch. This epoch boundary, between the commit of the last transaction in epoch n, and the commit of the first transaction in epoch n+1, is a cluster-wide consistent point in time. Obtaining this consistent point in time requires cluster-wide synchronisation, between all transaction coordinators, but it need only happen periodically.

Furthermore, each node ensures that all events for epoch n are published before any events for epoch n+1. Effectively the event streams are sorted by epoch number, and the first time a new epoch is encountered signifies a precise epoch boundary.

       Incoming independent transactions

       T3        T4        T7
       T1        T2        T5

        |         |         |
        V         V         V

    --------  --------  --------
    |  1   |  |  2   |  |  3   |
    |  TC  |  |  TC  |  |  TC  |     Data nodes with multiple
    |      |--|      |--|      |     transaction coordinators
    |------|  |------|  |------|     acting on data stored in
    |      |  |      |  |      |     different nodes
    | DATA |  | DATA |  | DATA |
    --------  --------  --------

        |         |         |
        V         V         V

      t4(22)    t4(22)    t3(22)     Epoch 22
      ......    ......    ......
      t1(23)    t7(23)    t2(23)     Epoch 23
      t2(23)    t1(23)    t7(23)
      ......
      t5(24)                         Epoch 24

       Outgoing row change event
      streams by causing transaction
         with epoch numbers in ()



When these independent streams are merge-sorted by epoch number we get a unified change stream. Multiple possible orderings can result.
One partial ordering is shown here :

       Events          Transactions
                    contained in epoch

       t4(22)
       t4(22)           {T4,T3}
       t3(22)

       ......

       t1(23)
       t2(23)
       t7(23)
       t1(23)           {T1, T2, T7}
       t2(23)
       t7(23)

       ......

       t5(24)           {T5}



Note that we can state from this that T4 -> T1 (Happened before), and T1 -> T5. However we cannot say whether T4 -> T3 or T3 -> T4. In epoch 23 we see that the row events resulting from T1, T2 and T7 are interleaved.

Epoch boundaries act as markers in the flow of row events generated by each node, which are then used as consistent points to recover to. Epoch boundaries also allow a single system wide unified transaction log to be generated from each node's row change stream, by merge-sorting the per-node row change streams by epoch number. Note that the order of events within an epoch is still not tightly constrained. As concurrent transactions can only interact via row locks, the order of events on a single row (Table and Primary key value) signifies transaction commit order, but there is by definition no order between transactions affecting independent row sets.

To record a Binlog of Ndb row changes, MySQLD listens to the row change streams arriving from each data node, and merge-sorts them by epoch into a single, epoch-ordered stream. When all events for a given epoch have been received, MySQLD records a single Binlog transaction containing all row events for that epoch. This Binlog transaction is referred to as an 'Epoch transaction' as it describes all row changes that occurred in an epoch.
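
The merge itself is a standard k-way merge keyed on epoch number. Here is a small self-contained Python sketch of the idea, reusing the transactions from the diagrams above as invented example data; the real binlog injector additionally waits until it has received every event for an epoch before recording the epoch transaction.

  # Toy k-way merge of per-data-node row event streams into an epoch-ordered
  # stream, grouped into one 'epoch transaction' per epoch.
  import heapq
  from itertools import groupby

  node1 = [(22, "t4"), (23, "t1"), (23, "t2"), (24, "t5")]
  node2 = [(22, "t4"), (23, "t7"), (23, "t1")]
  node3 = [(22, "t3"), (23, "t2"), (23, "t7")]

  # Each per-node stream is already sorted by epoch, so heapq.merge yields a
  # single stream sorted by epoch (event order within an epoch is not significant)
  merged = heapq.merge(node1, node2, node3, key=lambda ev: ev[0])

  for epoch, events in groupby(merged, key=lambda ev: ev[0]):
      rows = [row for _, row in events]
      print(f"Epoch transaction {epoch}: ndb_apply_status epoch={epoch}, rows={rows}")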

Epoch transactions in the Binlog

Epoch transactions in the Binlog have some interesting properties :
  • Efficiency : They can be considered a kind of Binlog group commit, where multiple user transactions are recorded in one Binlog (epoch) transaction. As an epoch normally contains 100 milliseconds of row changes from a cluster, this is a significant amortisation.
  • Consistency : Each epoch transaction contains the row operations which occurred when moving the cluster from epoch boundary consistent state A to epoch boundary consistent state B
    Therefore, when applied as a transaction by a slave, the slave will atomically move from consistent state A to consistent state B
  • Inter-epoch ordering : Any row event recorded in epoch n+1 logically happened after every row event in epoch n
  • Intra-epoch disorder : Any two row events recorded in epoch n, affecting different rows, may have happened in any order.
  • Intra-epoch key-order : Any two row events recorded in epoch n, affecting the same row, happened in the order they are recorded.

The ordering properties show that epochs give only a partial order, enough to subdivide the row change streams into self-consistent chunks. Within an epoch, row changes may be interleaved in any way, except that multiple changes to the same row will be recorded in the order they were committed.
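
These ordering properties can be restated as a tiny helper: given two row events tagged with their epoch, key and position in the stream, it reports whether one happened before the other or whether they are unordered. The tuples are an illustrative encoding, not anything the Binlog actually exposes in this form.

  # Happened-before between two binlogged row events, using only the guarantees
  # listed above. Returns True/False across epochs or for the same key within an
  # epoch, and None when the events are unordered.

  def happened_before(ev_a, ev_b):
      """Each event is (epoch, key, position_in_stream)."""
      epoch_a, key_a, pos_a = ev_a
      epoch_b, key_b, pos_b = ev_b
      if epoch_a != epoch_b:
          return epoch_a < epoch_b        # inter-epoch ordering
      if key_a == key_b:
          return pos_a < pos_b            # intra-epoch, same key : commit order
      return None                         # intra-epoch, different keys : unordered

  print(happened_before((22, "x", 0), (23, "y", 5)))   # True  (earlier epoch)
  print(happened_before((23, "x", 2), (23, "x", 4)))   # True  (same key, same epoch)
  print(happened_before((23, "x", 2), (23, "y", 4)))   # None  (unordered)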

Each epoch transaction contains the row changes for a particular epoch, and that information is recorded in the epoch transaction itself, as an extra WRITE_ROW event on a system table called mysql.ndb_apply_status. This WRITE_ROW event contains the binlogging MySQLD's server id and the epoch number. This event is added so that it will be atomically applied by the Slave along with the rest of the row changes in the epoch transaction, giving an atomically reliable indicator of the replication 'position' of the Slave relative to the Master Cluster in terms of epoch number. As the epoch number is abstracted from the details of a particular Master MySQLD's binlog files and offsets, it can be used to failover to an alternative Master.

We can visualise a MySQL Cluster Binlog as looking something like this. Each Binlog transaction contains one 'artificially generated' WRITE_ROW event at the start, and then RBR row events for all row changes that occurred in that epoch.

    BEGIN
      WRITE_ROW mysql.ndb_apply_status server_id=4, epoch=6998
      WRITE_ROW ...
      UPDATE_ROW ...
      DELETE_ROW ...
      ...
    COMMIT   # Consistent state of the database

    BEGIN
      WRITE_ROW mysql.ndb_apply_status server_id=4, epoch=6999
      ...
    COMMIT   # Consistent state of the database

    BEGIN
      WRITE_ROW mysql.ndb_apply_status server_id=4, epoch=7000
      ...
    COMMIT   # Consistent state of the database
    ...


A series of epoch transactions, each with a special WRITE_ROW event for recording the epoch on the Slave. You can see this structure using the mysqlbinlog tool with the --verbose option.

Rows tagged with last-commit epoch

Each row in a MySQL Cluster stores a hidden metadata column which contains the epoch at which a write to the row was last committed. This information is used internally by the Cluster during node recovery and other operations. The ndb_select_all tool can be used to see the epoch numbers for rows in a table by supplying the --gci or --gci64 options. Note that the per-row epoch is not a row version, as two updates to a row in reasonably quick succession will have the same commit epoch.

Epochs and eventual consistency

Reviewing epochs from the point of view of my previous posts on eventual consistency we see that :
  • Epochs provide an incrementing logical clock
  • Epochs are recorded in the Binlog, and therefore shipped to Slaves
  • Epoch boundaries imply happened-before relationships between events before and after them in the Binlog

These properties mean that epochs are almost perfect for monitoring conflict windows in an active-active circular replication setup, with only a few enhancements required.

I'll describe these enhancements in the next post.

Edit 23/12/11 : Added index