Can node2 replicate from node1 with auto-positioning? (In other words, can node2 auto-position on node1?) The answer is "no", and the script above would print:

```
node2 cannot auto-position on node1, missing GTID sets 7b07bfca-d4d9-11e5-b97a-fe46eb913463:5052064794-5052104637
```

GTID sets can be overwhelming to humans because they're large, long sets. Let's highlight the difference in the GTID sets above:

```
node1: 7b07bfca-d4d9-11e5-b97a-fe46eb913463:5052064794-5054955706
node2: 7b07bfca-d4d9-11e5-b97a-fe46eb913463:5052104638-5054955706
```

The first number in node2's interval is greater (104638 > 064794), which means node2 has fewer GTIDs than node1. If that sounds confusing, here's why: GTIDs are sequentially numbered transactions. Let's simplify:

```
node1: 1-10
node2: 5-10
```

node1 has ten transactions: 1 through 10. node2 has only six transactions: 5 through 10; it's missing transactions 1 through 4.

The real node2 above is missing transactions 5052064794 through 5052104637, as the script would print. Consequently, when node2 attempts to auto-position on node1, it tries to fetch the missing GTIDs from node1 but fails because node1 has already purged them from its binary logs. This causes replication error "Last_IO_Error: Got fatal error 1236 from master …" on node2.

## Ghosted GTIDs

Here's the strange thing: why is node2 missing GTIDs at the start or middle of the set? Using the simplified sets again, how did node2 execute transactions 5-10 but miss 1-4? Presumably, if node2 is identical to node1 despite missing GTIDs 1-4 (which you can verify with a checksum tool), then node2 either has the changes from 1-4 or those changes were overwritten by 5-10. I don't know how or why this situation arises (probably due to cloning new replicas to replace failed ones, combined with purging binary logs), but I call these types of missing and nonrecoverable GTIDs "ghosted GTIDs".

Before showing how to fix ghosted GTIDs, it's extremely important to decide whether it's safe to ignore the missing GTIDs. If unsure, or if one node is not already replicating from the other, err on the side of caution and re-clone the node. Don't risk messing around with GTID sets unless you're sure. If one node is replicating from the other, use a checksum tool to verify that they're in sync and identical. If true, then the fix is safe.

## The Fix

GTIDs are stored in binary logs because binary logs are the only official, source-of-truth record of changes. (Configure MySQL with sync_binlog = 1!) Consequently, binary logs and the global variables gtid_purged and gtid_executed are linked. To fix ghosted GTIDs, we need to rewrite gtid_executed, which means we need to RESET MASTER to purge the binary logs. In effect, we're telling MySQL to forget the past and let us rewrite history.

WARNING: RESET MASTER breaks downstream replicas!

Because of that warning, the first step is to isolate the node to fix. Continuing the example above, this means isolating node2 so that nothing replicates from it. For example, if node3 were replicating from node2, we would need to make node3 replicate from node1; otherwise, the fix on node2 would break node3 and we would have to re-clone node3 after fixing node2.

Once the node to fix is isolated (it's a standalone replica with nothing replicating from it), execute the fix on the replica. Note that RESET MASTER requires the RELOAD privilege. (In MySQL 8.4 and later, the statement is named RESET BINARY LOGS AND GTIDS.) For a server where binary logging is enabled (log_bin is ON), the statement deletes all existing binary log files and resets the binary log index file, resetting the server to its state before binary logging was started.
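Here's a minimal sketch of the sequence, not verbatim commands: it assumes node2 is the replica being fixed and that node1's full GTID set from the example is the set node2 should claim as purged.

```sql
-- All on replica to fix (node2); a sketch under the assumptions above
STOP REPLICA;   -- STOP SLAVE before MySQL 8.0.22

-- Purge the binary logs and clear gtid_executed.
-- In MySQL 8.4+ this statement is RESET BINARY LOGS AND GTIDS.
RESET MASTER;

-- Rewrite history: claim node1's full set, which includes the ghosted GTIDs.
SET GLOBAL gtid_purged = '7b07bfca-d4d9-11e5-b97a-fe46eb913463:5052064794-5054955706';

START REPLICA;  -- START SLAVE before MySQL 8.0.22
```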
After the reset, a new, empty binary log file is created so that binary logging can be restarted. To confirm that binary logging is enabled:

```sql
SELECT @@log_bin;
-- Or query the variables table (information_schema.GLOBAL_VARIABLES before MySQL 5.7;
-- performance_schema.global_variables in 5.7 and later):
SELECT * FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME = 'LOG_BIN';
```
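Once replication is running again, a quick sanity check is possible with MySQL's built-in GTID_SUBTRACT() function; this check is a suggestion, not part of the procedure above. The literal set is node1's gtid_executed from the example:

```sql
-- Run on node2: an empty result means node2 is no longer missing
-- any of node1's GTIDs, so auto-positioning can succeed.
SELECT GTID_SUBTRACT(
         '7b07bfca-d4d9-11e5-b97a-fe46eb913463:5052064794-5054955706',
         @@GLOBAL.gtid_executed
       ) AS missing_gtids;
```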