Hi. My name is Luis Lage, and I'm the MySQL Replication Team Lead at Oracle and have been for a few years now. Today, I'll be talking to you about the new replication features in MySQL 5.7. There will be time at the end of the session for questions and answers, but feel free to fire away your questions while the session is ongoing. Let me just give you a few seconds to go through the Safe Harbor statement.
Okay. So this is the agenda for today's session. I will start by celebrating the fact that MySQL 5.7 is GA. It went GA recently, and together with it, the replication framework. Then I will talk a little bit about what MySQL replication actually is, showcase its building blocks and so on, to give you an overview of how it works and what it actually is.
Then we'll dive into the new replication features in MySQL 5.7. And after that, I will talk a little bit about the work that we have ongoing by presenting some of the feature previews that we have on labs.mysql.com. In the end, I will conclude the session by trying to give you an overview of our roadmap. And here are some interesting facts about MySQL 5.7 replication in particular.
So there were 40 replication worklogs pushed to MySQL 5.7. A worklog is a unit of work, so usually this means that we implemented this amount of work to create the new features. We have merged 8 contributions into MySQL 5.7 replication. And in total, we have 19 major enhancements to the replication core, and 14 refactoring and modularization related pieces of work that were also done in the code base.
And in addition to this, we have one entirely new out-of-core replication plugin that we call MySQL Group Replication, still on labs, but exhibiting a very rapid release rate. And we actually want to thank you. After 2.5 years in the making, the team is very happy to present what we have been doing to all of you and to give you this for you to use. So thank you for being part of it. A lot of you have actually contributed in one way or another, either through bug reports, feature requests, contributions and so on.
So we seriously hope that you enjoy MySQL 5.7, in particular the new replication features in it. MySQL replication is very simple in its architecture. There's basically one server that takes incoming updates, usually called the master. And then there are one or more servers that copy these updates from the master and install them in their local databases. These servers are often called the slaves.
And to make it all work, there's a log that is being moved around. This log captures the changes on the master and is used on the slave to replay them against its local database. This log is usually called the binary log. In this process, there's a bunch of threads involved. On the master side, there's a sender thread per connected slave.
And on the slave side, there's a receiver thread and a set of applier threads that actually apply the binary log against the local database. The binary log is an important part of the MySQL replication architecture. It captures the changes that have happened on the master side, and then the changes in this file are copied to the slave, and the slave uses the same binary log infrastructure to apply the changes to its local database. Changes are captured on the master side either in statement format, in which case the slave has to replay these statements against its local database, or in row format, which is basically a binary representation of the rows that were changed. And in this case, the slave will install these changes into its local database.
Apart from data-related events or units in the binary log, there are also control events, which basically relate to the fact that the server needs to know when a binary log is rotated, what the format of the binary log is, and so on. So replication can be used for read scale-out. Imagine that you're building a business: you're deploying your databases, you have opened your web shop, you have incoming load, your business starts to grow, and you want to offload the master by redirecting read queries to some other server. For this, you can deploy a slave, and your slave will be continuously copying changes from the master, so you can do something like read/write splitting. Then, once your business starts to grow further, it could be that your reads are already taking a lot of your slave's capacity.
So you might want to deploy more and more slaves, and in this way you can achieve read scale-out. There's also write scale-out, but write scale-out is not really addressed by MySQL replication. The fact is that there is MySQL Fabric, which could actually help in this case by sharding your data, but we will not cover this in this presentation. Replication is also used as a means to provide a redundant MySQL service.
So in this case, we have depicted here a master and two slaves. And if the master crashes, we can make one of the slaves the new master and allow the service to continue with minimal disruption. Another typical example is to do some sort of online backup, or to do reporting, relying on the slave to run these very big reporting queries without interfering with the performance of the master. And it's also useful for doing replication between data centers. This is interesting because it can be used for deploying solutions for disaster recovery.
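As a concrete illustration, setting up one of those slaves in the classic, file-and-position style might look like this. This is a sketch; the host name, credentials and coordinates are all hypothetical:

```sql
-- On the slave: point the receiver thread at the master (all values hypothetical).
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_PORT = 3306,
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'binlog.000042',  -- classic file/position coordinates
  MASTER_LOG_POS  = 120;

START SLAVE;         -- starts both the receiver (IO) and applier (SQL) threads
SHOW SLAVE STATUS\G  -- inspect replication progress
```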
So let's dive into the new replication features in MySQL 5.7. I'll split the new replication features across different areas. And the first one I'll address is usability and more online operations. There was a lot of work put into this area. The first item is online reconfiguration of Global Transaction Identifiers.
Global Transaction Identifiers is a feature that was introduced in MySQL 5.6 and provides a new positioning scheme in the replication stream for both masters and slaves. The fact is that this positioning strategy is incompatible with the old, traditional one, which relies on file names and positions inside the file. Ultimately, this means that to turn on this feature in MySQL 5.6, the user has to incur some downtime, because all the servers have to be synchronized before the new global transaction identifiers can be switched on in the replication stream. To overcome this problem of having to synchronize the servers and thereby introduce an offline period into the system, MySQL 5.7 provides a procedure to turn the feature on online. While the procedure is ongoing, both reads and writes are allowed into the system, there's no need to synchronize the servers at any point, no need to restart servers, and no need to change the topology either.
And the fact is that this procedure works in arbitrary topologies. A big effort was put into making this procedure crash safe, in the sense that should anything go wrong in the middle of the migration, let's call it that, the user can always roll back the changes. At its core, the procedure is very simple. We have to go through stages to go from GTID_MODE=OFF to GTID_MODE=ON: from no global transaction IDs at all to global transaction IDs always.
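Sketched in SQL, the stages run on each server roughly like this. This is only an outline; the full procedure in the reference manual includes waiting for all servers to catch up between steps:

```sql
-- Sketch of the online GTID migration stages (run on every server):
SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = WARN;  -- first, find offending workload
SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;

SET @@GLOBAL.GTID_MODE = OFF_PERMISSIVE;  -- new transactions anonymous, GTIDs tolerated
SET @@GLOBAL.GTID_MODE = ON_PERMISSIVE;   -- new transactions get GTIDs, anonymous tolerated
-- wait until no anonymous transactions remain in the replication stream, then:
SET @@GLOBAL.GTID_MODE = ON;
```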
And the fact is that we have to go through these stages to allow different positioning schemes to coexist in the same replication stream. So we're slowly morphing through these stages from old-style positioning to new-style positioning. Anyway, the procedure is completely detailed on this reference manual page here, so feel free to have a look at it. We also have online reconfiguration of replication filters. It used to be the case that to change the replication filters, one would need to restart the server, and that's not the case anymore in MySQL 5.7.
So we can change the replication filters dynamically by issuing this command here, CHANGE REPLICATION FILTER, with REPLICATE_DO_DB. In this case we're exemplifying with REPLICATE_DO_DB, but it is applicable to all slave filters. And the fact is that you just need to stop the replication threads, change the filters, and restart the replication threads. There's no need to restart the entire server anymore. We have also introduced online reconfiguration of the replication receiver and applier threads separately.
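For instance, the filter change just described might look like this; the database names are placeholders:

```sql
STOP SLAVE SQL_THREAD;  -- only the applier threads need to stop, not the server
CHANGE REPLICATION FILTER REPLICATE_DO_DB = (db1, db2);  -- db1, db2 are placeholders
START SLAVE SQL_THREAD;
```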
And I will go into the details. So imagine that there is a replication topology just like this one, a master and two slaves, and the master crashes. And then you want to promote one of the slaves to take A's role, so one of the slaves will become a master. In this case, the new master is B. Then one can fail over C's receiver thread from A to B without having to stop C's applier threads.
And thus, this enables more online operations during failover procedures. So one can change the master without having to stop the applier threads. And the reverse is also true: we can change the applier configuration without having to stop the receiver thread. In this case, what is shown here is that we are changing the delayed slave period to 3600 seconds without having to stop the IO thread, or the receiver thread.
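Both directions of this can be sketched in SQL; the host name below is hypothetical:

```sql
-- Fail over C's receiver thread to the new master B without stopping the appliers:
STOP SLAVE IO_THREAD;
CHANGE MASTER TO MASTER_HOST = 'B.example.com';  -- hypothetical host
START SLAVE IO_THREAD;

-- Conversely, change an applier setting without stopping the receiver:
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- delayed slave period of one hour
START SLAVE SQL_THREAD;
```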
We have also worked quite a bit on improving replication monitoring in MySQL 5.7, in particular on the slave side of it. We worked quite a bit to instrument the slave-side replication framework and then expose the data collected through this instrumentation via the performance schema replication tables. These tables also allow us to more consistently and seamlessly integrate the monitoring of new features like multi-source replication or MySQL Group Replication, where SHOW SLAVE STATUS would, for instance, fall short or not scale at all. And we took this opportunity to group logically related information instead of having it all in one place. This resulted in six different replication performance schema tables.
As you can see here, there's a table for connection configuration, connection status, applier configuration, applier status, applier coordinator status and worker status. So given these tables, one can actually inspect what each worker thread is doing. In this case, we can see, for instance, which global transaction identifier some worker thread has processed, i.e. applied, recently. We can also find out, if a worker thread errors out, why it errored out, as you can see in these fields here. Improving replication performance, that's another area we focused on a lot in MySQL 5.7.
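Inspecting the workers as just described might look like this (a sketch; column names as in the 5.7 performance schema):

```sql
-- What is each applier worker doing, and if one errored out, why?
SELECT WORKER_ID,
       LAST_SEEN_TRANSACTION,   -- GTID of the transaction the worker last processed
       LAST_ERROR_NUMBER,
       LAST_ERROR_MESSAGE
FROM performance_schema.replication_applier_status_by_worker;
```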
So let's start by talking about improved applier throughput, which basically impacts both the master and the slave. It impacts the master because the master has to put some extra information into the binary log for the slave to be able to schedule transactions in parallel in a more optimal way. Before I delve into the details, let me just say that in our internal benchmarks, we have observed a 10x throughput improvement when comparing the multi-threaded slave applier against the single-threaded applier. This, of course, is possible because the multi-threaded slave is applying many transactions in parallel while the single-threaded applier applies them one by one. But it also shows that the slave makes good use of the information that the master puts into the binary log when it comes to deciding which transactions can be scheduled in parallel.
Under the hood, what happens is that the master records in the binary log which transactions have executed concurrently and have not blocked each other. Then the slave takes this information, and if any two transactions did not block each other on the master during their execution, they will be scheduled in parallel. To illustrate this procedure, let's consider this example here. We have three transactions running on the master, and T1 is executing first. It starts committing, and immediately after, T2 also starts committing.
The point here is that T2 starts committing before T1 finishes its commit. This means that T2 was actually never blocked by any of T1's locks. T3, on the other hand, starts committing after T1 has already finished its commit and released its locks. So we don't really know whether T3 was blocked by T1 or not. The interesting point, though, is that T3 was not blocked by any of T2's locks, since T2 had not yet finished its commit when T3 started committing.
So ultimately, what this means is that the slave can schedule T1 and T2 in parallel, but not all of T1, T2 and T3 together. However, the slave can schedule T3 in parallel with T2 after T1 has finished its execution. So in a way, this creates the concept of a sliding window that the slave can use to parallelize transactions when they are to be applied. This new multi-threaded applier scheduling policy supports both replication formats: whether one is replicating in statement or row format, it will work.
The scheduling policy itself can be controlled using the system variable slave_parallel_type. One can set it to LOGICAL_CLOCK, which is the new scheduling policy, or DATABASE, which is the old scheduling policy from MySQL 5.6. Although these performance results are already quite promising, when we talk about performance development, or improving performance, we usually refer to a highly iterative and highly repetitive process. So there were a lot of pain points, a lot of scalability issues, that one had to deal with. And the fact is that we expect that this work doesn't stop here.
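Enabling the new scheduler might look like this (a sketch; the worker count is purely illustrative):

```sql
-- Switch the applier to the logical-clock scheduling policy:
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type    = 'LOGICAL_CLOCK';  -- 'DATABASE' is the 5.6-style policy
SET GLOBAL slave_parallel_workers = 8;                -- number of applier workers, illustrative
START SLAVE SQL_THREAD;
```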
It will continue as we introduce new features, as we improve the MySQL replication framework in 5.8 and so on. So this is really interesting work that is happening. We have also worked quite a bit on the user and sender thread synchronization. The fact is that when a session is executing a transaction and writing to the binlog, it has to compete with the sender threads for a resource, this resource being the binary log itself. Therefore, we reworked the synchronization between these different types of threads in such a way that concurrent reads are allowed while the user sessions are writing to the binary log.
This removes a scalability pain point around the binary log resource. We have also done some large refactoring of the sender thread. This enabled us to deploy better memory management of its own data structures, as well as to remove some scalability issues. Ultimately, what this means is that even though the send buffer is still dynamic, it is not allocated and freed every time an event is sent. Instead, we made the buffer adaptive with respect to the workload.
So the buffer grows and shrinks according to some metrics, depending on the workload, depending on the event load that is being pushed to the slave. This change, together with the enhancements that I've mentioned before, increases the master's scalability, reduces CPU consumption and copes better with peak loads. So if we have a peak of load, the buffer will grow, and it will slowly shrink once the peak is over. We've done some micro-benchmarks to measure the impact of our changes to the sender thread. And what we found in our micro-benchmark is that the master is able to sustain the throughput as we connect more slaves to it.
So in this case, we're comparing 5.6.16 against 5.7.4, and you can see that at around 30 slaves connected to the master, 5.6.16 is already having some difficulties keeping up, while 5.7.4 is able to sustain the throughput even though it has more slaves connected to it. Another area that we spent some of our effort on was semi-synchronous replication, in particular to make it faster. For this, we deployed an acknowledgment receiver thread that is only responsible for collecting the acknowledgments from slaves and notifying those sessions that are waiting for an acknowledgment to resume. This means that the sender thread is not responsible for receiving the acknowledgments anymore and can push the binary log to the slave as fast as it can. In practice, this means that consecutive transactions do not block each other while waiting for acknowledgments.
So the sender thread is able to send T1 and T2, and the acknowledgment thread will be responsible for receiving the acknowledgments, as opposed to the sender thread sending T1 first, waiting for the acknowledgment, then sending T2 and waiting for the acknowledgment again. The acknowledgment receiver thread starts implicitly once semi-sync is activated. It also stops when we deactivate semi-sync. Given all the improvements to the sender thread and to semi-sync performance, we can now think about moving the durability of our data from the local disk to the replication itself. So if I'm okay with considering changes durable as long as they are on my master and on my slave, then I can, for instance, disable all the fsyncs to local disk.
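Relaxing local durability in favor of semi-sync replication, as just described, might look like this. This is a sketch, and it only makes sense if you accept the replicated copy on the slave as your durability guarantee:

```sql
-- Rely on semi-sync replication rather than local fsyncs for durability (sketch):
SET GLOBAL rpl_semi_sync_master_enabled    = 1;  -- the semi-sync plugin must already be installed
SET GLOBAL sync_binlog                     = 0;  -- no fsync of the binary log on every commit
SET GLOBAL innodb_flush_log_at_trx_commit  = 2;  -- relax InnoDB redo-log flushing
```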
And in the event of the master crashing, I can go to the slave. And since my changes were semi-synced to the slave, I can get my changes from the slave instead. And while we were doing some of these benchmarks and comparing semi-sync durability to disk durability, we actually found out that by relying on semi-sync, or replication in this case, for durability, we could get a lot more throughput on the master side in some cases. And this is what this figure is depicting here. The third area that I want to highlight that we have worked on in MySQL 5.7 is dependability.
In the original semi-synchronous replication, as it was introduced in 5.5 and then 5.6, there is a possibility that some changes are externalized to concurrent sessions while the original session that actually made the change is waiting for an acknowledgment. This could potentially lead to a lost-update situation: for instance, the master crashes while the session is waiting for the acknowledgment from the slave, and some concurrent read operation has already externalized the change. We worked in 5.7 to make this go away. This slide is depicting what happens in MySQL 5.7 today. So we can see here two transactions.
One is an insert and the second one is a select. The insert executes, prepares, is written to the binary log, and then it is sent to the slave, and only when the slave acknowledges back do we commit the transaction in the storage engine. So only at that point in time do we externalize the changes made by T1. Before this change, so in 5.5 and 5.6, T2 could externalize the insert already before the acknowledgment was received from the slave. And this would happen because the waiting happened after the commit was done, and not before.
So in 5.7, we wait between the point in time that we write the changes to the binary log and the point in time that we commit the transaction to the storage engine. So no concurrent read operation will externalize the data before that point in time. In practice, as I said, the master waits for the acknowledgment before committing, as opposed to the master waiting for the slave's acknowledgment after committing. And should the master fail, then any transaction that it may have externalized is already persisted on the slave. This also means that even if a transaction has not been externalized on the master, there is the possibility that it has already been transferred to the slave at the moment the master crashed.
The user can choose between the original semi-synchronous behavior and the new semi-synchronous behavior by setting the option rpl_semi_sync_master_wait_point. You can set it to AFTER_SYNC, which is basically the new behavior, or to AFTER_COMMIT, which is the old behavior from 5.5 and 5.6. Also in semi-sync replication, we deployed the infrastructure for waiting for multiple acknowledgments instead of just one. So if a transaction wants to wait for more than one acknowledgment before it resumes the commit, it can set the number of acknowledgments it wishes to wait for. In practice, the master does not commit a transaction until it gets acknowledgments from N slaves.
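In SQL, choosing the behavior and the acknowledgment count might look like this (a sketch; the count of 2 is illustrative):

```sql
-- Choose between the new and the old semi-sync behavior:
SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_SYNC';      -- new in 5.7
-- SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_COMMIT'; -- the 5.5/5.6 behavior

-- Wait for acknowledgments from two slaves before committing:
SET GLOBAL rpl_semi_sync_master_wait_for_slave_count = 2;
```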
And this is dynamically settable through the system variable rpl_semi_sync_master_wait_for_slave_count. So if you set it to N, then you need N acknowledgments before the transaction is actually committed. This is a diagram depicting what actually happens under the hood. There's a master and three semi-sync slaves, and then there's a transaction T1 coming in, which requested two slaves to acknowledge before the session is released to the application. So once T1 commits, or asks for a commit, the changes in the binary log are replicated to the slaves.
Two of them will acknowledge back, and at that point in time, the master will release the session to resume. And that's pretty much it. Flexibility is also an interesting area that we worked on in MySQL 5.7: making replication work in a lot more use-case scenarios, making it more flexible and adjustable to different kinds of use cases. When we introduced global transaction identifiers in MySQL 5.6, we actually introduced a major feature, or actually a major feature set. Global Transaction IDs is really three things packed together.
It's the global transaction identifier itself. It's the auto-skipping procedure: if a server has handled a transaction, it is able to know that it has handled it, and it will auto-skip the same transaction if the user wrongly tries to resubmit it. And it's also the auto-positioning protocol between masters and slaves, which allows slaves and masters to automatically negotiate which parts of the replication stream a slave is missing. And this is all good, but along with it came a set of requirements as well.
And one of these requirements is the fact that if a slave wants to use the auto-positioning protocol and global transaction IDs, it has to have the binary log on in MySQL 5.6. And this works for a lot of our users, but doesn't work so well for some of our other users, who would prefer the slaves not to have the binary log on because these slaves will never be candidate slaves. They will never be candidates to replace the master in case the master crashes. So they would rather use auto-positioning, but not have to have the binary log on. And this is what we have worked on in MySQL 5.7.
So starting in MySQL 5.7, the slaves can use GTIDs when the binary log is disabled. As I said, these slaves not having the binary log means that they will never be candidate slaves to replace the master, in the sense that they cannot serve previous replication history because they don't have their own binary logs. But at the same time, they can still use global transaction IDs to do the auto-positioning. So to make it all work without having the binary log on the slave, we need to save the global transaction ID execution history somewhere. And this somewhere, this place where we store the global transaction ID execution history, is actually a system table called gtid_executed.
You can see on this slide the schema of this table, which is basically a table with three fields: source_uuid, interval_start and interval_end. Global transaction IDs are inserted into this table as transactions commit, and periodically a range compression thread runs and compresses all these records in the table into a single range. The period of this thread is dynamically configurable using the system variable gtid_executed_compression_period = N, N being the number of transactions between runs of the compression procedure. There are some details to this procedure of storing global transaction IDs in a table.
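Inspecting and tuning this mechanism might look like this (a sketch; the period of 1000 is illustrative):

```sql
-- Inspect the stored GTID execution history (schema as described above):
SELECT source_uuid, interval_start, interval_end
FROM mysql.gtid_executed;

-- Run the range compression every 1000 transactions (value illustrative):
SET GLOBAL gtid_executed_compression_period = 1000;
```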
So what happens if my binary log is enabled? Will the table be filled or not? What happens if I don't assign a global transaction ID to a transaction and the binary log is disabled? Will this transaction get a global transaction ID assigned to it or not? To answer these questions, let me just make it clear that if the binary log is enabled, global transaction IDs are stored in the binary log.
And on rotation of the binary log, the transaction IDs that were written to the binary log that is being rotated are copied into the system table. This was done to reduce the performance impact of storing global transaction IDs in the binlog and the table at the same time. If the binary log is disabled, we store global transaction IDs in the table transactionally. New transactions are not assigned a global transaction ID if GTID assignment is set to automatic, because these transactions are local transactions. They don't exist anywhere else.
So it's kind of meaningless assigning them a global transaction ID. But the key takeaway from this slide is that we always store global transaction IDs transactionally, and this means that the process of storing transaction identifiers is crash safe. So if there is a crash, we will always recover the data together with the global transaction ID. Another big feature in MySQL 5.7 is multi-source replication. Traditionally, or since the beginning of time in the MySQL replication universe, the user could only set up replication from one master to multiple slaves, kind of like a fan-out replication scheme.
Now with MySQL 5.7 replication, you can have a server aggregating, or pulling, data from multiple masters. So in a way, a slave can have multiple masters. And this is great because it enables some different workflows using MySQL replication. Some of the use cases that may be covered by this feature are: integrated backup, so aggregating multiple different parts of your data that are scattered across different servers into a single server and then backing everything up from there; or having multiple shards of your data spread across different servers, and when you want to run cross-shard operations, you can aggregate them on a single server and run your complex queries there; or, for instance, acting as a single hub for inter-cluster replication, so pushing all of the data in your data center into some server and then replicating across the wide area through that server.
To sum it up, a server can replicate from multiple sources. And for each source, there's an instance of the slave replication framework, which we call a channel. A channel is a receiver thread, a relay log and a set of applier threads. And these channels can be operated and configured separately. With multi-source replication, we also have support for inspecting each channel instance separately through the new replication performance schema tables.
So we have things like replication_applier_status_by_coordinator, which shows multiple entries, one row per channel. The same thing goes for replication_applier_status_by_worker, and the same for replication_connection_status. So multiple records, multiple rows in these tables, showing stats for each channel. Multi-source replication is also integrated with GTIDs, and it's integrated with the crash-safe tables.
And there's virtually no limit on the number of sources that a slave, or a server, can have. However, we have capped it in the source code at 256. But if you're building the MySQL server yourself and you need more than 256 sources, then you can just search for the variable that defines this limit and change it to your liking. And, as I said, we're able to manage each source separately. So in the CHANGE MASTER command, you can add the FOR CHANNEL clause to specify which channel that CHANGE MASTER command will operate on.
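Managing a channel of this kind might look like this (a sketch; host, user and channel names are hypothetical):

```sql
-- Configure and start one channel per source:
CHANGE MASTER TO
  MASTER_HOST = 'source1.example.com',  -- hypothetical host
  MASTER_USER = 'repl',
  MASTER_AUTO_POSITION = 1              -- GTID-based auto-positioning
FOR CHANNEL 'source1';

START SLAVE FOR CHANNEL 'source1';

-- Each channel can be operated and inspected separately:
STOP SLAVE FOR CHANNEL 'source1';
SELECT CHANNEL_NAME, SERVICE_STATE
FROM performance_schema.replication_connection_status;
```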
And there's also a bunch of new features and improvements that I was not able to fit into any of the previous categories, but they are still interesting to mention. So let's just quickly go through them. And here's the list, the list of smaller and yet interesting enhancements. Let me just quickly go through it then.
There's an enhancement to the multi-threaded applier which makes it able to retry failed transactions. This was not the case in MySQL 5.6, so this is new in 5.7. There's an option as well to make the multi-threaded applier preserve the commit order, so transactions will commit in the order in which they appear in the relay log. There's also some work done in the mysqlbinlog tool.
Basically, we have new SSL options and also an option to rewrite the database when outputting row events. There's also a very interesting global-transaction-ID-related function, which makes the session wait until a given set of transaction IDs has been processed by that server. And there's also an option to track which transaction ID was generated as part of the current transaction; this information is reported back in the OK packet of the MySQL protocol. There's also support for XA transactions when the binary log is on.
There was a lot of work done on the XA framework in 5.7, including in the replication framework. So now, when a user prepares an XA transaction and then disconnects, the transaction is preserved even if the binary log is enabled. And then there's a notable change to the defaults with respect to binlog_format and sync_binlog: binlog_format is now ROW by default, and sync_binlog is set to 1 by default as well. Finally, there are some interesting options to fine-tune the binary log group commit procedure.
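The group commit knobs in question might be used like this (a sketch; the values are purely illustrative):

```sql
-- Wait up to 100 microseconds for more transactions to join a group commit,
-- but flush immediately once 20 transactions are queued (values illustrative):
SET GLOBAL binlog_group_commit_sync_delay          = 100;
SET GLOBAL binlog_group_commit_sync_no_delay_count = 20;
```

Larger delays tend to produce bigger commit groups, which in turn means more transactions marked as parallelizable for the slave's applier.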
So we can add delays, we can tweak it to better fit our workload, and this also has an impact on the number of transactions that can be marked as parallel on the master and then scheduled in parallel on the slave. So it can have an impact on the throughput of the slave as well. There's also an interesting piece of infrastructure that has been declared GA very recently and that goes almost hand in hand with replication. So let's start: we have the MySQL Router, which has recently been declared GA.
And the motivation to create this piece of infrastructure was mostly driven by the fact that we wanted to interact with Fabric without having to upgrade existing connectors, and even to allow users whose connectors do not support Fabric at all to be able to use it anyway. It's also a piece of infrastructure that hides the complexity associated with read/write splitting and automatic failovers from the application itself. So the Router acts as an entry point to our replicated system, where all these things are happening. From the very beginning, the MySQL Router was designed with performance and flexibility in mind. So it has very good performance, and it has a plugin-driven architecture. The Router provides connection-based routing, simple load balancing, and also seamless failover based on Fabric groups.
So when coupled with Fabric, you can automate failovers without having to expose that to the application. Let me just go briefly through what we have been doing and releasing on labs.mysql.com. We have on labs a very exciting new replication plugin called the MySQL Group Replication plugin. It provides multi-master update-everywhere, meaning different clients can update the same row on different servers at the same time, and this will be handled.
This replication plugin also provides automatic group membership management and failure detection. And there's no need for server failover, because pretty much every one of them is a master, and they act as a group. It also provides automatic reconfiguration, with no single point of failure. And it is shared-nothing state machine replication, so every server has a copy of the entire database itself.
And it is InnoDB compliant. So basically, you have all the look and feel of InnoDB and MySQL that you are used to, and you can deploy this on off-the-shelf hardware. This is a great technology for deployments where elasticity is a requirement, for instance cloud-based infrastructures. And the fact is that it is very well integrated with all the different features in replication. It has some requirements.
For instance, it requires global transaction IDs and row-based replication. And you can monitor it through Performance Schema tables as well. It is elastic and self-healing in the sense that you can add and remove nodes and the group will notice that. When a node is removed, the group will notice that it is gone. And if you add a server, the group will realize that the new server has joined, and the group will automatically transfer the missing state, the missing binary logs.
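To make those requirements concrete, here is a hedged sketch of the server options Group Replication builds on, together with a Performance Schema query for checking group membership. The option values are shown as an illustration of the prerequisites mentioned above, not a complete Group Replication configuration:

```sql
-- Prerequisites sketched in my.cnf style (GTIDs and row-based replication):
--   gtid_mode = ON
--   enforce_gtid_consistency = ON
--   binlog_format = ROW

-- Monitoring group membership through a Performance Schema table:
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
FROM performance_schema.replication_group_members;
```

A joining server shows up here in RECOVERING state while it fetches the missing state, and moves to ONLINE once it has caught up with the group.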
As you can see here, it has had a strong development cycle, so lots of releases in one year on labs. And we're happy to say that it has seen very strong community engagement, so very strong feedback from different parts of the community. And a couple of side notes here. With the MySQL Group Replication plug-in for 5.7.9, we have introduced a new group communication engine.
So Corosync is not needed anymore, and the full MySQL Group Replication plug-in stack is self-contained in the plug-in itself. Another interesting fact is that the plug-in now builds on multiple platforms. So it's not constrained to Linux anymore; you can build it on Solaris and FreeBSD, for instance, or OS X. Let's now have a look at what our roadmap looks like.
So what is next for replication? We will continue to build on Group Replication technology, the Group Replication plug-in itself, so rapid releases, improved performance, stability and usability. We are also very focused on improving replication usability: instrument replication further, expose more replication stats through Performance Schema tables, add simpler administrative commands and more online operations. And also, of course, performance is a big thing, so we continue to work on improving replication performance. And we want to make sure that we integrate with all the other components for orchestration, like Fabric and so on, and MySQL Router.
So if I had to split this across different areas, this is how it would look. There's MySQL replication. In the high availability corner, there's MySQL Group Replication, recoverability, automation of failover and crash recovery. So these are all areas that are very interesting to us. On the performance side, work on the multi-threaded applier, address some of the scalability pain points and also delve into some high-performance computing optimizations in the replication code.
On the ease of use side, further instrumentation for exposing stats through Performance Schema tables. Configurability, that's also an area that we need to work on. And simplifying the user interface is also a very big thing. And integration, integration is also very big, I would say. We have MySQL Fabric, Router, modularization and pluggability.
So I think these are key areas to make replication more usable. Replication is very popular in itself, but we want to make it even more popular, and modernization of the replication interface is also a very big thing for us. So just to sum it up, if I had to draw a picture of how all these components play together, I would probably draw something like this. You have your application interacting with the Router, the Router interacts with Fabric for orchestration and sharding, and you then have your highly available groups, for instance with Group Replication, where your requests are sent. On top of this, to monitor all of these things together, you can have MySQL Enterprise Monitor (MEM) or something else you already have in your infrastructure, and so on.
So let's just do a quick summary. I presented MySQL 5.7 replication: lots of new features, huge work put into this release as usual. And there are some things I would like to highlight. Semi-sync has gotten better. We have worked a lot on performance.
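As a hedged sketch of what the improved semi-sync looks like in practice, enabling it on the master in 5.7 could go roughly like this (the plug-in file name assumes a Unix build):

```sql
-- Sketch: enabling semi-synchronous replication on the master (5.7).
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = ON;
-- AFTER_SYNC waits for the slave's acknowledgment before the
-- transaction is committed in the storage engine, which is the
-- "lossless" semi-sync behavior introduced in 5.7:
SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_SYNC';
```

The corresponding `rpl_semi_sync_slave` plug-in would be installed and enabled on the slave side.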
So the slave is now faster, and there's less contention around the binary log. It's also easier to reconfigure replication: there are more online operations for reconfiguring replication. And there's also much more flexibility when fitting MySQL replication into your setups, because now we can do multi-source replication in addition to regular replication.
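To illustrate multi-source replication, here is a sketch of a slave replicating from two masters over named channels, which is the 5.7 mechanism for this. Host names, user, and channel names are hypothetical, and credentials are omitted:

```sql
-- Sketch: one slave replicating from two masters via named channels.
CHANGE MASTER TO MASTER_HOST='master1.example.com', MASTER_PORT=3306,
  MASTER_USER='repl', MASTER_AUTO_POSITION=1 FOR CHANNEL 'channel1';
CHANGE MASTER TO MASTER_HOST='master2.example.com', MASTER_PORT=3306,
  MASTER_USER='repl', MASTER_AUTO_POSITION=1 FOR CHANNEL 'channel2';

START SLAVE FOR CHANNEL 'channel1';
START SLAVE FOR CHANNEL 'channel2';
```

Each channel has its own applier and its own status, so you can stop, start, and monitor them independently.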
And I would also like to highlight the fact that we have these labs releases for Group Replication. They're out there for you to try, so send us feedback. We have a lot of detailed blogs about MySQL Group Replication. Feel free to engage with comments on those blogs, or file bug reports, or just use it and let us know your experience with it. And so, where to go from here?
Well, you can go to dev.mysql.com and download the packages and install them, play with the new version of MySQL, MySQL 5.7, set up replication and so on. You can go to labs.mysql.com; we have the Group Replication plug-in there. There's a bunch of other stuff there too, not exactly related to replication, but it might still be interesting. And you can go to dev.mysql.com/doc and check our reference manual for any detail that you might have missed and want to educate yourselves on.
And you can go to mysqlhighavailability.com and read our engineering blogs. There's a lot of interesting stuff there, low-level stuff as well. Feel free to drop a comment or two to engage with us. You are very much welcome to just hang around there if you want. So I hope you enjoyed it.
Thank you very much for attending. It was a really nice hour here with you. So I guess the time for questions is open now. Thank you.