Are MySQL replications as smooth as you think they are? | Hacker Noon


Piyush Badkul (@piyushbadkul)

Attempting to reuse the wheel instead of reinventing it.

If you just want the summary of this article, skim the bold and highlighted parts of the blog.

A database can be thought of as an electronic library, with tons of data flowing into storage every second. Data is one of the most important assets a company possesses, and it contributes to the company's valuation too. Data is literally the most important thing the world can witness now; it is even said that wars are won or lost on the accuracy of data. So it feels only right that, before actually using or implementing a database, we take a moment to understand how it will store our data, how it performs, and where its flaws lie, among other things.


Timmy is the technical manager at a multinational company, where one of his responsibilities is deciding which software should be used for a particular task, whenever one arises.

At the moment, his concern is choosing a database for their existing world-class software so that it can maintain certain subscription states. Many options appear before Timmy: MySQL, MongoDB, NoSQL stores, MariaDB, and others.


MySQL catches Timmy's attention the most, as it is the most widely used and is easy to learn and integrate. Moreover, he knows that his database will never see huge traffic, and MySQL is stable as well as reliable. The software Timmy wants MySQL to integrate with is offered in the field as a High Availability Framework (HAF).

Whenever we deploy a large framework or piece of software that customers use constantly, and where halting it for even a couple of seconds would mean a huge revenue loss, that software is often run as an HAF. This means two identical copies are started at the same time: one is the primary copy, and the other is a standby or backup copy. God forbid anything happens to the primary copy, the backup copy takes its place and handles the traffic.

If the geographic area is risky, we go for two redundancies: a primary redundancy (PR) and a disaster redundancy (DR). Two copies of the software (both active and backup) run at each location, so in case of an earthquake or any infrastructure failure, the DR site, sitting in a totally different location, comes up and the services remain unharmed.

Both copies of the software exchange heartbeats and communicate over TCP. If the heartbeat of one copy stops, the other one comes up. As simple as that. State is constantly shared between the two copies through whatever mechanism is available (TCP, UDP, WebSocket, or any other).
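The heartbeat logic above can be sketched in a few lines. This is purely illustrative (real HA frameworks are far more involved, and the names and timeout value here are my own assumptions): the backup tracks when it last heard from the primary and promotes itself only once the primary goes silent for too long.

```python
HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before the peer is declared dead (assumed value)

def peer_is_alive(last_beat: float, now: float,
                  timeout: float = HEARTBEAT_TIMEOUT) -> bool:
    """True if the peer's last heartbeat arrived recently enough."""
    return (now - last_beat) <= timeout

def should_promote(last_beat: float, now: float) -> bool:
    """The backup promotes itself only once the primary's heartbeat stops."""
    return not peer_is_alive(last_beat, now)
```

With a 3-second timeout, a heartbeat seen 2 seconds ago keeps the backup waiting, while 10 seconds of silence triggers promotion.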

[Image from Unsplash]

It's just like this image from Unsplash: you see only one instance, but in reality there are two instances working simultaneously to provide you with seamless service. The copy visible to the user is an abstraction over the two copies (active and backup), which are always in sync with each other.

MySQL also works in the same way. We can use one copy as a master and the other as a slave, or both copies as masters; it depends solely on the requirements.

But there is one flaw that I feel very few people are aware of.

When running as a standalone copy, if we execute a command and it fails (perhaps because MySQL is overloaded, a timeout occurs, resources are lacking, or for some other unforeseen reason), MySQL will reattempt the query multiple times until it gets it done. So, for the standalone copy, we can say that MySQL's consistency is good.

But when running MySQL in an HAF, changes on one copy of MySQL need to be reflected onto the other copy, for which the master MySQL executes a command against the slave MySQL.

If that query fails because the slave is busy with other things, the master MySQL will not retry. That's it: the query specifically meant to replicate your data to the other blade is attempted exactly once. Moreover, the connection is severed, so future queries will also fail. The only solution is to restart both copies of MySQL.
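One way to catch this broken state early, rather than hunting for the failure in your application, is to monitor the replica's status. The status field names below (`Slave_IO_Running`, `Slave_SQL_Running`, `Last_SQL_Error`) come from MySQL's `SHOW SLAVE STATUS` output; the helper itself is a hypothetical sketch that inspects one such row, already fetched as a dict.

```python
def replication_is_broken(status: dict) -> bool:
    """Inspect one row of SHOW SLAVE STATUS (as a dict) and report
    whether the replica has stopped replicating."""
    return (status.get("Slave_IO_Running") != "Yes"
            or status.get("Slave_SQL_Running") != "Yes"
            or bool(status.get("Last_SQL_Error")))
```

An alerting job could run this periodically and page you (or trigger the restart) the moment either replication thread stops, instead of leaving you to discover the severed connection from failing application queries.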

It appears to be an ordinary issue, but the resources and revenue lost to it are not worth wasting. Moreover, chances are you will be combing your application for the failure when the issue actually originated in MySQL. I feel it is better to spend the time building something better than on unproductive issues like this.

If your requirements are the same as Timmy's, it would be better to avoid MySQL and go for something better suited.

Please note that this issue has been observed with the free version of MySQL.
