When your WordPress site starts receiving a lot of traffic, or you run WooCommerce and need it to be available at all times to avoid problems with your hosting, it is very likely that you need to set up WordPress in a High Availability system.
Some time ago I explained the different types of hosting that can be used for WordPress, and building on that you can create High Availability systems. This means having your website on several servers distributed in various parts of the planet (usually 2-3 locations), all of them able to respond identically to every request.
For this we need quite a few elements before setting up WordPress itself, all focused on the infrastructure and its configuration. What I am going to explain is one of many possible options; it is simply an idea of what can be built and how.
To begin with, we need at least one centralized control system. In this case we will use a system such as Rancher to manage the infrastructure and its containers. From it we can create and scale the infrastructure, manage its contents, and handle part of the operations at critical moments.
On the other hand, we will need several VPS distributed across several locations. For example, we could set up one VPS in Barcelona and another in Madrid, giving us a High Availability system focused on Spain. It can also be set up internationally; the limits are wherever you want to put them. In principle you can use a VPS with any amount of resources, although given what will run on it, for a simple website a machine with 1 CPU, 2 GB of RAM and 20 GB of SSD is highly recommended, and for a somewhat more advanced project at least 2 CPU, 4 GB of RAM and 40 GB of SSD. Again, resources depend on the project (or projects) to be hosted; after all, infrastructure today is quite cheap. You can even give the main infrastructure one set of resources and the rest another.
What will each of these VPS contain? In principle, 4 containers that can be built, for example, with Docker. One of them would hold the database with MariaDB (or MySQL or Percona). Another could run ProxySQL for query management. A third would be Redis for the object cache. Finally, we would have the web server, with nginx and PHP. This is the base; other systems could be added on top.
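As a rough sketch, the four containers on each VPS could be declared in a single docker-compose.yml. Everything here (image tags, credentials, paths) is a placeholder for illustration, not a tested production configuration:

```yaml
# Hypothetical docker-compose.yml for one VPS node; versions and
# credentials are illustrative, adjust them to your own project.
version: "3.8"
services:
  db:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: changeme        # placeholder credential
    volumes:
      - ./data/mariadb:/var/lib/mysql      # database data stays on the local disk
  proxysql:
    image: proxysql/proxysql:2.5
    depends_on: [db]
  redis:
    image: redis:7
    volumes:
      - ./data/redis:/data                 # local object-cache data
  php:
    image: wordpress:php8.2-fpm            # PHP-FPM with WordPress
    depends_on: [proxysql, redis]
  web:
    image: nginx:1.25                      # nginx sits in front of PHP-FPM
    ports:
      - "80:80"
    depends_on: [php]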
Another element to keep in mind is where the data is stored. In this case the databases will store their information locally, so MariaDB and Redis will keep their data on the local disks of each VPS. For the web server's data (the websites themselves) we can set up a distributed file system with GlusterFS, which means, for example, that when an image is uploaded to WordPress it is automatically available in all locations.
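As an illustration of the GlusterFS part, a two-node replicated volume for the WordPress files could be provisioned roughly like this. The hostnames (vps-bcn, vps-mad) and brick paths are assumptions; treat this as a sketch, not a hardened setup:

```shell
# Hypothetical provisioning of a 2-node replicated volume.
# Run from vps-bcn; vps-mad is the placeholder name of the second node.
gluster peer probe vps-mad
gluster volume create wpdata replica 2 \
    vps-bcn:/bricks/wpdata vps-mad:/bricks/wpdata
gluster volume start wpdata

# On each node, mount the volume where the web container expects it,
# so uploads replicate automatically to the other location:
mount -t glusterfs localhost:/wpdata /var/www/html/wp-content
```

Note that a plain replica-2 volume is exposed to split-brain; in a real deployment an arbiter brick on a third node is the usual mitigation.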
For the MariaDB data we will use a master-slave setup. Depending on the infrastructure and its connectivity you could also set up a master-master system; everything depends on the capacity of the infrastructure and the needs of the project. For query management, ProxySQL will allow SELECT queries (usually 90%-95% of queries in WordPress) to run on any of the machines, while INSERT or UPDATE queries go only to the main database. This part is quite variable, but this setup is simple and functional.
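ProxySQL is configured through its admin interface with SQL statements. A minimal sketch of the read/write split described above could look like this; the hostgroup numbers (10 for the master, 20 for all nodes) are an assumption of this example:

```sql
-- Hypothetical ProxySQL query rules:
--   hostgroup 10 = master (writes), hostgroup 20 = any node (reads).
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),  -- locking reads must hit the master
       (2, 1, '^SELECT', 20, 1);              -- plain SELECTs can run anywhere

-- Anything not matched (INSERT, UPDATE, DELETE...) falls through to the
-- user's default hostgroup, which here would be the master (10).
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```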
The object cache in Redis would work in local mode. It could also be run in cluster mode, although the advantages of doing so would have to be weighed, since they depend on how traffic is distributed across nodes.
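Assuming the Redis Object Cache plugin is used on the WordPress side, keeping the cache local is just a matter of pointing each node at its own Redis container, for example:

```php
<?php
// Hypothetical wp-config.php fragment, assuming the Redis Object Cache
// plugin: each node talks only to its own local Redis instance.
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
```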
For traffic distribution we can use the advanced features of Route 53 at the DNS level, which let the system decide which infrastructure each request is sent to, and even detect when one of the nodes is down and divert its traffic automatically.
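In Route 53 this is typically done by combining health checks with a failover (or latency) routing policy. As a sketch, the primary record for the Barcelona node could be defined with a change batch like the following, applied with `aws route53 change-resource-record-sets`; the domain, IP, and health-check ID are placeholders:

```json
{
  "Comment": "Hypothetical failover record for the Barcelona node",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "SetIdentifier": "bcn-primary",
      "Failover": "PRIMARY",
      "TTL": 60,
      "HealthCheckId": "11111111-2222-3333-4444-555555555555",
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}
```

The Madrid node would get an equivalent record with `"Failover": "SECONDARY"`; when the primary's health check fails, Route 53 starts answering with the secondary's address.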
Another element to take into account is scheduled tasks (cron jobs). To prevent them from running on all machines and duplicating work, a higher-level system decides on which infrastructure to run them. Ideally they should run on the main machine, the one handling the INSERTs into the database, to reduce latency.
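In practice this usually means disabling WordPress's request-triggered cron on every node and installing a real cron entry only on the main one. A sketch, with paths as assumptions:

```shell
# In wp-config.php on ALL nodes, stop WordPress from firing cron on page loads:
#   define( 'DISABLE_WP_CRON', true );

# Crontab entry installed ONLY on the primary node (the one taking the
# INSERTs), running due events every 5 minutes via WP-CLI:
*/5 * * * * wp cron event run --due-now --path=/var/www/html >/dev/null 2>&1
```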
With this system, the entire infrastructure is able to respond at all times: if any node fails, within a couple of minutes the system automatically manages the incident and maintains the service, and the affected node can later be recovered by the team responsible for it, bearing in mind that the system is running in incident mode in the meantime.
In the time we have been running various tests with this system, scalability has proved very solid, and in cases where there was an outage in one of the data centers, traffic was automatically distributed across the remaining nodes, and the failed location could later be recovered thanks to the data held at the other points.
About this document
This document is governed by the EUPL v1.2 license, published in WP SysAdmin and created by Javier Casares. Please, if you use this content on your website, in your presentation, or in any material you distribute, remember to mention this site or its author, and to place the material you create under the EUPL license.