The joy of the cloud is that it makes it super easy to respond to changing demand.
Say you write a web app. And it’s really good! In just a few weeks, you go from 10 users to 100,000. In the bad old days, your app would have promptly fallen over: you would have had to buy new hardware, and your sysadmins would have had to install and configure it before your site came back up.
These days, scaling up your server from a 2GHz processor to a 20GHz processor, or from 1GB of RAM to 8GB, is as simple as using an ElasticHosts calculator. But what if you want to go to 10 million users? There’s still a limit on the size of a single server. To build a really high-traffic web application (and, of course, to handle potential hardware or software failure safely) you’ll still need to spread your app across a number of servers.
The good news is that the cloud makes this much easier too. Not only can you set up your high-traffic cluster as a series of virtual machines, managing the whole process from one command line, but you can also connect them over a private VLAN and scale each element of the cluster up and down as you need.
This new tutorial series walks through our recommended techniques for building a huge, high-traffic, redundant LAMP web application in the cloud. We have the following tutorials:
- LAMP Tutorials (1/6): Set Up a LAMP Stack on a Cloud Server
- LAMP Tutorials (2/6): Move MySQL to a Separate Cloud Database Server
- LAMP Tutorials (3/6): Create a Second MySQL Cloud Database Server
- LAMP Tutorials (4/6): Add a Second Cloud Web Server with Round-Robin DNS Load Balancing
- LAMP Tutorials (5/6): Add a Front-End Apache Cloud Load Balancer
- LAMP Tutorials (6/6): Add a Second HA Cloud Load Balancer
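As a taste of what tutorial 4 covers: round-robin DNS works by publishing several A records for the same hostname, so that resolvers rotate through them and requests are spread across your web servers. A minimal BIND-style zone fragment might look like this (the hostnames and IP addresses are purely illustrative):

```
; Two A records for the same name: resolvers rotate between them,
; spreading traffic across both web servers.
www   300   IN   A   10.0.0.10   ; web server 1
www   300   IN   A   10.0.0.11   ; web server 2
```

A short TTL (here 300 seconds) means a failed server can be pulled out of rotation reasonably quickly, though round-robin DNS alone does no health checking, which is why the later tutorials add a proper load balancer in front.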
If you work your way through this series, you’ll have a completely redundant stack, balancing web traffic across two web servers and two database servers, with failover for the load balancer, web servers and database servers alike. Pretty cool, eh?
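For instance, the front-end balancer from tutorial 5 can be built with Apache’s mod_proxy_balancer. A minimal sketch of the idea (the back-end IP addresses are illustrative, and it assumes mod_proxy, mod_proxy_http and mod_proxy_balancer are loaded) might look like:

```
# Define a cluster of back-end web servers on the private VLAN
<Proxy "balancer://webcluster">
    BalancerMember "http://10.0.0.10:80"
    BalancerMember "http://10.0.0.11:80"
</Proxy>

# Forward all incoming requests to the cluster, and rewrite
# back-end response headers so redirects point at the balancer
ProxyPass        "/" "balancer://webcluster/"
ProxyPassReverse "/" "balancer://webcluster/"
```

If one BalancerMember stops responding, Apache marks it as failed and routes traffic to the remaining member, which is the failover behaviour the series builds towards.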
New to ElasticHosts?
To create a Free 5-Day Trial account, click here: Free Trial.