Liquid Web’s Cloud Sites is pitched as “premium web hosting for serious developers”. It promises to run high-traffic websites reliably, and “fast.” It is also managed hosting, which means that the hosting company is responsible for system updates, server security, and system maintenance, while the customer is responsible for the application and its security. How does it stand up to scrutiny? Let’s see.
Context: among the things I do at Ubergizmo, I take care of everything related to web infrastructure and coding, and I play with different web hosting options as part of that.
Many developers and webmasters would rather focus on their web apps or websites than on making sure the server has the latest security updates, or that network latency stays within an acceptable range. Also, if the server crashes in the middle of the night, someone else is alerted and has to reboot it. Cloud Sites customers also get 24/7 technical support, either over the phone or via a chat interface. More details on that later.
As of 01/15/2020, Cloud Sites has a minimum fee of $150/mo which includes:
And if you go beyond these resources:
The notion of metered Compute Cycles (CC) is no longer shown on the homepage, but it is still present in invoices and internal metrics. $2/GB is a bit expensive if you compare it to other options, including Amazon EFS, which is also a network file system like the one Cloud Sites uses.
Unlike most other hosting plans, customers don’t pay for a resource (a server) but for the usage of that resource (metered billing), which means that customers pay for the bandwidth and computing resources they actually use. The computing resources are measured in a proprietary metric called “Compute Cycles” (CC). It’s not completely clear how it’s calculated, but it seems to be heavily based on the time spent in system or PHP processes, rather than directly linked to pure “CPU utilization”.
The exact formula has not been made public, but if you upload one large (1GB) file and have someone download it over a slow internet connection, the CC usage shoots up. Weird, since this probably doesn’t use many resources, but it does likely keep a process alive for a while.
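To make the metered model concrete, here is a minimal PHP sketch of how a monthly bill could be estimated. The $150 base fee comes from this article, as do the $2/GB figure (assumed here to be storage, per GB per month) and the $0.15 bandwidth overage (assumed to be per GB); the CC overage rate and the included quotas are purely hypothetical placeholders, since the real formula and quotas are not public.

```php
<?php
// Rough monthly bill estimator for a metered plan like Cloud Sites.
// The $150 base, $2/GB storage and $0.15/GB bandwidth figures come from the
// article; the CC rate and included quotas are placeholders, since the exact
// formula and current quotas are not public.

function estimate_monthly_bill(
    float $computeCyclesUsed,
    float $storageGb,
    float $bandwidthGb,
    float $includedCc = 10000.0,          // placeholder quota
    float $includedStorageGb = 50.0,      // placeholder quota
    float $includedBandwidthGb = 1000.0   // placeholder quota
): float {
    $base = 150.00;                                                          // minimum monthly fee
    $ccOverage        = max(0, $computeCyclesUsed - $includedCc) * 0.01;     // $/CC: placeholder rate
    $storageOverage   = max(0, $storageGb - $includedStorageGb) * 2.00;      // $2/GB (assumed per month)
    $bandwidthOverage = max(0, $bandwidthGb - $includedBandwidthGb) * 0.15;  // $0.15/GB (assumed)
    return $base + $ccOverage + $storageOverage + $bandwidthOverage;
}

// Example: 12,000 CC, 60 GB stored, 1,200 GB transferred -> 220.00
echo number_format(estimate_monthly_bill(12000, 60, 1200), 2) . "\n";
```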
Outside of weird cases, the CC computation seems to make sense for most hosting usage (static, WordPress, Drupal, custom APIs…). Do NOT use Cloud Sites for streaming purposes — use a CDN instead.
If you see something wrong, you can ask billing support to look into it. I saw something weird once, and support did find that something was off and corrected the billing. It’s also possible to ask for a CC report to see which code files used up the most resources. There is no real-time monitoring, but there’s a daily report update.
Except for bandwidth overages, which went from $0.18 to $0.15, Cloud Sites has never lowered its prices since I started looking at it (2010), but it does get hardware upgrades from time to time. It’s difficult to quantify the value of one versus the other, but in general, users have been happy if you are to believe the forums (and I agree with them). Your mileage may vary depending on your needs.
Because of the metered usage, customers have opportunities to reduce their bills by optimizing their content and code. Code that executes faster uses fewer CC units and therefore reduces your bill. This gives everyone an incentive to pay attention to resource usage and discourages (to a point) abuse of the system. Technically, there’s no “abuse” since you pay for what you use, but there are exceptions (more on that later).
Offloading your static file serving to a CDN also reduces your bill, and so does putting a Varnish proxy in front of Cloud Sites, if you can manage it. If not, use a CDN such as Cloudflare (free).
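As an illustration of the offloading idea, here is a minimal PHP sketch that sends cache headers a CDN such as Cloudflare can honor, so repeat requests are served from the edge instead of consuming Compute Cycles on the origin. The durations and the helper name are arbitrary examples, not Cloud Sites recommendations.

```php
<?php
// Minimal sketch: send cache headers that let a CDN keep a copy at the edge,
// so repeat requests never reach the origin and never consume Compute Cycles.
// Values are illustrative, not recommendations.

function send_cacheable(string $body, string $contentType, int $maxAge = 86400): void
{
    header('Content-Type: ' . $contentType);
    // public   -> shared caches (CDN) may store it
    // max-age  -> how long browsers may reuse it
    // s-maxage -> how long the CDN may reuse it (set longer here)
    header(sprintf('Cache-Control: public, max-age=%d, s-maxage=%d', $maxAge, $maxAge * 7));
    header('ETag: "' . md5($body) . '"');
    echo $body;
}

// Example: a rarely-changing JSON endpoint, cached a day in browsers and a week at the CDN.
send_cacheable(json_encode(['status' => 'ok']), 'application/json');
```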
Managed hosting platforms that charge “per visitor” can be hugely expensive and don’t offer any way to improve your billing situation by making your site less resource-intensive. In general, they are not at the top of my list, although they do serve their purpose if you don’t have the technical skills.
The main page of Cloud Sites doesn’t say much about how the infrastructure is built beyond some generalities, like “it’s fast, it scales…”.
Cloud Sites is quite interesting from a technical standpoint. It is divided into a number of load-balanced PHP/.NET clusters, each with 6-20 web nodes. It’s not clear how many clusters there are in total, but any given site runs on one cluster, and pages can be served by any node within it.
"TRUE HORIZONTAL SCALABILITY"The Cloud Sites team could scale the cluster size if there was a need for it, but I don’t know of a site that has outgrown a cluster. If you expect a traffic spike for a specific event, Cloud Sites even has a special ticket request for that – and its team may decide to beef up the servers on that day, and will keep an eye on things – for free.
Each website can access one or many MySQL databases, which can be created from the control panel (no limit on the number or size of databases). The MySQL servers seem to be very powerful and host a large number of databases.
The good side of this is that DB performance is usually excellent. It seems counter-intuitive, but I think the Cloud Sites MySQL servers often have excess computing and RAM capacity, so you benefit from a powerful DB server that is not very busy.
Since the PHP nodes are load-balanced, they need some way to share the application files, so Cloud Sites uses network-based storage, which I believe is a SAN (Storage Area Network). Again, Cloud Sites doesn’t make this official, but sources told me that it is a SAN.
This kind of networked file storage is highly unusual for web hosting in this price range (SANs are expensive!) and makes it easy to scale server nodes horizontally behind a load balancer. Web nodes have their own local SSD storage for the OS and local files.
HTTPS is supported, but without Let’s Encrypt at the moment, which means that you have to buy your certificate and then install it through the Cloud Sites graphical user interface.
Technically, Cloud Sites is “managed shared hosting”, but it has absolutely nothing to do with “shared hosting” in the traditional sense. Shared hosting is typically implemented as “putting as many sites as possible in a single hardware box”.
Cloud Sites has the exact opposite goal: each site runs on multiple boxes at any given time. Cloud Sites users have ample resources at their disposal, because everyone pays for what they use.
"CLOUD SITES HAS THE EXACT OPPOSITE GOAL OF SHARED HOSTING"Cloud Sites is like pooling the financial resources of several customers to purchase computing power that any single customer could not afford. When I tested Cloud Sites, I moved a big WordPress site to it (~300MB database), then proceeded to do run Apache Bench (AB) on it, without page caching (WPSC/W3TC) and without in-memory object caching (there’s an opcode cache though).
This would make the most basic hosting crash due to the overwhelming number of concurrent requests. Cloud Sites didn’t budge.
Related: All the details about how to install and configure WP Super Cache (WPSC)
Since the PHP load was distributed across multiple servers (PHP was executed for each AB request, and the page wasn’t cached), and the database kept the necessary data in its query cache (the AB test only floods one page with traffic), Cloud Sites could take the traffic without a hitch. I could disable all caching and sustain production traffic.
"I COULD DISABLE ALL CACHING AND SUSTAIN PRODUCTION TRAFFIC"Because websites are served from a cluster of servers, if one PHP server goes down, others will continue to serve properly. It rarely happens, but when it does, the troubles are “diluted” by the size of the cluster. If the MySQL server goes down, you will run out of luck.
The good news is that when it happens, many customers are affected and will alert support (which has its own monitoring alerts), and someone else will take care of it and get all sites back online.
In 2019, PHP performance slowed down somewhat, and a fresh WordPress installation might show uncached response times of ~400-500 milliseconds, which is slower than one would expect.
The MySQL servers feature a large amount of RAM and have no limit on database (disk) size, which is great. It is possible to instantiate many databases as well (no limit). This gets expensive fast if you host just one site with your own MySQL server(s).
Keep in mind that MySQL is not something that can be easily scaled by “adding more servers”. It’s possible to use sharding, read-only replicas and so on, but if you get to that point, it’s unlikely that you would use Cloud Sites.
In general, it’s best to have enough RAM for MySQL to fit your tables and indexes in memory, along with a healthy query cache. For mostly read-only content such as news and articles, the cache is extremely useful. In short, it’s best to have a big box your data can fit into.
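If you want to gauge how well a read-mostly workload fits in memory, the server’s own counters can tell you. Here is a minimal mysqli sketch; the credentials are placeholders, and the Qcache_* counters only exist on MySQL/MariaDB builds that still ship a query cache.

```php
<?php
// Sketch: estimate how well the read-mostly workload fits in memory, using the
// server's own counters. Hostname/credentials are placeholders.

$db = new mysqli('db-host.example.com', 'user', 'password', 'mydb');

function global_status(mysqli $db, string $name): float
{
    $res = $db->query("SHOW GLOBAL STATUS LIKE '" . $db->real_escape_string($name) . "'");
    $row = $res ? $res->fetch_row() : null;
    return $row ? (float) $row[1] : 0.0;
}

// Query cache: hits vs. SELECTs that actually had to be executed.
$qcacheHits = global_status($db, 'Qcache_hits');
$comSelect  = global_status($db, 'Com_select');
if ($qcacheHits + $comSelect > 0) {
    printf("Query cache hit ratio: %.1f%%\n", 100 * $qcacheHits / ($qcacheHits + $comSelect));
}

// InnoDB buffer pool: logical read requests vs. reads that had to go to disk.
$requests = global_status($db, 'Innodb_buffer_pool_read_requests');
$diskReads = global_status($db, 'Innodb_buffer_pool_reads');
if ($requests > 0) {
    printf("Buffer pool hit ratio: %.2f%%\n", 100 * (1 - $diskReads / $requests));
}
```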
That said, there have been a few issues (maybe one every year), and most are related to MySQL/the database. In general, the symptom is a database disconnection, but the cause varies.
The shared nature of Cloud Sites makes it difficult to know exactly what’s going on, because a MySQL server gets hit by many sites at once. For example, it could be that one PHP node lost contact with the database, and you would get disconnected intermittently while other PHP nodes worked fine.
It could be a bad MySQL neighbor that was abusing the server at the time. Once, we even had a badly configured MySQL setting (max_connections set to 5 instead of ~150), which probably happened after an upgrade or a config tweak. If you are a WordPress site owner working with tech support on MySQL performance, here are some slow-MySQL debugging tips that could help (you and tech support).
Cloud Sites got a hardware upgrade and moved to MariaDB in October 2015, and I have detected no issues with either PHP or MySQL since.
One thing you have to watch for is long-running MySQL queries. Sites with slow DB queries (~30 seconds) may end up being cut off by the load balancer’s timeout, which results in a “500 Error” at the browser level.
What happens is that the load balancer doesn’t see any data transiting to the client for more than 30 seconds and cuts the connection. I bumped into this while doing heavy DB administrative tasks, like trying to edit many items at once. The issue is officially documented, and I’m not sure how it has evolved, since I no longer bump into it.
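A small sketch like the following can help spot queries drifting toward that cutoff before the load balancer does. Without the PROCESS privilege (the normal situation on shared hosting), SHOW FULL PROCESSLIST only lists your own connections, which is exactly what you want here; the credentials and the warning threshold are placeholders.

```php
<?php
// Sketch: list this account's queries that have been running long enough to
// risk the ~30-second load-balancer timeout. Credentials are placeholders.

$db = new mysqli('db-host.example.com', 'user', 'password', 'mydb');
$warnAfterSeconds = 20; // warn well before the ~30 s cutoff

$result = $db->query('SHOW FULL PROCESSLIST');
while ($row = $result->fetch_assoc()) {
    if ($row['Command'] === 'Query' && (int) $row['Time'] >= $warnAfterSeconds) {
        printf(
            "[%ds] id=%s state=%s\n%s\n\n",
            $row['Time'],
            $row['Id'],
            $row['State'],
            $row['Info'] // the SQL text (FULL keeps it untruncated)
        );
    }
}
```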
Since we just talked about potential issues, let’s cover customer support. At present, Cloud Sites offers 24/7 chat and phone support. This is very important if you don’t want a lot of lag between interactions with the support staff.
I found the staff to be very knowledgeable of the PHP stack in general, and a few people were outright brilliant, finding issues buried very deep and going through the effort of copying the whole site to a private debugging Cloud Sites cluster for analysis.
Since I have a software engineering background, it’s easier for me to suggest possible causes, and I have always had a very courteous and productive relationship with the support staff, because I felt that they really tried their best.
If an issue goes too deep, they might bring in the SysOps team, which can investigate network latency issues (another cause of lost connectivity between PHP and the database, when a switch blows up), SAN storage performance, etc.
"IF YOU NEED APP-SPECIFIC SUPPORT, HIRE SOMEONE OR GO TO AN APP-MANAGED HOSTING"The Cloud Sites team’s main role is to support the server stack (LAMP or .NET), and not the application level (WordPress, Drupal etc). This is completely normal for most hosting companies, and it’s important for customers to understand that. If you need app-specific support, hire someone or go to an app-managed hosting.
I once helped another Cloud Sites customer who had a very deep-rooted issue: corrupted data in the wp_options table. That was causing the whole site to be extremely slow (~8-20 seconds of page generation).
It took me many hours (8-12?) in total to reproduce the issue on a test server, and come up with a fix that brought things back to ~0.3-0.5 sec. This is typically something that goes (way) beyond the call of duty for the Cloud Sites support staff, although they did try quite hard.
Since Cloud Sites is geared towards developers and agencies, most customers have the necessary skill set; if not, you can hire a contractor from time to time. I know regular web publishers who aren’t super technically savvy but still use Cloud Sites quite happily.
There used to be a Cloud Sites community forum for additional questions, but it is now defunct and has not been replaced. People could post questions or suggestions, and the Cloud Sites staff would address them whenever they could, for example about Let’s Encrypt SSL. It was more for general, non-urgent questions.
I tried to discuss improvements or answer questions there when I could. Admittedly, other hosting companies have more vibrant communities (like Cloudways’s Facebook page), but many others have nothing at all.
As powerful and exciting as Cloud Sites can be, it does have a number of caveats and tradeoffs that prospective customers need to be aware of.
For security and isolation reasons, there is no command-line access, so changing file ownership/attributes or copying/deleting a large number of files can quickly turn into a struggle. You can ask tech support to do some of these tasks for you, but it’s a bit of a hassle if you need it often. I personally had to ask for a quick favor a couple of times, and I even hacked the CRON system to launch Perl scripts to do simple things such as (large) DB dumps.
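For reference, here is a minimal PHP equivalent of that kind of CRON-driven dump, written as a sketch rather than the script I actually used: it streams a few tables to a gzipped SQL file without any shell access. The credentials, table list and output path are placeholders, and it is not a mysqldump replacement (no locking, no views, no triggers).

```php
<?php
// Minimal CRON-able sketch: dump a few tables to a gzipped SQL file without
// any shell access. Credentials, table names and output path are placeholders.

$db = new mysqli('db-host.example.com', 'user', 'password', 'mydb');
$db->set_charset('utf8mb4');

$tables = ['wp_posts', 'wp_options'];   // placeholder table list
$out = gzopen(__DIR__ . '/backup-' . date('Ymd') . '.sql.gz', 'wb9');

foreach ($tables as $table) {
    // Recreate the table structure.
    $create = $db->query("SHOW CREATE TABLE `$table`")->fetch_row();
    gzwrite($out, "DROP TABLE IF EXISTS `$table`;\n" . $create[1] . ";\n\n");

    // Stream the rows as INSERT statements, unbuffered to keep memory low.
    $rows = $db->query("SELECT * FROM `$table`", MYSQLI_USE_RESULT);
    while ($row = $rows->fetch_row()) {
        $values = array_map(function ($v) use ($db) {
            return $v === null ? 'NULL' : "'" . $db->real_escape_string($v) . "'";
        }, $row);
        gzwrite($out, "INSERT INTO `$table` VALUES (" . implode(',', $values) . ");\n");
    }
    $rows->free();
    gzwrite($out, "\n");
}
gzclose($out);
```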
It’s also impossible to use a utility like rsync if you want to reliably copy data to/from another server. This is an annoyance if you want to keep your data synced with a remote location, or if you want to move several GBs of data in or out. Doing this over FTP is not fun, to say the least. Most of your file interaction will go through FTP or SFTP (ugh).
Since 8/31/2017, Liquid Web has integrated the CodeGuard backup system into the Cloud Sites management interface. This is a paid add-on, which provides a convenient backup option for users.
CodeGuard is a remote backup system based on FTP and direct MySQL access on the customer side. From what I understand, it works similarly even when integrated with Liquid Web’s service.
It works, but it’s not ideal if you have to restore a multi-gigabyte site via FTP, at least from a remote location. The Cloud Sites team also came up with a free backup script called ZipitBackup that should work for small sites (big, multi-GB sites may hit timeout issues).
If you have a huge site that gets corrupted or compromised (through no fault of the infrastructure), you may have to ask support to delete everything, then restore it yourself via FTP… daunting.
For the same reasons that led to the lack of automated backups, there’s no easy way to “copy” all of a site’s files elsewhere, nor is there a “staging” feature to create a copy for debugging an issue on the real production cluster, with tech support able to look at it.
As a developer, this kind of thing can be quite important. Speaking of debugging, it’s not possible to install New Relic, which can be of tremendous help for issues that only happen on the production (Cloud Sites) hardware.
Since there’s no limit on the number of sites you can create, you could create an additional site at no extra cost and leave it in private mode (you are assigned a test URL). It mostly works, although it’s nowhere near as good as being able to “copy/stage” a site.
If you have a bug that only appears on Cloud Sites, there’s no easy way to debug it remotely with PHP, since the debugger is turned off for performance reasons. There’s also no way to run New Relic or something like it, because the system hosts multiple sites.
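In that situation, the workaround I would suggest is a poor man’s profiler: time the suspicious sections with microtime() and write the results to the PHP error log, which you can pull back over (S)FTP. A minimal sketch, with arbitrary section names and thresholds:

```php
<?php
// Poor man's profiler for a host where Xdebug and New Relic aren't available:
// time the interesting sections and log anything slow to the PHP error log.
// Section names and the 100 ms threshold are arbitrary examples.

$timings = [];

function timed(string $label, callable $fn, array &$timings)
{
    $start = microtime(true);
    $result = $fn();
    $timings[$label] = (microtime(true) - $start) * 1000; // milliseconds
    return $result;
}

// Example usage around the slow parts of a page:
$data = timed('load_front_page_query', function () {
    usleep(120000);          // stand-in for the real DB/API work
    return ['posts' => []];
}, $timings);

// Log anything slower than 100 ms so the log stays readable.
foreach ($timings as $label => $ms) {
    if ($ms > 100) {
        error_log(sprintf('[profile] %s took %.1f ms', $label, $ms));
    }
}
```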
When the Cloud Sites servers were in the Rackspace datacenter, it was possible to instantiate fully managed 3rd-party services such as Redis, Elasticsearch or MongoDB from vendors within the same datacenter (like ObjectRocket or mLab). As a result, you could easily get low-latency, hands-off, highly reliable appliances.
Unfortunately, the Liquid Web datacenters may or may not be close to one of those vendors, and they are certainly not on the same low-latency network. Liquid Web may be too small to be of interest to such 3rd-party services; I don’t know of a single one that will instantiate services in the LW datacenters. It’s a pity.
Cloud Sites is a great piece of technology that was created well ahead of its time. Even now, there aren’t a lot of comparable services, especially at this price and with good support. Yet, there are many things the Cloud Sites team could do to improve its service.
For example, there are no managed services such as Memcached, Redis, MongoDB or Varnish. Having an in-memory cache seems like a must-have to improve performance.
Cloud Sites could charge for these services and greatly improve its own profits because they are relatively simple to manage. All these services *need* to be hosted in the same data center as Cloud Sites to have proper latency.
If you really need Memcached, you can spin up a VPS and install it yourself. I ran some tests, and assuming that you create the VPS in the same datacenter as your Cloud Sites cluster, you can expect Memcached latency of 0.3 to 0.9 milliseconds.
This is not as fast as two instances within the same rack (~0.2 ms) or a localhost Memcached instance (<0.2 ms?), but if you cache things that would otherwise take tens of milliseconds, it seems well worth it. The cost of the VPS is relatively high compared to Linode or other barebones hosting companies, but you get very decent, and immediate, support over chat.
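For what it’s worth, here is how a round-trip figure in that range can be measured from PHP, assuming the Memcached extension is available on the web nodes and a Memcached instance runs on a VPS at a placeholder address:

```php
<?php
// Sketch: measure the average get() round-trip to a remote Memcached instance
// (on a VPS in the same datacenter). Assumes the PHP "memcached" extension is
// installed; the host address is a placeholder.

$mc = new Memcached();
$mc->addServer('10.0.0.5', 11211);   // placeholder: VPS private IP

$mc->set('latency_probe', str_repeat('x', 1024), 60); // 1 KB value, 60 s TTL

$iterations = 200;
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $mc->get('latency_probe');
}
$elapsedMs = (microtime(true) - $start) * 1000;

printf("Average get() round-trip: %.2f ms over %d calls\n", $elapsedMs / $iterations, $iterations);
```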
The ability to copy/stage sites would get people to use more storage. Again, this would be a great thing for the service’s profit, since storage is billed per GB.
And of course… backups. As of today, any issue that would require a full reinstall of a site will cause much bigger downtime and billed hours simply because it’s impossible to script/automate a restore, and doing it over FTP does take a lot of time.
The Cloud Sites admin interface is pretty old and, frankly, not the best panel ever. But it does get the job done: you can instantiate sites, create databases, create FTP users and CRON jobs, and manage domains/sub-domains. It’s not super-pretty or super-fast, but it works.
Cloud Sites is a powerful hosting platform. It supports both PHP (multiple versions, up to 7.3) and .NET, along with MySQL (MariaDB) and Microsoft SQL. That alone makes it a rarity. Try to build a load-balanced setup with at least one load balancer, two web nodes and one MySQL node, and you’ll quickly realize that Cloud Sites isn’t unfairly priced, especially since it’s managed for you.
It is one of the rare load-balanced, reliable, scalable (up to the size of a cluster) platforms on the market that you can get into for $150, with a known pricing structure that avoids bad surprises as much as possible.
For LAMP, there are a few other scalable platforms with fancier features (like Pantheon.io?), but I doubt they would be anywhere near as cost-effective as Cloud Sites. Other scalable managed hosting pricing structures are also more opaque (charging “per visitor”) and don’t offer an opportunity to reduce your bill by optimizing your consumption.
At the end of the day, Cloud Sites has great potential for pros who want reliable web hosting but don’t want to manage the LAMP stack. The unlimited number of domains per account is ideal for agencies, developers and “pro” publishers with good traffic. If you can get by with a $15-$50/mo box, go for it, there’s nothing wrong with that. Cloud Sites is not for everyone.
Although not perfect, Cloud Sites is a very reliable hosting solution, one that offers true managed horizontal scaling, a powerful MySQL database, and 24/7 support that handles system-level security and the low-level software stack, at a starting price of $150.
In 2016, Cloud Sites was acquired from Rackspace by Liquid Web, and very little has happened since, except for incremental software version improvements. Nothing major is scheduled for the platform as far as we know.