TEN7 and the Linode Cloud
TEN7 has selected Linode to expand its cloud hosting. Listen in and discover how the decision was made, and why we're thrilled with the choice.
- What Linode is
- How Linode compares to other web hosting environments
- Why TEN7 chose Linode to expand its infrastructure
- TEN7's approach to experimenting with Linode
- How Ansible fits in with Linode
- Linode services in use at TEN7
- Linode's performance
- How Linode has changed delivery of Drupal support services to TEN7 clients
IVAN STEGIC: Hello and welcome to another episode of the TEN7 Audiocast. Today, we’ll be exploring our hosting infrastructure at Linode. I’m Ivan Stegic, TEN7’s Founder and President. Joining me today is Tess Flynn, our DevOps Engineer. Yo, Tess!
TESS FLYNN: Hello!
IVAN: So, Linode. Linux. How do you pronounce this thing?
TESS FLYNN: I think you’re supposed to pronounce it Linode [long ‘i’], because if it gets too close to Linux [which has a short ‘i’], it might run afoul of some legal things, so they probably call it Linode. It sounds nicer as Linode anyway.
IVAN: I agree. It does sound nicer. Maybe you could give us a description of what Linode actually is.
TESS: So, naturally, when you’re thinking of getting any kind of cloud infrastructure provider, the two big ones in the room are gonna be Google and AWS, and I’ve always liked Linode because it provides a virtual private hosting service. It’s very straightforward, it has very clear pricing tiers, and it’s really good for the kind of stuff I like running my own projects on. I thought it was a really good choice for when we started thinking about expanding our hosting options as well.
IVAN: You described it as virtual private hosting. What did you mean by that?
TESS: You can tell I spent about 5 years doing cloud infrastructure stuff, because I sound like a marketer. Alright, so a lot of hosting, really commodity, everyone-uses-it kind of hosting, is usually called shared hosting. And usually what that means is there’s some big server out there, somewhere in the world, and you have one account on it; and in that account, you’re going to host your site. The thing is that everyone is sharing the same resources, the same tier, the same equipment on that same server. The problem is that you’re locked into basically being a user. You have no privileges to update the operating system, install additional utilities, anything like that. So a few years ago, it used to be called co-located and then rented servers, and then custom servers, and then a few other things; and eventually, we arrived at the virtual private server, which is what we have now. What that basically means is instead of having one server where everyone is just a user on it, it’s a server that’s running some sort of virtualization system. This could be any number of different systems, typically something like Xen or LXC (Linux Containers). Some services will also use something like vSphere from VMware. Those are just different options. But what all of those do is take a big piece of hardware and chop it up into little pieces of hardware, and then you can give those little pieces of hardware free rein to do whatever they want within the amount of resources you allocate to that virtual server. And that’s really what Linode does. It has several data centers, it takes some of those servers and chops them up into littler servers, and then it gives you free access to do whatever you want with one. You get the root login, you can install software, you can change anything you want.
IVAN: And besides giving you more control over your own slice of that server, doesn’t it also insulate you from the other users on the shared server we were talking about, where you’re just a user? Because another user might have a traffic spike, or might be hacked, compromising the whole server. Doesn’t this also insulate you?
TESS: It does. There are utilities on shared hosting that try to mitigate that as well. But the key benefit of virtualization is that it lets you control all of those aspects, and for all intents and purposes, you are running on your own piece of hardware with your own allotted quantity of guaranteed memory and guaranteed CPU.
IVAN: So how does that compare to, you know, the two elephants in the room that you mentioned earlier? How is that different from AWS and Google Cloud? And there are other competitors like Rackspace and so on.
TESS: So, whenever you’re talking cloud hosting, no matter what kind of hosting you’re doing, web hosting, infrastructure hosting, whatever, the two big gorillas in the room, so to say, are Amazon Web Services (AWS) and Google Cloud. And they both provide more or less the same service. In cloud lingo, we would call them an IaaS, or Infrastructure as a Service, provider. Those provide basically raw hardware that you can do whatever you want with. The downside is that it comes at a cost of complexity. It puts the burden of complexity on the user, because AWS and Google don’t know what you’re going to do with it. Are you going to run a PeopleSoft application, are you going to run web hosting, are you going to run a Hadoop cluster? What are you going to do with it? They don’t know, so they’re going to give you all of the ability to manage it down to the smallest granular level. And this makes it very difficult to get into these particular hosting environments, because there’s a huge technological learning curve that makes it very difficult to just get started. Sure, you can spin up an individual instance from an AMI on AWS in a few minutes. Sure, you can probably bumble your way through that. But trying to set up, say, a cluster gets a little complicated, and you might spend more than enough time with your forehead in contact with your desk.
IVAN: And Linode provides you with kind of a starting point, doesn’t it? It’s not a general purpose IAAS. It kind of assumes you’re going to do web hosting, right?
TESS: Right. It’s a little bit more than an IaaS and a little bit less than a platform as a service (PaaS) provider. The difference is that with a platform as a service provider, typically, you have an API, you have a particular way that you interact with the server, and that’s it. A good example of a platform as a service is Engine Yard, or, honestly, Acquia Cloud is definitely a PaaS, because all you really do is push your files up to their server with git, and they take care of all the details for you. So, Linode is a step back from that. Instead of assuming exactly what your application is, it still gives you all the infrastructure stuff that you could ever want. But it has a slant: it assumes that what you probably want is to do web hosting, and that you probably don’t need any fancy VPNs, or private networks, or anything like that to connect all of your individual servers, and you don’t need granular security control. It assumes that what you want is a Linux server somewhere that you pay money for monthly, and a public IP address. That reduces the complexity a huge amount, because now all you really have to say is, “I need a web server,” or “I need 4 different servers that will become, in essence, a singular web server.” You go and allocate 4 of those, you get a public IP address for each, and there you go, you start putting it together at that point.
IVAN: And that’s so useful to us and to those people who don’t need the complexity and the deep flexibility that something like AWS ends up offering you. So we chose Linode at TEN7. We didn’t arbitrarily choose it. There were different service providers that we looked at; DigitalOcean was one of them, AWS was another. Would you speak to the top few reasons we chose Linode, and why it rose above the others?
TESS: Well, in addition to it being kind of a web-hosting-slanted infrastructure as a service provider, it’s the one I have the most experience with. Which means that it was a lot easier for me to get started on it; I knew where all the tools were and knew how to interact with it. DigitalOcean would’ve worked as well, but there’s definitely one thing with Linode that I’m really looking forward to in the future. They are going to provide a Python-based API, which in the future will let us do even more interesting stuff with auto-allocation when we connect that up with Ansible.
IVAN: Oh man, I’m getting excited to talk about that, but let’s save that integration talk for just a little bit here. So the API was part of what drew us to it, in addition to the fact that you had familiarity with it. We discounted the others because of their complexity, and now we’re experimenting with it a little more than we have in the past. What has your approach been to starting the experimental work at Linode? And you can maybe talk about Ansible as well, there.
TESS: So I initially was hoping to do the whole auto-allocation thing from the start. But, unfortunately, that Python API is still in beta. There are still a lot of rough edges, so I decided to hold off on that for now, and worry about the important thing, which is managing and installing the software and the configurations.
IVAN: Does that mean that, using code, you would actually select which servers you want from Linode, and then build them up without having to go through the process in their UI, in their configuration panel? Basically do all of that from code. Is that what you mean?
TESS: That is exactly what I mean. Right now, the workflow is that you have to go to their dashboard, which is a web UI; you select the service tier that you want, you allocate the server, you set the root password, and you specify which base image you want. Linode provides several, like Arch, Debian, Ubuntu, I think CentOS is in there as well, and a few others. So you select what operating system you want on there, and after that it boots up, and then you can go in and start running your configurations. Right now, all of that has to be done manually. We have to basically build a beachhead. Once the beachhead is built, then we can connect that up with Ansible and do all the server configuration. What the Python API will grant us in the future is the ability to just go into a text file and specify, “I need 4 servers, they have this service tier, I want them to have these kinds of IP addresses, public IP addresses, private IP addresses, they run this operating system. Go and allocate it for me.”
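As an aside for readers: the sketch below is a hypothetical illustration of the "describe your servers in a text file" workflow Tess describes. The spec format, role names, service-tier strings, and image names are all invented for the example, and the actual API call appears only as a comment, since the Python API was still in beta at the time of this conversation.

```python
# Hypothetical sketch: expand a declarative server spec into one
# allocation request per server, the way a future "go allocate it
# for me" workflow might. All names and tiers here are illustrative.

def allocation_requests(spec):
    """Expand {"count": N, ...} entries into one request dict per server."""
    requests = []
    for entry in spec:
        for i in range(entry["count"]):
            requests.append({
                "label": f"{entry['role']}-{i + 1}",
                "type": entry["type"],      # service tier (illustrative name)
                "region": entry["region"],  # data center
                "image": entry["image"],    # base operating system image
            })
    return requests

# The "text file" part: a declarative description of the cluster.
spec = [
    {"role": "haproxy", "count": 1, "type": "standard-1",
     "region": "us-central", "image": "debian"},
    {"role": "webnode", "count": 2, "type": "standard-4",
     "region": "us-central", "image": "debian"},
    {"role": "db", "count": 1, "type": "standard-4",
     "region": "us-central", "image": "debian"},
]

for req in allocation_requests(spec):
    # With a real API client, each request dict would become an
    # instance-creation call against Linode's API instead of a print.
    print(req["label"], req["type"], req["region"])
```

Once something like this works, the beachhead step goes away: the same run that allocates the servers can hand their addresses straight to Ansible.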
IVAN: It sounds an awful lot like Docker.
TESS: It can be. I’m thinking this is a little bit more like Kubernetes, honestly.
IVAN: So we don’t have this auto-allocation in place. We’ve configured servers at Linode. Do you wanna talk about kind of what the workflow is now, and how we’re using Ansible and what our approach is right now?
TESS: Right now, how this works is that once we build the beachhead, we go into our git repository, which is a GitLab instance. We create a new repository which represents that server set. It’s a functional grouping of servers. If it’s, say, a production server, it might be multiple individual servers. If it’s a development server, it’s typically one really big server that hosts absolutely everything. What happens is that we add the playbooks to that repository. We have GitLab configured with a runner. That runner connects to one of those servers on Linode’s cloud and then starts executing all of the Ansible code. We rely very heavily on Ansible roles, which are provided by galaxy.ansible.com, in order to quickly configure all of the different software that we need, so that we’re not writing a bunch of custom code for each one. We’re relying on currently well-supported, open-source software to go and configure all of our stuff.
IVAN: The services, and how the servers themselves are set up for use: we spent a fair amount of time thinking about how we’re going to configure these servers in relation to each other, and how they might be different from things we’ve done in the past. In the past we’ve had a single monolithic cluster of machines that services many clients. Our approach now is to be a little more stratified, right? So, a group of machines per client or per function. Do you want to talk about the vertical and horizontal scaling that we’ve discussed, and what our approach is with, for example, the new TEN7 site at Linode?
TESS: So one of the big things with the hosting that we have currently is that it’s set up more or less like a shared-hosting environment, and that’s perfectly fine and perfectly workable. But we could do better. When we leverage a cloud environment, we can actually utilize the inherent isolation that a virtual private server system provides to give more isolation per client. Thus, everyone gets more dedicated CPU, more isolation. And if there’s a single server problem somewhere (because, as we all know, with anything computers, there’s always going to be a problem somewhere), it affects fewer clients, because each client is running on several smaller systems dedicated to them, instead of sharing with several other clients.
IVAN: And what about the Stack?
TESS: So we spent a lot of time thinking about the Stack. The Stack is one that I spent many, many, many an afternoon just kind of sitting here going, “Hmm, how the heck am I going to build that?” Eventually, after a lot of research and a lot of thinking, it came down to: we need to look at the price list, and we need to see what kind of resources we need, or what kind of scaling knobs we really want to have. So imagine that your infrastructure is like a big Star Trek console, and you have so many different knobs in front of you: “Oh, I need more power, Mr. Sulu.” You have to go and scale that up somewhere. So you have to figure out what those knobs are, and how you want to be able to control them. What ended up happening is we ended up with a 4-server system. The very first server is an HAProxy server, and the reason why we have HAProxy, as opposed to, say, NGINX, is that NGINX does kind of have a little bit of a licensing thing, where if you want to do SSL termination and load balancing, you need to pay them a little bit more money... And we like open-source at TEN7.
IVAN: We sure do!
TESS: We’d rely exclusively on open-source if we could. So, HAProxy is well-supported, used by a lot of companies, and it’s completely open-source. So we decided, let’s use HAProxy. So the first node is an HAProxy node. Now if we jump all the way to the other end, we have a database node which is running MariaDB, a fork of MySQL. It’s another distribution that is completely MySQL-compatible.
IVAN: And also, different in licensing from MySQL. Again, trying to keep us on the side of open and shareable, and not in the hands and clutches of licensing that may change, like the MySQL and Oracle licensing.
TESS: Exactly. So for those two systems you might think, “Well, are you going to scale those out with multiple servers?” We were actually thinking of not doing that. You can scale vertically, or you can scale horizontally out with multiple servers, with both HAProxy and also MySQL, but it’s far easier with Linode to scale those vertically. Which means just making the server bigger, throwing more resources at those servers.
IVAN: So vertical scaling means adding resources. Horizontal scaling means adding additional copies of the servers themselves.
TESS: More servers versus a bigger server.
IVAN: Got it.
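For readers, the two knobs can be sketched in a few lines of Python. The numbers, field names, and roles here are made up purely for illustration; they are not Linode tiers or TEN7 configuration.

```python
# Toy illustration of the two scaling knobs:
# vertical = a bigger server, horizontal = more servers.

def vertical_scale(server, extra_ram_gb):
    """Grow a single server in place (returns a new dict, original untouched)."""
    bigger = dict(server)
    bigger["ram_gb"] += extra_ram_gb
    return bigger

def horizontal_scale(servers, count):
    """Add identical copies of the first server in the group."""
    template = servers[0]
    return servers + [dict(template) for _ in range(count)]

db = {"role": "database", "ram_gb": 8}
print(vertical_scale(db, 8))            # one bigger database server

web = [{"role": "webnode", "ram_gb": 4}]
print(len(horizontal_scale(web, 2)))    # three identical webnodes
```

The design decision in the conversation maps directly onto this: HAProxy and the database get the `vertical_scale` treatment, while the Varnish/webnode pairs get `horizontal_scale`.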
TESS: So that handles both ends of the hosting. The middle part is a little bit different. Now, you might wonder why we don’t have just one node in the middle. Well, we have a number of different services we need to support. We want Varnish for caching. We want Memcache for database caching. We need Apache somewhere in there in order to do HTTP hosting, and then we’re also going to need PHP. So the way that this started working out is there was a big debate over whether we wanted to make one slightly bigger server that could support all four of those. But in the end it was actually easier to make two servers. So we have one server which is just a Varnish server, and this is entirely dependent on the kind of resource allocation and price tables that Linode has. Because of the particular way that they allocate their resources, the amount of memory they allocate per price tier, it made more sense for us to put Varnish on its own individual node and give it as much memory as possible. Then, the next node below that is the webnode, and that one has Apache, PHP-FPM, and then Memcache. So that server has a lot of services on it, but you have to go through HAProxy, then Varnish, and then to that node to get to it. And the database node is likewise insulated, because we have Memcache, so any repeated queries go to the cache instead of directly to the database each time. Now, this Varnish/webnode pair of servers is horizontally scalable. We can actually add more pairs of those to the cluster as necessary, and then we share the file system between them using GlusterFS.
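The request path through the Stack can be sketched as a toy model for readers. The caching behavior is heavily simplified, and the function names, URLs, and data here are invented for illustration; real HAProxy, Varnish, and Memcache obviously do far more.

```python
# Toy model of the request path: HAProxy terminates SSL and balances,
# Varnish caches whole pages, and the webnode consults Memcache before
# hitting the database for repeated queries. Names are illustrative.

def handle_request(url, varnish_cache, memcache, database):
    # 1. HAProxy: pick a Varnish/webnode pair (only one pair in this toy).
    # 2. Varnish: serve a cached page if one exists.
    if url in varnish_cache:
        return varnish_cache[url], "varnish"
    # 3. Webnode (Apache + PHP-FPM): build the page, checking Memcache
    #    before the database for the query result.
    query = f"SELECT page FROM pages WHERE url = '{url}'"
    if query in memcache:
        page, source = memcache[query], "memcache"
    else:
        page = database[url]
        memcache[query] = page       # warm the query cache
        source = "database"
    varnish_cache[url] = page        # warm the page cache
    return page, source

varnish, mc = {}, {}
db = {"/about": "<html>About TEN7</html>"}
print(handle_request("/about", varnish, mc, db))  # first hit reaches the database
print(handle_request("/about", varnish, mc, db))  # second hit served from Varnish
```

This is also why the database node stays small relative to the Varnish node: in steady state, most requests never get past the caches.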
IVAN: So that was another design decision we made: not to use something like an NFS-mounted separate node, but rather to have the file system on each of the webnodes, and then do some sort of intelligent synchronization between them.
TESS: That’s correct, and the reason why is that with NFS, you introduce another single point of failure. Whereas with GlusterFS, if one node goes down, the rest will still keep running.
IVAN: How does that affect the deployment to the webnodes, and maybe you could talk about kind of one of the issues we ran into with Gluster.
TESS: So, the problem with Gluster is that it’s not very good at sharing small files, and if you work with Drupal sites, what is it mostly made out of?
IVAN: Small files.
TESS: Little tiny files. Our initial tests to make this work were... less than satisfactory, we could just say. After a while, we managed to get the performance a little bit better by switching to the most recent version of GlusterFS, which has a number of different performance tweaks for small files. We enabled caching mechanisms so that every individual server maintains its own copy of the files and doesn’t ping the network every time. By doing that, we got a bit better. In the end, we decided that the best thing we could do is to host the individual files directory on GlusterFS, but not the rest of the web code. So all of Drupal core, all of your modules, all of the themes, none of that gets shared. Only the files directory. And the reason why is that whenever we do a deploy, we run Ansible on the HAProxy node. The HAProxy node will communicate with every individual webnode and redeploy the site to each one of those in parallel. And this is actually very easy to do, because with Ansible, you just add another entry to the inventory file and away you go. So we do that, each one of those now has its own perfect copy of the website code, and then the individual files directory is shared between all of them, so everyone’s in sync. They all utilize the same vertically scalable database system, so that’s all in sync. So all of that works.
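The "just add another entry to the inventory file" step can be sketched like this for readers. The hostnames and group names are invented; in a real setup the hosts would simply live in a static Ansible inventory file rather than be generated by a script.

```python
# Illustrative sketch: horizontally scaling the Varnish/webnode pair is
# just another line in the inventory. Hostnames here are made up.

def build_inventory(webnodes, varnish_nodes,
                    haproxy="lb1.example.com", db="db1.example.com"):
    """Render a minimal INI-style Ansible inventory for the cluster."""
    lines = ["[haproxy]", haproxy, "", "[varnish]"]
    lines += varnish_nodes
    lines += ["", "[webnodes]"]
    lines += webnodes
    lines += ["", "[database]", db]
    return "\n".join(lines)

# Adding a second Varnish/webnode pair is one extra entry per group.
inventory = build_inventory(
    webnodes=["web1.example.com", "web2.example.com"],
    varnish_nodes=["varnish1.example.com", "varnish2.example.com"],
)
print(inventory)
```

A playbook run against the `webnodes` group then deploys the site code to every listed host in parallel, which is exactly the workflow described above.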
IVAN: I love the solution. The HAProxy node that we decided would be part of our Stack really functions as a load balancer. Right now it’s balancing to just one Varnish head in a standard deployment, but it could potentially be balancing between other horizontally scaled Varnish instances if we really wanted it to. Linode provides their own load balancing node that you can use to do the same thing HAProxy does. Why did we choose HAProxy and not just use Linode’s solution?
TESS: Well, the biggest thing is that the Linode balancer is a value-added product, and it’s a good product, but it’s not really one that we need. What we really needed was something that gave us a little bit more control. We needed something that could load balance between multiple Varnish heads, something that can also do SSL termination, and ideally, something that we could run ourselves. The other major concern was cost: because it’s a value-added service, it comes with an additional cost over their regular hosting. Now, I’m not saying that there isn’t a place for this. There’s totally a place for Linode balancers, but it doesn’t really work for our particular use case.
IVAN: And they have other services as well that I think we’ll be using, and maybe we’ve already implemented them. But, they have a backup and snapshot service.
TESS: Right now, we’re not planning on deploying that for individual webnodes, because we have our own Tractorbeam backup solution that backs those up to an S3-compatible service like Google Cloud Storage or Amazon S3. But for other infrastructure systems, say if we deploy another GitLab instance somewhere, we’ll probably want to leverage that backup service, because it’s all-in-one, and it’s a server-level backup rather than an application-level backup, which gives us a faster recovery time in the event that something goes wrong.
IVAN: So in the event that something does go wrong with our hosting services, we don’t have a backup of the servers themselves. We have backups of the files and the database snapshots, and if we want to rebuild the hosting infrastructure, am I right in saying we very simply rerun the Ansible scripts against a new set of Linode machines? Or maybe in the future, when the API is out of beta, we simply rerun it with a new command and it does all of that for us.
TESS: That’s the idea.
IVAN: Wow. That sounds really amazing.
TESS: One thing that I picked up from cloud-hosting providers, as well as years of working with Docker, is that you’re better off treating your servers as disposable. It probably also helps that I grew up using a glitchy Mac Plus running System 7.5.3, which it was not meant to run. So I got used to things crashing a lot.
IVAN: It’s nice to be able to have everything in code now, isn’t it?
TESS: Oh, it totally is. I think it was last week someone said, “Oh, could I get a Solr core for this particular site?” And I said, “Sure.” 5 minutes later, I had written 2 lines of code, hit deploy, and it was there.
IVAN: Amazing. I love it. What’s your experience been with performance at Linode? I mean, you kind of hear how everybody’s performance is approximately the same, right? And I don’t know if that’s true in reality. So I’d love to hear what your qualitative experience has been while doing the work you’ve been doing.
TESS: It’s been more than satisfactory. Their ability to handle all of our particular needs for sites has been a lot more impressive than I had expected. They do the job. I haven’t put it through any real stress testing yet, but I have a good sense that it will be able to take any load that we need.
IVAN: And right now, the one live site we actually are running at Linode is the TEN7.com site, and I don’t know if it’s my own bias, or if it actually does feel a little faster and snappier when I use the site and when I use the backend CMS, but it kind of does feel faster. I don’t know if that’s true. Do you think so too or do you think you’re biased as well?
TESS: I wouldn’t trust myself on basically just a feeling, because I would require actual hard numbers, but it wouldn't surprise me, because one of the major things that we did do is fundamentally change our hosting strategy. TEN7.com used to be on our cluster, which was effectively a shared-hosting environment, which means we were subject to the resource demands of any other sites that were on that cluster. Now, it’s on its own hardware, and we definitely have a lot more resources that we can throw at it.
IVAN: Tess, how do you think this changes the delivery of the Drupal support services we provide by using Linode?
TESS: We actually have a greater amount of control that we can provide for each individual client. Before, we were limited by what we might affect for every other customer. If we decided to make a cluster-wide change, it affected absolutely everybody. So we had to be very careful, very measured, and very concerned. This unfortunately also leads to the least-common-denominator problem. One site, for example, might say, “I need Solr 4 in order to work.” And suddenly, you have 3 other sites who say, “Yeah, but I’d rather be running on Solr 6.” Now you have a problem. With this kind of environment, you don’t have to worry about that. Everyone’s individualized, so you can actually specify, even at a hardware level, what kind of resources they need. If a particular client says, “You know, I don’t think that our site is running fast enough,” we can very easily add more resources to their site to improve that performance, and we can do it on demand.
IVAN: And potentially, the cost is lower as well.
TESS: Yes, it is.
IVAN: Which is always a good thing. Well, I feel like I’ve learned a lot here with you today. I definitely know how to say Linode now, and I’m going to try to remember.
TESS: But can you say IaaS, PaaS, one after another, 3 times, fast? I can’t. I worked in that industry for 5 years.
IVAN: I think that brings us to the end of this Audiocast. Tess, thank you so much for sharing your insights today with us. Live long and prosper.
TESS: Peace and long life.
IVAN: Please visit us at TEN7.com and keep an eye out on the TEN7 blog for future Audiocasts. This is Ivan Stegic. Thank you for listening.