SE-Radio Episode 264: James Phillips on Service Discovery

Filed in Episodes on August 3, 2016

Charles Anderson talks with James Phillips about service discovery and Consul, an open-source service discovery tool. The discussion begins by defining what service discovery is, what data is stored in a service discovery tool, and some scenarios in which it’s used. Then they dive into some details about the components of a service discovery tool and how reliability is achieved as a distributed system. Finally, James discusses Consul, the functions it provides, and how to integrate it with existing applications, even if they use configuration files instead of a service discovery tool.

Venue: Skype


Transcript brought to you by innoQ

This is Software Engineering Radio, the podcast for professional developers, on the web at SE-Radio.net. SE-Radio brings you relevant and detailed discussions of software engineering topics at least once a month. SE-Radio is brought to you by IEEE Software Magazine, online at computer.org/software.

*   *   *

Charles Anderson:            [00:00:36.15] Hello, this is Charles Anderson for Software Engineering Radio. Today we’re talking to James Phillips from HashiCorp. At HashiCorp, James works on Consul and Serf. He also developed fault-tolerant avionics for SpaceX and called out “Flight Software is Go!” during the first SpaceX missions to the ISS. He has also worked on web applications and embedded systems. Today we will be talking to him about service discovery. Welcome to Software Engineering Radio, James!

James Phillips:                     [00:01:05.10] Thanks for having me on the podcast. It’s good to be here. I do have an odd mix of embedded things and web applications in my background, but it was good training for what I do at HashiCorp, with real-time systems mixed with distributed systems.

Charles Anderson:            [00:01:21.20] Great, so let’s dive in then. Can you tell us at a very high level what is service discovery?

James Phillips:                     [00:01:27.03] At the highest level, it’s basically a system that lets you ask, “Where is service Foo?”, and that answer is usually something in the form of an IP address and a port number. A service can be a logical entity within your architecture; you might have a database or you might have a pool of application servers. Usually there’s a set of processes that back a given service, and for most highly available/scalable things there’s more than one process that makes up a service. Service discovery systems fall on a spectrum: you can have a fully statically configured system, or a very dynamic one, so the techniques you use to do service discovery vary a lot depending mostly on how fresh the information is and how changes get put into your service discovery catalog.
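At its simplest, the answer to “Where is service Foo?” is a lookup from a logical name to a set of endpoints. A minimal sketch of that idea (the service names and addresses here are invented, and a real system would be distributed and dynamic rather than an in-memory dictionary):

```python
# A toy service catalog: a logical service name maps to the set of
# (IP, port) endpoints of the processes that back it.
service_catalog = {
    "web": [("10.0.0.11", 8080), ("10.0.0.12", 8080)],
    "database": [("10.0.0.21", 5432)],
}

def discover(service_name):
    """Answer 'where is service X?' with a list of (ip, port) endpoints."""
    return service_catalog.get(service_name, [])
```

A client would then connect to one of the returned endpoints instead of hardcoding an address.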

Charles Anderson:            [00:02:20.16] When you say a very static configuration, that makes me think of just using configuration files, where I might update them occasionally and I can use a configuration management tool like Puppet or Chef. What’s wrong with that approach?

James Phillips:                     [00:02:37.04] That approach can work okay when you have a small handful of things to manage, but as soon as you get to a medium-large-ish or a very large infrastructure, it becomes very hard to keep that catalog up to date by hand. The time involved to do a convergence run can run into minutes, and that’s often not fast enough to propagate a change. Also, having a human in the loop to check in changes to that catalog, manage a list of IPs and then [unintelligible 00:03:05.06] to push that out becomes a burden, in a system that you want to run as automatically as possible.

It becomes a maintenance burden and a response time problem for an infrastructure. Once it’s beyond a trivial scale, that gets really hard to manage.

Charles Anderson:            [00:03:24.04] I can imagine even at a small scale. You mentioned response time; if you need a fast response, it’s still going to take a while.

James Phillips:                     [00:03:31.00] Yes. If you do a master failover and you want to point all your clients to somewhere else, you want that change to go out quickly; you don’t want that to take tens of minutes to propagate through your system.

The configuration management tools definitely have a role, even if you’re using a service discovery system for bootstrapping your nodes or getting the service discovery system kicked off. But in terms of managing the ongoing state of your infrastructure, that becomes very difficult to do with a static setup.

Charles Anderson:            [00:04:04.13] We’ve been talking about scale, and that hints to larger networks. What network scope is appropriate for service discovery? LAN, data center, cross data centers?

James Phillips:                     [00:04:16.21] My short answer to that is yes. It can have a place at all those different levels. Within an application, within a process, you wouldn’t necessarily have it; you’re not going to find components or plugins with a service discovery mechanism. But once you have a process trying to find out where another process is, it’s appropriate there.

When you get into highly available or globally distributed infrastructures, it definitely makes sense for some service in data center A to reach out and find an instance of another service in data center B. Usually, there’s a preferential scope. You want to use things that are close and nearby. Sometimes it’s part of normal operations to find things remotely or at a much larger tier across the internet, and it definitely makes sense when you’re talking about geo-redundancy, failover and things like that.

[00:05:15.14] Ideally, your service discovery system would let you cross those boundaries without having to have a totally separate instance that’s totally disconnected and your application has to know about how to go reach out to all these different things.

Charles Anderson:            [00:05:28.27] Would service discovery be comparable to zero-configuration networking (Apple Bonjour)? Is that a form of service discovery?

James Phillips:                     [00:05:40.01] It definitely is, in terms of some of the most basic questions — what’s the IP and port of an instance of this type of service? There are other facets of service discovery, like managing other types of configuration or orchestration, where something like that probably isn’t ideal. It’s also a system that often requires dependence on things like multicast. That’s a low-level piece that might be part of a bigger service discovery solution, if you look at the full spectrum of what you need in a complete solution, but it would be a very limited form of service discovery.

Charles Anderson:            [00:06:19.18] You mentioned processes finding one another. Can you tell us some scenarios where service discovery would be appropriate? When you talk about processes finding each other, I think of something like a service-oriented architecture, or possibly microservices.

James Phillips:                     [00:06:37.15] Definitely, yes. Even for a monolithic application, your basic configuration, like “Where is the database? How do I connect to it?”, it even makes sense in that type of architecture. But as you move to service-oriented or microservices, you’re going to have a much more distributed set of pieces that need to find each other; you’ll have a much more dynamic environment in terms of different pieces of your architecture coming and going over time.

In general, you’ll have more than one instance to choose from, so you’ll want to choose a healthy one to talk to. Things get a lot more dynamic as you have an architecture that’s distributed across functional pieces. Service discovery really shines there. In the extreme, if you’re running under a resource scheduler like Kubernetes, Mesos or Nomad, your pieces are placed onto machines in your cluster by an automated infrastructure. At that point, there’s no chance to have humans editing config files and pushing out changes; you need something that can manage in real time where all your resources are, which ones are healthy and where to find them.
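The “choose a healthy one” behavior can be sketched as filtering on health status before picking an instance. This is a toy model (in a real system the health flags come from active health checks, discussed later in the episode):

```python
import random

# Each instance carries a health flag maintained by a health checker.
instances = [
    {"addr": ("10.0.0.11", 8080), "healthy": True},
    {"addr": ("10.0.0.12", 8080), "healthy": False},  # failing its check
    {"addr": ("10.0.0.13", 8080), "healthy": True},
]

def pick_healthy(instances):
    """Return one healthy endpoint at random, spreading load across them."""
    healthy = [i["addr"] for i in instances if i["healthy"]]
    if not healthy:
        raise LookupError("no healthy instance available")
    return random.choice(healthy)
```

Random choice among healthy instances is one simple way to spread load without a dedicated load balancer tier.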

Charles Anderson:            [00:07:47.15] Are there applications or scenarios where I wouldn’t want to use service discovery?

James Phillips:                     [00:07:55.05] I don’t think so. The basic hygiene of separating the configuration out of your application is a good thing to start with early on. You don’t want to be hardcoding IP addresses in your source code. You’re going to want some separation there from the beginning. It’s also easy to progressively use different features of service discovery mechanisms as your application gets more sophisticated.

With a service discovery system that supports a DNS interface, you can use service discovery by just having your application look up a host name. You can have a zero-touch integration even very early on, and over time you might need to use orchestration features to elect a leader among many potential leaders in your pool of services. It’s possible to start simple, with an integration that’s very lightweight, or essentially even zero impact to your application code, and then expand from there. But to start that way, versus hardcoding or maintaining these things, is a good way to go.

[00:09:10.22] Also, in a world where you might be building AMIs (Amazon Machine Images), separating out the service discovery piece into a separate layer is handy for not having to bake a new AMI when some configuration changes. Having that layer in there can even have practical implications for your deployment pipeline and how you manage your images for deployment.

Charles Anderson:            [00:09:34.22] You said something that made me think about immutable infrastructure, and if you’re talking about baked-in AMIs and extracting out this variable configuration information, that would make sense, right?

James Phillips:                     [00:09:51.17] Yes, and having that layer there and being able to dynamically push changes out after something is deployed, without having to invasively get into that image and change its configuration is a super big plus. That same AMI can connect to any database, or connect to any instance of your API server, because it’s getting all that configuration on the fly, from the service discovery system.

Charles Anderson:            [00:10:17.05] So move at least the configuration-related variability out of your infrastructure. That makes sense. Let’s move into some more technical details. What data are typically stored in service discovery repositories? You mentioned host names or IPs and port numbers; is that the extent of it, or is there more?

James Phillips:                     [00:10:43.15] That’s the most fundamental data, but when you get into managing services and their configuration, you generally need a little bit more information. You’ll typically have things like database usernames, potentially credentials, tokens and other configuration-type information that goes along with the basic IP and port information that’s already stored in the service discovery system. It’s also a good place to put things like feature flags. There’s a general bucket of key/value information that nearly every service discovery system supports, which is useful for capturing the whole set of configuration data that you might need for interacting with other services.
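The key/value bucket James describes can be pictured as a flat, path-organized namespace holding configuration and feature flags alongside the service data. A toy in-memory model (the key names are invented):

```python
# A flat K/V namespace; paths group related configuration together.
kv_store = {
    "config/database/username": "app_user",
    "config/database/port": "5432",
    "feature-flags/new-checkout": "true",
}

def get_config(kv, key, default=None):
    """Fetch a config value by its full path."""
    return kv.get(key, default)

def feature_enabled(kv, flag):
    """Treat a missing flag as disabled."""
    return kv.get("feature-flags/" + flag, "false") == "true"
```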

Charles Anderson:            [00:11:30.12] I was going to ask about something like database credentials; so those could be in service discovery. And I hadn’t even thought of feature flags. You mentioned a key/value storage thing – what are the typical components of a service discovery tool?

James Phillips:                     [00:11:51.23] If you want a complete solution for service discovery, it really comes down to four different pieces. There is the core service discovery piece (Where is this thing? What’s its IP and port? How do I connect to it?). Any complete solution generally needs a health monitoring piece. It’s not really interesting to get an instance that’s no longer functioning when you’re looking for something to connect to, so having a health monitoring component creates a really fresh and live set of data that’s going to be very easy to manage within your infrastructure.

You need some configuration storage. That’s generally in the form of a K/V store (key/value store), and that can also be used for coordination, so we can talk about some applications around that. And then there’s an orchestration piece. It’s often useful to have tools that help, say, elect a leader among many possible instances, or make sure that a certain operation is done in a way that doesn’t have any race conditions.

[00:12:52.14] Some systems even let you do things like send events out, or run commands – do more dynamic things if you want to have an event go out across your fleet and cause some action to happen.
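Leader election among many possible instances is often built on an atomic “acquire this key” primitive. A toy sketch of that semantics (Consul’s real mechanism uses sessions and its K/V API; this only models the atomic acquire/release idea, and the key and node names are made up):

```python
# A well-known key acts as the leadership lock; whoever acquires it
# first is the leader until it releases (or its session expires).
locks = {}

def try_acquire(key, candidate):
    """Atomically acquire `key`; only the first caller succeeds."""
    if key in locks:
        return False
    locks[key] = candidate
    return True

def release(key, candidate):
    """Release the lock, but only if this candidate actually holds it."""
    if locks.get(key) == candidate:
        del locks[key]
```

The same primitive also serializes operations that must not race: do the work only while holding the key.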

Charles Anderson:            [00:13:07.16] Would load balancing be a function of a service discovery, or is that going to be something separate?

James Phillips:                     [00:13:15.14] It can be. It’s an architectural choice with different service discovery systems. Some make load balancing a first-class part of the architecture. You’ll route all your traffic at some load balancer, and that will manage talking to different healthy instances. It’s often the case, though, that you can avoid load balancing with the service discovery system and avoid that whole tier and potential point of failure. If you can talk directly to a healthy instance by virtue of how your service discovery system works, it’s actually nice to avoid load balancing if you can. There are different trade-offs and different choices.

[00:14:01.17] One common thing – within your data center you might not use load balancers at all, and you’ll use your service discovery system to manage handing out, “Here’s a healthy instance of this service. Talk to this guy, and if it fails you can get a new one.” But then you’ll use your service discovery system to configure an external load balancer that your customers are using, say the one your website IP addresses are pointed to, so you can have combinations of things. Some systems run load balancers internally, but route traffic over overlay networks, and things like that. There’s quite a range of different solutions out there.

Charles Anderson:            [00:14:35.22] How do servers that want to offer a service know where they’re going to register to? And on the flipside of that, the clients that are looking for a service – how do they know? Do we need another layer of service discovery?

James Phillips:                     [00:14:53.24] It would be turtles all the way down for service discovery. You generally need some kind of seed, or some way to introduce a new entity into the service discovery system. Some systems do that with a well-known DNS name, or SRV records from some root service to talk to. HashiCorp’s Consul has an Atlas join feature; we have a free hosted joining service that will help you find other nodes to talk to initially.

There are also different architectures in terms of how you access the service discovery system. Consul, for example, runs an agent on every node, so your applications only ever talk to their local agent, and Consul manages how to talk to servers and how to route your requests around.

[00:15:40.01] With other systems you might have to locate a central server and make requests against that. So there are different strategies, but there usually is some sort of bootstrapping process or service that you need to kick things off.

Charles Anderson:            [00:15:52.26] You said seed… Yes, at some point in time, following all the turtles down, we get an egg or a seed there.

James Phillips:                     [00:16:05.22] Yes, there’s maybe a hardcoded IP address or something similar somewhere, but things like Atlas join make that pretty easy. You can have a well-known thing to reach out to, and it will manage finding you a server to join with.

Charles Anderson:            [00:16:19.14] I want to touch on Atlas towards the end here. In the meantime, given that service discovery the way we’ve been talking about it is going to be used possibly by pretty much the whole app, even trying to find the database and things like that, it’s going to need to be fairly reliable. What are some features or characteristics of service discovery tools that provide the reliability?

James Phillips:                     [00:16:47.25] That’s definitely a prime concern for a system like this, which could potentially be like a nuclear single point of failure if it wasn’t done properly. A key component of any service discovery system is to be distributed and replicated. You need to be able to build out redundancy to whatever level of failure tolerance you want. That may mean having redundancy within a local area network scope, so you might have multiple servers. Then you may also want redundancy by having different federations of servers that can talk to each other and might be in different geographic locations.

[00:17:32.24] Having a distributed system with your application that can handle a server going completely offline, or two servers, or whatever level you want to provision to, is very important. Then there are the concerns of dealing with what happens when the system is down altogether. In the event of a partition, there are different strategies. If you have a system that has a consensus algorithm and requires a certain number of servers for a quorum, and you get below that number of servers, maybe you can’t make writes to your system anymore, you can’t make any changes, but you still want reads to work well. There are tools exposed to the clients to say, “Hey, I’m willing to take some stale information. What’s the best information you had as of this long ago?” So you want it to operate even in degraded modes and fail gracefully.

[00:18:29.25] At a very practical level, there are a lot of techniques that good service discovery solutions have in terms of managing how they deal with retries, how they avoid thundering herds when things come back online, how they randomize traffic to spread load at different components, how they scale certain events based on the size of the cluster, and things like that. There are quite a few layers that go into making a robust service discovery system that’s going to keep working no matter what.
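The “I’m willing to take some stale information” idea can be modeled as a read that states the maximum age it will accept. This is a toy single-replica model (real staleness tracking spans a leader and its followers):

```python
import time

class Replica:
    """A replica that serves reads even when cut off from the leader,
    as long as the client accepts data of a stated maximum age."""

    def __init__(self):
        self.data = {}
        self.last_synced = time.time()  # when we last heard from the leader

    def read(self, key, max_stale_seconds=0):
        age = time.time() - self.last_synced
        if age > max_stale_seconds:
            raise TimeoutError("data is %.0fs old; too stale" % age)
        return self.data.get(key)
```

With quorum intact, `last_synced` stays fresh and even strict reads succeed; during a partition, only clients that opt into staleness get answers.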

Charles Anderson:            [00:19:03.25] When you say distributed system, you mean in the classic sense of three or five or seven servers, a consensus algorithm possibly…? That would get us into the CAP theorem, for example, in terms of making some decisions or trade-offs there?

James Phillips:                     [00:19:27.22] Yes. The different tools have different strategies and different trade-offs. There are AP-type systems. We can give some specific examples, but there are systems that take more of an AP bent, that will have better availability but potentially much harder-to-reason-about failure modes, where you might end up with two different sides of your cluster working in two different ways. Then there are systems that go more the CP route. Those are often easier to reason about for operators; they have a consensus algorithm that lets you know the state of your cluster, and they have a known behavior in the event that things go wrong that’s fairly easy to reason about.

[00:20:19.10] Depending on your application there are different considerations, but I would say most systems tend towards a CP-type architecture.

Charles Anderson:            [00:20:28.01] So preferring consistency and partition tolerance over availability?

James Phillips:                     [00:20:34.03] Yes, but with the caveat that you have read-level availability even in the loss of quorum, so you need to think about that case and not have it just lock up. The minority side that can’t make progress in terms of making writes can still read the current state as it was at the time the partition happened, for example.

Charles Anderson:            [00:20:56.07] That makes sense. We’ve mentioned databases a couple of times. Suppose rather than running my own database server, I am hosting in Amazon AWS and I’m using their RDS service to provide my MySQL or Postgres server. Would I still be using service discovery to discover the database, even though in theory Amazon is going to keep it pretty stable for me?

James Phillips:                     [00:21:25.03] I think so. Tools like Consul have support for what we call external services. Those are static registrations, but they are served in the same manner in terms of Consul’s APIs as any other service.

Charles Anderson:            [00:21:42.17] So it’s about consistency. You’re not special-casing Amazon.

James Phillips:                     [00:21:45.05] That’s right. You can manage it in the same place, you can retrieve it in the same way, and if you were going to manage some portion of your database [unintelligible 00:21:53.12], it will nicely ramp into being supported by Consul without changing your clients. It’s that basic hygiene of having all your configuration in one place. You can definitely do that with external services.
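As a sketch, an external-service registration is just a static catalog entry for an endpoint that lives outside the cluster. The field names below follow Consul’s catalog registration API, but the hostname is made up and nothing is actually sent anywhere:

```python
import json

# A static "external service" registration: RDS is represented as a
# synthetic node, and clients discover it like any other service.
registration = {
    "Node": "rds",  # synthetic node name for the external endpoint
    "Address": "mydb.abc123.us-east-1.rds.amazonaws.com",  # hypothetical
    "Service": {"Service": "mysql", "Port": 3306},
}
payload = json.dumps(registration)
# A real registration would PUT this payload to /v1/catalog/register.
```

Clients then query for `mysql` exactly as they would for an internal service, which is the one-place-for-configuration hygiene James describes.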

Charles Anderson:            [00:22:08.03] Right, and it also gives you the flexibility then, if everyone is using the same pattern, to make different decisions in the future.

James Phillips:                     [00:22:15.09] That’s right. And the overhead is pretty low. It’s one level of indirection, and software engineers are always okay with just one more level of indirection.

Charles Anderson:            [00:22:25.15] One more level and we can solve any problem.

James Phillips:                     [00:22:28.18] That’s right.

Charles Anderson:            [00:22:29.19] So how important is security in a service discovery tool? Do we need to use cryptography, either to encrypt the communications or to authenticate clients and servers?

James Phillips:                     [00:22:43.17] Security is pretty huge. This system will direct your internal services to connect to things and make [unintelligible 00:22:50.00] to them; it could be key to finding where the database is, and you can potentially have credentials in there. Any actual production deployment of a service discovery system needs to think about security. Generally, people use TLS to secure all their traffic. It’s usually important to verify the identity of your servers, so you can also use certificate verification. You don’t want random nodes on your network joining as servers and potentially interfering with the quorum, or a rogue agent that can push bad data in there.

[00:23:30.26] On top of the basic transport-level stuff, most tools also have some layer of access control available within the service discovery system itself. This way you can scope who is able to create new entries, who can delete things, who can update key values; you can segment your key paths to different parts of your organization and kind of self-manage different parts of your service discovery system.

Charles Anderson:            [00:24:00.03] I would also think it probably could depend on the scope of the network that you’re using. If we’re just talking about a local area network or a rack on premises, that might be different than if you were talking about a number of nodes trying to find each other on Amazon.

James Phillips:                     [00:24:21.26] It could be, yes. Different practices might make sense in different ways at different levels of organization. But it’s generally good to start with TLS and some of the basics in there, and only start creating ACL policies once your organization gets complex, or you get a couple of different teams using the same service discovery system. Some of that can evolve, but it’s good to have a basic level of transport security underneath as a baseline setup.

Charles Anderson:            [00:24:58.24] You’ve mentioned Consul as a service discovery tool – obviously, since you’re working on it. We’ll discuss that in a moment, but what are other tools that are out there in this space that you know of? I don’t expect you to be an expert on any or all of them, and we certainly don’t have time to dive into them, but what are other tools that listeners might be interested in?

James Phillips:                     [00:25:23.08] Yes, I’m not an expert in every service discovery system out there. I do work on Consul full-time, so that’s the one I know the most. There’s a class of tools that I would lump into the key/value stores: Zookeeper, etcd, Doozer – and Consul actually has a key/value store as well. These generally provide a highly-available key/value store that can be accessed by clients. They often include mechanisms like coordination and locking-type features, so you can coordinate around a key and make sure that value updates are atomic. They generally include some kind of health checking notion too, where a client will maintain a connection, and if the connection closes, something can happen in the key/value store; so there’s some notion of a client being alive, and a way to have the servers help manage that. Then these generally couple in some type of consensus algorithm to provide a distributed, replicated key/value store.

[00:26:36.19] The main thing that differentiates these different solutions and where Consul is a little bit unique is I think of these often as lower-level toolkits. You can build service discovery systems on top of these, but they don’t necessarily have maybe as deep a mapping of all the service discovery concepts in a form that’s easy to use. They are kind of like a building block piece, but not necessarily a full solution. They might have basic health checking in the form of monitoring a TCP connection or having a check-in kind of thing with the central server, but you wouldn’t be able to do more general health-checking things like running a Nagios script or something like that.

[00:27:20.05] They often don’t handle the more global scales we talked about – spanning different geographic regions and things like that. That’s generally left to be something that a client manages. A full service discovery solution would encompass more than most of the basic key/value stores have, but as a building block, they’re capable of doing pretty powerful service discovery type things.

Charles Anderson:            [00:27:46.23] In episode 229 of the podcast we talked about Zookeeper. It was more at the level of key/value stores and coordination algorithms rather than high-level things like service discovery.

James Phillips:                     [00:28:08.08] Yes, and there are other solutions. There are things like SkyDNS, which is a DNS-centric system that’s backed by a consistent distributed system under the hood to manage the failure tolerance. SkyDNS and Consul both have a DNS interface, which makes it really easy to integrate with your existing applications with essentially no modifications. You don’t have to do a deep client integration; you just look up a host and get ports from SRV records, and things like that.

There are different features in these different products, but SkyDNS is one, and Consul has a DNS interface. Then there’s others that are more unique, hybrid solutions. SmartStack composes several different pieces, and it uses Zookeeper under the hood. It actually uses Zookeeper for its consistent database, and it uses HAProxy as its load balancing layer, and manages all those pieces together for you. It’s kind of a unique architecture. It’s a little more complicated, but it has a very specific opinion about, “You’re going to use a load balancer; I’m going to manage the load balancer for you”, and then it composes several pieces around that.

[00:29:26.02] Then there are things like Eureka from Netflix, which has more of an AP style to it. It has some of the basic pieces, the “Find me an IP and port.” It has less of the coordination and general K/V store type features, so it’s a different mix. All these have different health checking strategies and a different mix of basic interface-type things; some have thicker clients, some have really lightweight clients, some have zero clients, like DNS interfaces. Among all these there are many different types of architectures.

Most solutions cover something on the order of one-fifth to three-fifths of what I would consider the ideal scope for a full service discovery solution. Many of these tools will do one part really well, but they’ll require you to go elsewhere for better health checking; or you might have really good knowledge of where services are, with IPs and ports, but no K/V store, so you’d have to use something else there. Most of these will require you to compose several things together to get a complete solution.

Charles Anderson:            [00:30:45.25] Speaking of complete solutions, perhaps we can then dive into your wheelhouse, so to speak, and talk about Consul. Consul is an open source service discovery tool from HashiCorp, where you work. At a high level, what features does it provide? You were discussing trade-offs or how different applications have different sets of things that they’re bringing. What does Consul bring?

James Phillips:                     [00:31:18.23] Consul is unique, and it grew out of experience. We had an earlier project which we still use, and it’s one of the basic layers that Consul’s built on, called Serf. Serf is an AP system that is used to maintain a basic list of members in a cluster with some additional metadata on them, and to do some health checking. We’ll talk a little bit about how that fits in, but in the experience with building Serf and thinking about what do operators have to reason about, what should a service discovery system integrate together to do well – Consul grew out of that.

[00:31:56.28] It comes down to four different pieces. The basic service discovery piece that we’ve talked about – Consul focuses on that, but ties it to things like health monitoring, which is our second key component, at a deep level. It’s nice to have a good catalog, but if the catalog is out of date, then it’s not that useful. By coupling the service discovery engine with a really rich health monitoring engine, you have a much more powerful service discovery system, and that’s a set of tools that you can use out of the box.

So discovery, health monitoring, configuration – I mentioned before, many of the systems include a K/V store, and that’s a key part of Consul as well. We can talk about this in detail, too. Even our configuration store is tied in with our health monitoring store, which means it’s tied in with service discovery.

[00:32:41.12] The final piece is orchestration – the ability to coordinate with your service discovery system, the ability to send events and manage your cluster in real time – that’s another key component of Consul. Tying all those four pieces together is what makes Consul stand out from the rest. It’s got all those things thoughtfully integrated in a way that’s really easy to use and that will do most of what people want right out of the box.

Charles Anderson:            [00:33:12.21] What’s the basic architecture? You mentioned agents and communicating with servers.

James Phillips:                     [00:33:20.04] Sure. The basic architecture of Consul is that you run the Consul agent on every node in your cluster. It’s basically a single Go binary and it’s easy to deploy, and that runs everywhere. Then you’ll have some subset of your nodes that are servers. You’ll usually have three or five, depending on your failure tolerance. Three can handle one server failure, five can handle two. They run the same agent, but it’s in a different mode (server mode).

So each node is running an agent. Service registration and health checks, and all those basic functions about what’s going on on a given node, actually happen with the agent on the node, and then that agent syncs the information back to the servers. The servers run the Raft consensus algorithm to apply changes in a consistent way to a state store.

[00:34:19.21] It’s a little unusual, and most people don’t appreciate it first-glance, but the agents are actually the source of truth for all your service configuration information. The servers manage central [unintelligible 00:34:30.27] which is the catalog, and they also manage information that belongs to the whole cluster, like the K/V store, your ACL’s and things like that. If you were to say swap out your servers, eventually once the servers come back online, the agents will sync their service configuration back up with the catalog and you’ll be off and running. It’s an interesting division of labor.

Charles Anderson:            [00:34:56.00] All of the agents have the full key/value store?

James Phillips:                     [00:35:01.29] They don’t. The agents have the configuration for their node’s services, and when you make a change to the key/value store or request a key, those are actually requests that are forwarded up to the servers.

Charles Anderson:            [00:35:17.05] Okay, so they are managing truth for their local node.

James Phillips:                     [00:35:20.15] Yes, and they’re syncing that truth back up with the servers, and then the servers maintain basically a central copy of that truth, as well as the key/value store. So that’s managed, essentially, by the servers.

Charles Anderson:            [00:35:33.06] What protocols does Consul support for either the service that’s starting up that wants to say, “I’m available”, or the client that says, “I need to find this service.” I believe you mentioned DNS as one option.

James Phillips:                     [00:35:51.07] That is one option. Once you have the agent running, you can disable or enable these as appropriate. The two main interfaces for talking to it are the DNS interface and an HTTP API. Generally, you’ll run the agent on your local node and have your applications always just talk to localhost, port 8500. Your applications will always just talk to Consul using their local node’s agent interface. Behind the scenes, that agent is keeping track of where the Consul servers are. If a server comes or goes, it will manage that list of possible servers; if a client uses the HTTP API to, say, read a K/V entry, the local agent will forward the request up to a server, route the information back and serve it to the client. The clients always just talk to localhost, essentially, and they can do it over HTTP or DNS.
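A small sketch of what those two local-agent interfaces look like from the application’s side. The naming conventions (`<service>.service[.<dc>].consul` for DNS, and `/v1/kv/<key>` on localhost:8500 for the HTTP API) follow Consul’s documented defaults; the helper functions themselves are illustrative:

```python
# The application never needs to know where the servers are - it only
# constructs names/URLs against the agent on its own node.

AGENT_HTTP = "http://127.0.0.1:8500"   # Consul agent's default HTTP port

def service_dns_name(service, datacenter=None):
    """DNS name an application would resolve against the local agent."""
    if datacenter:
        return f"{service}.service.{datacenter}.consul"
    return f"{service}.service.consul"

def kv_url(key):
    """HTTP API URL for reading or writing a key/value entry."""
    return f"{AGENT_HTTP}/v1/kv/{key}"

print(service_dns_name("redis"))          # redis.service.consul
print(service_dns_name("redis", "dc1"))   # redis.service.dc1.consul
print(kv_url("config/db/host"))           # http://127.0.0.1:8500/v1/kv/config/db/host
```

Resolving that DNS name (or issuing a GET against that URL) would, as James describes, be answered by the local agent, which forwards to the servers as needed.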

Charles Anderson:            [00:36:57.15] Okay, but if a server is saying, “I’m here and available”, that wouldn’t be over DNS, right?

James Phillips:                     [00:37:05.17] That would not. Underlying the Consul system, we’ve mentioned Serf, sort of under the hood there. As a background process, each Consul agent does a periodic ping of another random agent, as well as an exchange of gossip information. There’s an underlying technology that’s based on an academic paper, called SWIM. So there’s a gossip protocol that’s a low-level part of Consul. For example, when you join a new node, you need only the address of one other node; you’ll learn from that one about the other nodes in the cluster, and then start participating in gossip and the random probing, to check for the liveness of the other nodes.

[00:37:53.21] That’s a mechanism that your agent on a given node will use to discover that another server came online, for example. That’s like a low-level part of Consul’s architecture. It’s extremely powerful, because the gossip layer provides a built-in distributed failure detector, so Consul’s constantly checking the health of all the nodes in the cluster and maintaining membership information about nodes that are coming and going.

The nice thing about SWIM is that it’s done in a way that has a low-volume, constant level of traffic that scales nicely across the cluster. It’s not some [unintelligible 00:38:36.05] thing; it’s a constant level, where each node will pick one other random node to probe, and then there’s a gossip propagation mechanism to get the information about nodes coming and going around the cluster. So there’s an AP underpinning to Consul that provides this information of which servers are available or that a node is no longer functioning. That’s a low level of traffic that Consul maintains automatically.
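The random-probing failure detection James describes can be illustrated with a toy simulation. Each round, every live node probes one random peer; probes of a crashed node fail, and after a few missed probes the node is declared dead. Real SWIM adds indirect probes, a suspicion state, and gossip dissemination, so this is only a sketch of the constant, randomized probe load:

```python
import random

random.seed(42)
missed = {f"node{i}": 0 for i in range(5)}   # node name -> missed probe count
dead = {"node4"}                             # node4 has crashed
SUSPECT_LIMIT = 3                            # misses before declaring death

declared_dead = set()
for _ in range(50):                          # probe rounds
    for prober in missed:
        if prober in dead or prober in declared_dead:
            continue                         # dead nodes don't probe
        target = random.choice([n for n in missed if n != prober])
        if target in dead:
            missed[target] += 1              # probe went unanswered
            if missed[target] >= SUSPECT_LIMIT:
                declared_dead.add(target)
        else:
            missed[target] = 0               # healthy reply resets the count

print("declared dead:", declared_dead)
```

Note the per-round cost: each node sends exactly one probe, regardless of cluster size, which is the “constant level of traffic that scales nicely” property.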

Charles Anderson:            [00:39:09.09] So it’s maintaining AP, but then Consul’s service itself – is that AP, or is that more CP?

James Phillips:                     [00:39:22.29] The events of, say, a node coming or a node being declared dead – those propagate through the gossip layer in an AP fashion. Then as those events are recognized by the servers, they get fed into the catalog in a consistent fashion. When you do service discovery and you say, “Give me a healthy instance of this service”, that’s actually a request that’s going up to the servers and being done based on the consistent view of the cluster, but there’s a low-level AP view of the cluster that’s being constantly maintained and updated by all the nodes, in particular the distributed failure detector, so that nodes are checking each other and then reporting that information back up. Then the servers feed that into the catalog using the consensus algorithm. Your view of the cluster is always through a lens of this consistent catalog, but the underlying mechanism that helps realize that view and provide the events of “This node just joined, this one went away” is happening by this gossip protocol under the hood.

Charles Anderson:            [00:40:30.12] That clears up the confusion – there are at least two distinct layers, and they align themselves differently on the CAP triangle, so to speak.

James Phillips:                     [00:40:43.13] This is a really good place for a diagram. You can almost think of the Serf layer as a layer that informs the consistent layer, but the consistent layer is what you use for actually doing the service discovery. It’s like a transport mechanism, but I don’t want to create more confusion there. It’s a way to move events around or announce that “Hey, this server just joined” or “This server is no longer healthy. I don’t think we should be using the services that are on it anymore.”

Charles Anderson:            [00:41:18.11] If I’ve got an application and it wants to find services – and we’ve talked about doing that through DNS or HTTP – but I’ve got experience with Ruby on Rails, where we’ve got a whole bunch of YAML files. Do I need to rewrite how my application configures itself, from YAML files to DNS or HTTP, or is there another option?

James Phillips:                     [00:41:53.11] There is, and it’s very common to do this. For any application that is driven by configuration files, we have some tools, and there are community tools as well – two in particular, called Consul Template and Envconsul (both HashiCorp tools). What those do is run a process on a given node that makes a blocking long-poll request up to the Consul servers to watch for things to change. Whenever a change is made to the configuration, they’ll render a new config file from a template (in the case of Consul Template) or, in the case of Envconsul, restart the application with environment variables set from the configuration values that came from Consul.

[00:42:45.28] You can basically get a [unintelligible 00:42:45.07] watcher that’s looking for configuration to change in Consul, renders out a configuration file and then restarts the service – or whatever action you want to get your service to reload its configuration. That’s a very common glue into existing applications: just render the config files directly from Consul using one of these tools, and have the service restarted when things change.
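The Consul Template pattern described above can be sketched in a few lines. Here the “K/V store” is a plain dict and the “restart” is a print; the real tool long-polls the Consul HTTP API and runs a reload command:

```python
from string import Template

# Template for the config file the application actually reads.
template = Template("db_host = $db_host\ndb_port = $db_port\n")

last_rendered = None

def on_kv_change(kv: dict):
    """Re-render the config; trigger a reload only if the output changed."""
    global last_rendered
    new = template.substitute(kv)
    if new != last_rendered:
        last_rendered = new
        print("config changed, reloading service:\n" + new)

on_kv_change({"db_host": "10.0.0.5", "db_port": "5432"})  # initial render
on_kv_change({"db_host": "10.0.0.5", "db_port": "5432"})  # unchanged: no-op
on_kv_change({"db_host": "10.0.0.9", "db_port": "5432"})  # triggers reload
```

The important property is the change detection: the application only gets restarted when the rendered output actually differs, not on every K/V notification.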

Charles Anderson:            [00:43:11.22] And similarly with the initial start, we need to run the template to generate the config file, then it’s safe for me to start my application, and it will just read the config file.

James Phillips:                     [00:43:23.24] Exactly. Or you can use the Consul Template process to generate that initial config and then restart the service, or let it start the first time. Consul has the concept of watches, too. You can register a script with Consul and say, “When anything changes with this service, or when a node is added, or when somebody touches this key, run the script and pass the information in as a JSON [unintelligible 00:43:49.18].” So there is the capability to have a reactive script. That’s another way to get at this information in an application that otherwise doesn’t know much about Consul.
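The watch concept James mentions – a handler invoked with the changed state passed in as JSON – can be sketched like this. The handler here is an in-process function rather than an external script, and the payload shape is simplified from what Consul actually delivers:

```python
import json

def handler(payload: str) -> dict:
    """A 'watch handler': parse the JSON Consul would pass in and react."""
    data = json.loads(payload)
    print(f"key {data['Key']} changed to {data['Value']}")
    return data

# Simulate the watch firing after somebody touches the key:
event = json.dumps({"Key": "config/feature_flag", "Value": "on"})
result = handler(event)
```

In real use the script receives the JSON on standard input, so an application that knows nothing about Consul can still react to changes.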

[00:44:01.11] Mitchell wrote a thing called consulstructure – you basically get a magical map that’s driven by data in Consul, piped directly into the brain of your Go application. There are different levels of integration, but it’s very common to use DNS and something like Consul Template to talk to an application that doesn’t have any knowledge of Consul at all.

Charles Anderson:            [00:44:28.21] I can see how that would be really helpful. For the record, when you speak of Mitchell, you’re speaking of Mitchell Hashimoto. We talked to him earlier on the show about Vagrant, if people want to go and check that out.

I wanted to ask another one of these turtles questions. So we’ve got a cluster of servers (3-5 servers). How do they start out and find each other? Is that another example of some seed knowledge that’s necessary?

James Phillips:                     [00:44:59.08] They use the same underlying cluster-join mechanism that’s based on Serf, but the member information for a server says it’s a server, and when an existing server sees a new one join, it adds it to the Raft quorum. There’s a parallel process to start up the consensus algorithm, but it’s based on the same event of “Hey, this node just joined, and it’s a server.” You generally want to run multiple servers. Consul lets you form a completely new cluster. There’s a configuration option called bootstrap-expect that lets you say, “Hey, there are going to be three. As soon as you see three, start up Raft and get going.”
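The bootstrap-expect behavior can be illustrated with a toy model: servers join one by one, and Raft bootstrapping is deferred until the expected count is visible (names and structure here are made up for illustration):

```python
class Cluster:
    """Toy model of deferred Raft bootstrap (not Consul's actual logic)."""

    def __init__(self, expect: int):
        self.expect = expect          # the bootstrap-expect value
        self.servers = []
        self.raft_started = False

    def join(self, server: str):
        self.servers.append(server)
        if not self.raft_started and len(self.servers) >= self.expect:
            self.raft_started = True
            print(f"saw {self.expect} servers, starting Raft")

cluster = Cluster(expect=3)
cluster.join("s1")
cluster.join("s2")
assert not cluster.raft_started   # still waiting for the third server
cluster.join("s3")                # third join triggers the bootstrap
```

Waiting for the full expected set avoids the split-brain risk of two partial groups each electing their own leader during initial cluster formation.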

[00:45:47.17] Once you have a cluster that’s already running, it’s easy to add more servers. If you want to go from a three-server to a five-server configuration, you can do that on the fly. There’s a process for reducing it, as well. In terms of your experience as a Consul user, the way you join it to the cluster is very similar. You’re just joining an agent that’s running in server mode.

Charles Anderson:            [00:46:11.03] Something you mentioned earlier – HashiCorp has a service called Atlas. I believe it has both a free and a paid-for tier. I think of it as a unification of a number of HashiCorp tools. Can you tell us a little bit more about Atlas?

James Phillips:                     [00:46:31.16] Yes, Atlas is HashiCorp’s paid set of workflow tools that’s built on top of our open source pieces. We’ve got a suite of products, Consul Enterprise is one of them; the other two are Terraform Enterprise and Vault Enterprise. There’s a web-based component to those, that lets you manage users. In the case of Terraform, you can see a change that’s going to be made to the infrastructure, have it reviewed by somebody that can apply it… The workflow layer is provided by Atlas.

[00:47:07.02] In Consul’s case, there’s a free piece of it called Atlas Auto-Join. What this lets you do is solve that initial turtle problem. You let Atlas be the turtle zero, at the base, so when you bring a new node up in your cluster, you can say “Here’s my cluster name, here’s my Atlas token. I want to join”, and it will give you some addresses to join your node to the cluster. It avoids you having to manage a fixed list of servers or a fixed IP list for that initial bootstrap.

[00:47:41.22] The paid portion of Consul Enterprise gets you features like a richer key/value editing interface – with organizational permissions for different parts of your key store and an interface through Atlas to view and edit those – and an alerts feature. You can take Consul’s health checking information and feed those events into systems like Slack, PagerDuty and HipChat. You get a hosted solution that lets you bridge onto those escalation and notification services directly from Consul.

Charles Anderson:            [00:48:17.17] Not to sound dismissive, but a typical open source strategy of most of the tooling is open source and you can run it yourself, but when you get into enterprise situations where they want things like access control lists and other complexities, that’s where the paid-for service comes in?

James Phillips:                     [00:48:39.12] That’s right. The management, the workflow, the auditing – those types of pieces are what Atlas provides, and it’s working in concert with Consul, which has an open source component. You could build a PagerDuty escalator thing yourself, or you could use Atlas; that type of strategy.

Charles Anderson:            [00:49:01.25] Makes perfect sense. As we wrap up, is there anything else you’d like to tell our audience about Consul or service discovery? I’m sure it’s a huge topic…

James Phillips:                     [00:49:14.10] There’s a pair of features worth mentioning that are pretty unique to Consul. We mentioned that, as part of the underlying Serf mechanism that underpins Consul, you have nodes randomly probing each other at periodic intervals to determine whether a node is still healthy, and that feeds into the catalog. If a node is not reachable at the Serf layer, then after a few confirmations that node will be declared dead, and the services that were on that node won’t be given out as healthy services to other nodes. That’s the basic, bread-and-butter capability those random probes are for. A neat side-effect is that you’re basically getting a randomized sample of the round-trip time on your network between all your nodes, in an automated fashion, all the time.

[00:50:05.19] In Consul 0.6 we implemented a network coordinate subsystem which uses a set of tomography techniques (measuring the interior of something from measurements taken at the edges). As Consul is doing those probes, it feeds the round-trip time measurements into a physics simulation, which ends up creating a set of coordinates for all the nodes in your network. Those coordinates can be used to calculate the round-trip time between any two nodes. So you get an automatic model of your network that lets you make interesting decisions about it.
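The physics-simulation idea can be sketched as a spring relaxation: treat each node as a point in 2-D space and nudge points so that euclidean distance approximates measured round-trip time. Consul’s real implementation is based on Vivaldi coordinates, with height vectors and adaptive error terms; the RTT values below are invented for illustration:

```python
import math
import random

random.seed(1)
true_rtt = {("a", "b"): 10.0, ("a", "c"): 50.0, ("b", "c"): 45.0}  # ms
coords = {n: [random.random(), random.random()] for n in "abc"}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for _ in range(2000):                         # repeated "probe" measurements
    (i, j), rtt = random.choice(list(true_rtt.items()))
    d = dist(coords[i], coords[j]) or 1e-9
    err = (d - rtt) / d                       # positive: points too far apart
    for k in (0, 1):                          # spring force along each axis
        step = 0.01 * err * (coords[j][k] - coords[i][k])
        coords[i][k] += step                  # pull/push the endpoints
        coords[j][k] -= step

est = dist(coords["a"], coords["b"])
print(f"estimated a<->b rtt: {est:.1f} ms (measured: 10.0 ms)")
```

After enough probes the coordinate distances settle close to the measured RTTs, and any two nodes can then estimate their RTT without ever having probed each other directly.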

[00:50:45.28] Building on top of that – one thing we didn’t talk a lot about is that Consul also lets you join different sets of Consul servers together, in what we call the Consul WAN configuration. You can have a Consul cluster in Europe, one on the East Coast of the United States, a cluster in Asia… Those server clusters can know about each other, and you can discover services from one data center in another one. What’s interesting is that the network tomography model can feed into that in a very automatic way.

[00:51:21.23] The second feature I want to talk about is something called prepared queries. In Consul you can look up a service locally; you can also define something called a prepared query. That can have a policy like, “If there are none available locally, try the next three closest data centers by round-trip time and see if you can find an instance in any of those.” That can all be done automatically with network tomography. By defining a very simple, three-line JSON thing and registering a prepared query, you can enable something like geo-failover. If your clients are using the DNS interface, you won’t even have to [unintelligible 00:51:56.19] your clients. It’s pretty awesome.

[00:52:00.22] You can say, “If I can’t find a read replica here, just try the next three data centers”, and Consul, by maintaining that tomography information, will be able to calculate, “Well, if it’s in San Francisco, I’m going to try the Asia one, but if it’s on the East Coast, I’m going to fail over to Europe.” Consul will manage that architecture on its own, and if a data center goes offline, you’ll get the next closest one after that. All of that can be managed and expressed in a really simple way, as a DNS query that your application does, which is pretty unique, when all the pieces of Consul come together like that.
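The geo-failover policy described above reduces to a simple ordering: given estimated RTTs from the local datacenter to its peers (which Consul derives from the network coordinates), try the N closest datacenters in order. The datacenter names and RTT values here are made up:

```python
rtt_ms = {"us-east": 70, "eu-west": 150, "ap-south": 220, "us-west2": 20}

def failover_order(rtts: dict, limit: int = 3) -> list:
    """Datacenters to try, nearest first - the 'next N closest' policy."""
    return sorted(rtts, key=rtts.get)[:limit]

def resolve(service: str, healthy_in: set, rtts: dict):
    """First datacenter in failover order that has a healthy instance."""
    for dc in failover_order(rtts):
        if dc in healthy_in:
            return dc
    return None

# The local lookup found nothing; fail over to the nearest healthy DC:
print(resolve("db-replica", healthy_in={"eu-west", "ap-south"}, rtts=rtt_ms))
```

Because the RTT model is maintained continuously, this ordering adapts on its own as network conditions or datacenter availability change, which is what makes the DNS-only client experience possible.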

Charles Anderson:            [00:52:35.20] That really drives home what you were talking about earlier, about other tools and having different pieces of the puzzle, and layers. We stack up enough turtles and we have a fairly awesome functionality there.

James Phillips:                     [00:52:52.16] Yes. My other thought was that the agent on its own can do some basic health checking without you really configuring anything, and that’s a good way to get a little experience with how it works before you’ve actually tied it to anything critical in your application. Things like the network tomography can give you some neat information without doing anything, without creating much network load or a lot of risk, and then you can build on other features over time. It’s pretty easy to get started.

Charles Anderson:            [00:53:24.20] How can people follow you, Consul, or the other open source projects from HashiCorp? Anything you give us we can put into the show notes, so you don’t need to spell out URLs or anything.

James Phillips:                     [00:53:37.26] I’m @slackpad pretty much everywhere (Twitter, GitHub). The HashiCorp Twitter account, @HashiCorp is good for getting our latest blog posts and announcements. It’s also @HashiCorp on GitHub; most of our open source projects are there. Some still live under Mitchell Hashimoto. We also have webinars pretty regularly, so you can watch for those. There is a lot of information on YouTube from past webinars, conference presentations and things like that.

[00:54:06.11] We have two conferences as well. We have an upcoming HashiConf EU in Amsterdam, and HashiConf US in Napa, California. Those are ways to get face time with HashiCorp engineers, and there’s a ton of open presentations, things like that. So there are a lot of ways to find out about the company.

Charles Anderson:            [00:54:27.21] That’s great. Thanks for your time, James. I’ve really enjoyed our discussion, and I hope our listeners have, too. This is Charles Anderson for Software Engineering Radio. Thanks, bye!

