Luke Hoban

SE Radio 482: Luke Hoban on Infrastructure as Code

Luke Hoban, CTO of Pulumi, joined host Jeff Doolittle for a conversation about infrastructure as code (IaC), which allows software development teams to configure and control their cloud infrastructure assets using code, in contrast to other approaches such as web interfaces or command line interfaces. Luke described how IaC allows teams to apply good software development practices and patterns to the provisioning and management of cloud infrastructure resources. Various related topics were explored in more detail, such as managing and applying IaC definitions, security considerations, compliance enforcement, and testing practices to employ when using IaC.

This episode sponsored by Linode.


Show Notes

Related Links

From the Show

From IEEE

From SE Radio

Transcript

SE Radio 00:00:00 This is Software Engineering Radio, the podcast for professional developers, on the web at se-radio.net. SE Radio is brought to you by the IEEE Computer Society and IEEE Software magazine, online at computer.org/software. This episode of SE Radio is exclusively sponsored by Linode. Linode makes cloud computing simple, affordable, and accessible. Whether you’re working on a personal project or looking for someone to manage your company’s infrastructure, Linode has the pricing, support, security, and scale you need. With Linode, you get consistent and predictable pricing across 11 global markets; 24 by 7 by 365 human support; rich documentation; and policies and controls to strengthen your overall security posture, allowing you to grow at your own pace. Users have consistently ranked Linode as one of the leading public cloud providers on both G2 and TrustRadius. Want to learn more about infrastructure as code and Terraform? Visit linode.com. That’s L-I-N-O-D-E dot com slash se radio, and download their ebook, Understanding Terraform, and other resources for free today.

Jeff Doolittle 00:01:06 Welcome to Software Engineering Radio. I’m your host, Jeff Doolittle. I’m excited to invite Luke Hoban as our guest on the show today for a conversation about infrastructure as code. Luke Hoban is the CTO at Pulumi, where he is reinventing how developers program the cloud. Prior to Pulumi, Luke held product and engineering roles at AWS and Microsoft. At Microsoft, Luke co-founded TypeScript, developed Go support for Visual Studio Code, and was part of the design teams for ECMAScript and C#. Luke is passionate about building tools and platforms to enable and empower developers, and is a deep believer in the transformative potential of the cloud. Luke, welcome to Software Engineering Radio. Thanks. Great to be here. We’ve had some shows in the past, and we’ll reference them in the show notes, that have talked about infrastructure as code. Just for listeners who maybe aren’t familiar with the concept: we might use the phrase or the three-letter acronym IaC throughout the show to refer to infrastructure as code. But let’s start with, from your perspective, a brief overview of what IaC is for listeners who are new to the concept, and also why it matters.

Luke Hoban 00:02:15 Infrastructure as code is really the idea that we want to take the cloud infrastructure that we’re managing and encode it as code: not just as something where we point and click in a console and create some stuff in a sort of ad hoc fashion, but where we write down in text, in programming languages, in software, in code, the way that we want our cloud infrastructure to run. And I really think about this as bringing the notions of software engineering into the space of cloud infrastructure. We’ve seen how software engineering principles, practices, and tools have enabled us in lots of other areas to scale up the complexity of what we can build and manage and get value out of. And we really want to do that same thing with the cloud: as we want to build more and more complex things that have more and more value, and get more and more benefit from what the cloud providers are offering us, one of the core things we need to do is bring some of those software engineering tools to bear on that.
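
To make the idea concrete, here is a minimal sketch of a desired-state program written with Pulumi’s TypeScript SDK; the bucket name and tags are illustrative placeholders rather than anything discussed in the episode.

```typescript
// index.ts: the program describes desired state ("there should be a versioned S3 bucket"),
// and running `pulumi up` compares that description to what actually exists.
// Assumes the @pulumi/aws provider is installed and AWS credentials are configured.
import * as aws from "@pulumi/aws";

export const assets = new aws.s3.Bucket("app-assets", {
    versioning: { enabled: true },   // a desired property, not a step to execute
    tags: { project: "demo" },
});

// Exported values can be read by CI jobs or other stacks after deployment.
export const bucketName = assets.id;
```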

Jeff Doolittle 00:03:06 So what are some of those specific software engineering principles or practices that you would highlight as beneficial in this kind of an approach?

Luke Hoban 00:03:15 Yes. I think some of the traditional ones, when folks think about just infrastructure as code as it’s traditionally been, even from the kind of Chef and Puppet and Ansible era to the sort of Terraform and Pulumi and CloudFormation era and that sort of thing, I think folks often think about the first piece being: hey, I can take my description of what I want my infrastructure to be, and I can write that down in text. I can put that in a file. I can put that file in source control. So you start to get some of these basic things, where I can describe my desired state, describe what I want, instead of me having to just describe a set of steps, where to click in a console or something like that. I can write it down in a format that I can version and I can version control.

Luke Hoban 00:03:48 That’s where infrastructure as code, I think, really started, and there are tons of value you get just from that aspect, right? That’s driven a huge rise in the complexity of what folks can manage. But I think, really, when I think about it, you know, I kind of joined the infrastructure as code movement, I’d say, after that phase had already been well established, and now it was sort of looking more at what are the other places where we can bring software engineering benefits in. I think it’s a whole bunch of additional things like, hey, what about versioning and packaging? What about testing? What about, you know, the ability to build reusable components? This is something that software is really great at, right? Instead of reinventing the wheel every time, we can kind of go and solve a problem and give it a name: give that function a name, give that class a name, put that in a package we use from NuGet or npm or something. How do we componentize? How do we create abstractions? And then how do we get IDE benefits so that we can productively work with the platform? How do we get all these other things that software engineering gives us? How do we bring all that into, you know, scaling up what we can do with cloud infrastructure? And so for me, that’s what IaC really is all about.

Jeff Doolittle 00:04:51 Yeah. So you’ve covered a bunch of things there, many of which we’ll get to in the show. And when you started out, I kind of sensed a distinction there between basically a declarative approach versus an imperative approach. You know, with the imperative, you’re literally saying provision this resource, if we’re talking about infrastructure, versus declaring: there is a resource, or there will be such a resource. And it seems like that’s one of the distinctions that you’re drawing, at least where things have moved in the IaC movement. And then it’s interesting that you related it to making it work with existing developer tools there at the end, as sort of a way to leverage maybe the knowledge that software teams already have as they’re adopting these kinds of approaches.

Luke Hoban 00:05:29 Yeah. I think both of those are great points. On the first one there, about kind of declarative versus imperative: one of the interesting things with IaC generally is I often think of it as, really, the core thing that IaC fundamentally is, is a desired state. And I don’t think about that purely as being about declarative or imperative. It’s more that the thing I’m describing is the desired state of the environment. And that’s particularly important because, unlike a normal piece of software that I might write, where I sort of run it and it ends at some point, and maybe the process dies or something like that and I start it up again, with infrastructure as code and cloud infrastructure, it’s there forever. If I say I want to have an S3 bucket, well, that S3 bucket is going to be there forever,

Jeff Doolittle 00:06:06 Effectively.

Luke Hoban 00:06:08 My infrastructure programs are things that describe the desired state. And when I make changes to those, it’s actually sort of changing the program that’s running the whole time, right? And so it’s sort of this interestingly different model for kind of how I think about it. And the fundamental part of that, I think, is that what you’re doing is you’re describing what’s the desired state I want now, and when I make changes to that desired state, how do I get from where I am to that desired state? And then I think, kind of in front of that, the feel of the experience of authoring infrastructure as code: I think there are some tools that really lean on a very, very declarative model. They sort of invent a DSL, or they use JSON or YAML or something like that, to have a very declarative-feeling interface to that desired state model.

Luke Hoban 00:06:49 But I think tools like what we work on at Pulumi, things like the CDK that AWS is working on, actually provide an imperative experience for users, where they can use a lot more of this richness of software engineering capabilities and software-based capabilities, but they’re still defining the desired state; they’re just using higher-level primitives to kind of do that in more expressive languages. And so I think, you know, for me, that whole spectrum of different ways that users might interact with infrastructure as code is really interesting. But at the end of the day, I think the defining characteristic there is: hey, it’s a desired state model. I’m describing what state I want my infrastructure to be in, and an underlying IaC tool is going to move my cloud infrastructure into that state.
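
As a rough illustration of that spectrum, the sketch below uses an ordinary, imperative-feeling TypeScript loop, yet the result is still a desired state that the engine reconciles; the environment names and config key are hypothetical.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Ordinary language features (config lookups, loops) build up the desired state.
const config = new pulumi.Config();
const environments = config.getObject<string[]>("environments") ?? ["dev", "staging"];

for (const env of environments) {
    // Nothing is created while this loop runs; the engine later diffs the
    // declared buckets against the real cloud and applies only the changes.
    new aws.s3.Bucket(`logs-${env}`, {
        tags: { environment: env },
    });
}
```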

Jeff Doolittle 00:07:25 That’s a great clarification, and I’m glad you mentioned the concept of desired state, because I’ve heard that as well, kind of in this realm, that that’s really what we want. And it’d be great if we could just will that out into the world as well, right? Out into the universe: here is my desired state, and it just happens. But you know, I guess we’re still working on that. We’ve seen some of how it matters, right? I mean, you mentioned other things like reuse, componentization, these kinds of things, which is all great stuff that we all really want to be able to benefit from, not having to reinvent the wheel all the time. So listeners are going to be all over the place: some are going to be doing existing applications and some are going to be building new applications. But give us a sense of, if somebody is interested in exploring the space, where do they get started?

Luke Hoban 00:08:02 The great thing about IaC is there are, you know, a ton of different on-ramps, right? Everything from, hey, let’s imagine that you’ve got a whole bunch of existing infrastructure that you have just kind of pointed and clicked together. You’ve gone into the console; you know, you had to get a project up and running as quickly as possible, so you clicked a few times, you created your bucket, you created your web app, whatever it was. And now you’re kind of realizing, oh, well, I don’t exactly know what I’m managing here. I don’t exactly know how to reliably recreate this environment, because I want a test environment that matches this; like, how am I going to go and create that? And so it’s probably around that point that you’re realizing, oh, this is where I kind of need infrastructure as code, or I need some way to describe what I’ve got, to repeatably recreate it, to understand when and how to make changes to it, that sort of thing.

Luke Hoban 00:08:41 You know, there’s a few things. One, there’s a lot of ways to go and export what you’ve got inside these cloud environments. Tools like Pulumi and a lot of the other IaC tools have ways to sort of import existing infrastructure into the tool; Pulumi has an import command, and there are similar things in a lot of other tools, where you can kind of go and take existing things you’ve built and bring them into that IaC environment and start managing them with your IaC, even if they didn’t start off in life managed by an IaC tool. There’s also a lot of folks who take the approach more of, hey, I’ve got my existing stuff, which I’m going to leave alone; I know what state that’s in. But I’ve got this new stuff I’m going to be building, or I’m extending my application to have some new components.

Luke Hoban 00:09:19 Maybe I’m adding in a new service, or I’m adding a new capability, a new caching layer or something like that. That new project I’m going to kind of use IaC for from the beginning, but I’m going to reference some of that existing infrastructure. Typically a lot of these IaC tools offer a lot of different ways to discover existing infrastructure, bind into that, and kind of use it from the new piece that you’re building using these tools. The key thing is really just learning how to use the IaC tools. And I think, you know, for Pulumi, we’ve got a getting-started guide at pulumi.com, and you can go there, you can go through a really quick, you know, five minutes to kind of have a simple application stood up in your cloud environment. And then you can go through that process of, hey, how do I use this tool to map to my problems, not just these sort of generic problems? If you’ve got any amount of sort of software engineering background, I think these tools tend to be, you know, pretty easy to pick up. Really, as with everything in the cloud, a lot of the complexity comes down to just all the value, all the concepts that you’ve got in that cloud, and all the tools you’ve got available to you to pick up off the ground and leverage. And that’s where you can use the IaC tools to help you explore and discover and sort of tame that.
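
As a sketch of those two on-ramps, assuming a hand-built VPC and bucket already exist (the IDs below are placeholders), a Pulumi TypeScript program might reference them like this; full adoption of a resource into management is typically done with the separate `pulumi import` CLI command.

```typescript
import * as aws from "@pulumi/aws";

// Look up an existing, hand-created VPC so new resources can attach to it
// without the program ever managing the VPC itself.
const legacyVpc = aws.ec2.getVpcOutput({ id: "vpc-0123456789abcdef0" });

// Read an existing bucket by its ID so outputs like its ARN can be referenced.
const existingBucket = aws.s3.Bucket.get("hand-made-bucket", "my-clicked-together-bucket");
export const existingBucketArn = existingBucket.arn;

// New infrastructure defined in code can now build on the old, unmanaged pieces.
const subnet = new aws.ec2.Subnet("new-subnet", {
    vpcId: legacyVpc.id,
    cidrBlock: "10.0.42.0/24",
});
```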

Jeff Doolittle 00:10:23 Yeah, and that’s great to hear that listeners can get started. Haven’t we all? You use whatever portal is available to you from the vendor, and then after a while you say, this isn’t very sustainable. You know, I’ve seen teams where, okay, I have to remember to push the buttons in the same order in production when we go live as I pushed them in dev and in test, and the odds of you getting it right are actually quite low; and the more complex the system is, the likelihood plummets. So the ability to adopt these things with existing systems seems really helpful. Now, there’s also more movement towards command line interfaces, which I’m a big fan of. I like the command line and being able to drop down to the command line. But maybe distinguish a little bit as well there, between: I’ve got the pretty GUI with Azure or Google Cloud or AWS,

Jeff Doolittle 00:11:08 And then I’ve got the command line, which is pretty powerful, but next thing you know, I’m writing bash scripts to automate my, you know, I can have mine. So talk a little bit there about to maybe where’s the line. I guess what I’m saying is I feel like the line is clear for me between the gooey and some kind of automation, but the CLS can serve the command line interfaces. They can kind of give you the sense of, well, I am doing infrastructure as code, but maybe distinguish those for a little bit when you’d say now it’s really time to graduate from just using the CLI and start using an ISE.

Luke Hoban 00:11:36 That’s a great point. There are really several different steps here. I think there is this sort of UI-driven thing where you’re just kind of pointing and clicking around. Then there are the bash scripts or the PowerShell scripts or whatever it is, that are just, you know, shelling out to AWS CLI commands or something like that, right? One of the troubles that that kind of scripting-based approach runs into is just that it is very fundamentally imperative, in the sense that I’m bashing on the cloud API:

Jeff Doolittle 00:12:03 I’m going to do this,

Luke Hoban 00:12:04 I’m going to do this, I’m going to do this. And the problem is, well, what happens if my environment ends up in a slightly different state, or if I make a change to how those scripts work and I need to handle the fact that I had an already-provisioned environment? I don’t get to provision my environment from scratch every time. That notion of, hey, can I move from one state to another reliably, is something that the tools built around sort of scripting kinds of solutions tend to not handle well. And that leads to fragility. It leads to, you know, fragility especially in the cases where, hey, there’s some operational issue and I’ve got to get some change out, but oh, my script doesn’t quite handle that approach yet. And so now I’m rushing to fix my scripts in the middle of an operational incident. It’s always

Jeff Doolittle 00:12:41 The worst time

Luke Hoban 00:12:43 In that early cloud and sort of pre-cloud, VM-based era, there was a lot of that, where, you know, the majority of what folks did in infrastructure management was these kinds of scripting approaches. And that’s still very, very common. But we find that for almost everyone who’s managing anything with a meaningful amount of complexity to it, and that’s practically anyone trying to really use the cloud today, those kinds of tools don’t scale super well, and they do need to have something that’s a bit more desired-state focused, that is able to take an existing piece of infrastructure and move it into the state they want. That’s a key distinction I think we see. There’s always going to be little bits of this scripting, you know, just any of those operational tasks also, which are things that are going to need to be imperative actions that you take. And you’re always going to have a little bit of that around the edges. But we find that the more folks can pull into sort of IaC tools, the more confident they feel about what they’re managing in the cloud.

Jeff Doolittle 00:13:35 So really at the end of the day, that’s going to come down to managing the complexity, which as you point out, as soon as you go cloud, you’re introducing orders of magnitude of additional complexity, unless you just, you know, run a single VM, in which case it’s really barely the cloud. And then you have to consider, maybe poking away at the GUI is good for exploring what’s available, but when it comes down to managing routers and VPNs and compute and storage and buses and all this stuff, then it starts to get pretty daunting pretty quickly. And I imagine one of the things IaC tools are trying to do, too, is create some amount of consistency. If every team has their own custom YAML or TOML file or JSON or whatever, and their own custom scripts, that also adds complexity that’s hard to maintain, and a burden of training, and the person who wrote it leaves and nobody knows what it does. So I imagine that’s also something that IaC tools are trying to help with as well.

Luke Hoban 00:14:27 Absolutely. It’s one of the interesting things I remember seeing when I came into this. Specifically, I came from a background working on developer tools, mostly for application developers, and kind of programming languages and IDEs and developer tools and that sort of thing. And coming into the cloud infrastructure space, you know, I was kind of amazed how much of what gets done is copy-pasting stuff around, right? If you went to an application development team and they said, oh yeah, we solved that problem by copying a couple of hundred lines of code from this code base over to that code base, everyone would laugh you out of the room, right? You’re not allowed to do that if you’re an application developer, right? You’ve got to, you know,

Luke Hoban 00:15:01 There’s not the standard. The standard is, oh, I’m going to make a use of a library. I’m going to find something on NPM that accomplishes this task. I’m going to use that instead of reinventing that wheel or copy pasting it out of that code base, we have lots of great tools available to us for how to do this. And so that notion of reuse of abstraction is sort of so deep in what we think about for application development and still that’s not in the cloud infrastructure space. We do see a lot of copy pasting. Here’s this 300 lines of code I wrote in a Yammel file over here. I’m just gonna copy paste that and put it over here as well. That’s good the first time maybe, but the hundreds times now you’ve made a change over here. All these things are diverging instead of having a hundred lines of code, imagine you’ve got 10,000 lines of code you’re managing and it just creates a ton of risk and complexity and management overhead. And so that’s where we sort of see, Hey, like these are the things we know how to solve. We know how to bring kind of software engineering tools and we know to bring program languages, IDs, packaging, those kinds of things in to help with these problems and to give folks the ability to kind of manage those problems as they scale.

Jeff Doolittle 00:15:59 That’s great. So you referenced before kind of almost like there’s two phases, maybe it’s not two, but there is this sort of early phase, early stage Chef, Puppet, Ansible, which are still around. And then there’s what you kind of referred to as this second phase, maybe: Pulumi, Terraform. So maybe tell us a little bit about what distinguishes those two timeframes in your mind, and then also what distinguishes, say, a Pulumi from a Terraform and those other players in this space as well.

Luke Hoban 00:16:26 Great question. Um, maybe I’ll actually split it into sort of my view on kind of three phases of this. In a sense, I think, you know, the first one is sort of pre-cloud or early cloud, where really most folks were managing sort of VM-based architectures, right, where they’re managing a couple of VMs. Typically, you know, it was very much the pets, not the cattle, and those VMs were the responsibility of the team; the work of the team managing that application was to run the right software inside that VM. And they were just responsible for all that. They wanted to make sure they did that reliably and repeatably, so they were using IaC; they were using things like Chef and Puppet to sort of orchestrate how they deployed things inside that VM. But it was all about code running inside a virtual machine, and fairly simple topologies outside of the virtual machine.

Luke Hoban 00:17:09 Maybe there’s a database in a single VM or a couple of VMs, but generally pretty simple infrastructure outside of that, and that’s mainly managed imperatively through VMware or something like that. That was sort of the first phase where IaC started to become a thing. I think then there was sort of the early cloud phase, where a lot of those patterns moved into the cloud and there were more API-driven capabilities inside the cloud providers. And so the networking and compute and everything could be provisioned by API. And so now you had Terraform and CloudFormation and ARM and things like that come about, as ways to put desired state in front of all of these new pieces of the system that were programmable. But still, the level of complexity of what folks were building there was typically not that high, right? It was typically, maybe I’ve got tens of things.

Luke Hoban 00:17:55 I manage. I’ve got my VM, I’ve got my auto-scaling group and some networking pieces, but I’ve got a fairly discrete set of things. They don’t change that often. I can be pretty intentional and sort of slow, in a sense, with how I manage these. I’d say the phase we’re kind of going into now, that a lot of organizations who are really leaning into the cloud are going into, and that I think really everyone is going to be heading towards, is what folks think of as modern cloud. And that’s sort of the containers, Kubernetes, serverless, these modern technologies that have been built to not just be in the cloud, but sort of be native to the cloud. And it’s not just lift-and-shift my VM architecture into the cloud and get some API-driven capabilities. It’s, hey, I take a totally different approach to the architecture of how I build and deliver my software to take advantage of what the cloud provides.

Luke Hoban 00:18:40 And the interesting thing with those architectures is they tend to lean a lot more heavily on composing a bunch of managed services, either managed compute services like with serverless, or managed data services, or managed sort of fully baked services, whether it’s like ML services or something like that. And what happens there is that a lot of the complexity ends up moving. I no longer have to run that VM myself and put the software in that thing myself and manage that. And so some of that complexity that I had to manage before goes away, and that’s an operational burden off of me, and that lets me move a lot faster and get to market quicker. Where the burden moves, though, is actually into the edges between all these services, right? The way that I connect S3 to ECS, to Lambda, to Redshift. A lot of the complexity is now in how do I compose all of these pieces of value

Luke Hoban 00:19:28 I’ve got in the cloud. And it’s that complexity that IaC is all about. It’s about how do I connect all these things, right? That’s what cloud IaC is about. For things like Pulumi, that’s the really interesting place where we’re kind of going: we’re actually benefiting a ton from all these managed services, from all these modern cloud technologies, and it’s net reducing the complexity; we gain a lot by taking all this operational burden off of us. But the one place where it’s sort of pushing some of that additional complexity is into the configuration of how all of these things are connected together. And so that’s causing the need for more and more infrastructure, and more and more scale in the infrastructure as code management. That’s where we see folks looking for tools like Pulumi: hey, I’m just realizing that to build this modern cloud solution, I need to manage hundreds, thousands, tens of thousands of cloud resources. I need those to be versioned and changing every day or every few hours. This is a software problem. This is not just an infrastructure problem; this is a software problem, and we need software tools to solve it.

Jeff Doolittle 00:20:27 No, that’s a good perspective. And you know, you mentioned complexity. It reminds me of Larry Tesler’s law of conservation of complexity, which we’ll reference in the show notes, which basically says you never get rid of complexity; you just move it around and hide it, right? The reason you say your car is easy to use is because all the complexity is behind the dashboard and under the hood. In a similar fashion, right, we’re not getting rid of it, but we see it shifting and moving into places that now need to be addressed in different ways.

Luke Hoban 00:20:52 I think the great thing also is this complexity isn’t coming out of nowhere, in a sense; it’s coming because folks are getting more and more value out of the cloud. The building blocks are so valuable, right? They’re letting them deliver more quickly, deliver more reliably, deliver more securely. So folks want to go and take advantage of all this stuff, but the one cost of that, as they extract that value and as they try to extract more and more value, is just that there is more complexity. There is more stuff that they’ve got to connect up together.

Jeff Doolittle 00:21:16 Absolutely. Maybe talk a little bit now about what distinguishes those. We talked kind of about the history and sort of where things have been, where they’re going. So what distinguishes maybe the different vendors in this space? How do they approach IaC? What do they emphasize? Different tooling, things of that nature?

Luke Hoban 00:21:35 The one probably most consistent thing for anything that sort of thinks of itself as an IaC tool is this notion of kind of having a desired state model and describing the state of the world. And then within that, I think there are some of the traditional tools that were really focused largely on in-guest OS provisioning and that kind of thing. Then there’s sort of another set that’s really focused on cloud management, so managing cloud APIs and the configuration of different cloud services. And so I think that’s one place where there’s some differentiation: you’ve got Chef and Puppet on one side, and you’ve got your Terraforms and CloudFormations and ARMs on the other side. Within that second camp of kind of managing the cloud, I think there’s one big category that’s sort of about what is the interface to that.

Luke Hoban 00:22:13 You know, we talked about this earlier is the interface of added declarative model or kind of a software based model. And, you know, there’s a set of tools like, you know, CloudFormation and arm obviously using Jason and Yammel as kind of a way to describe that fairly limited domain specific languages really meant to be about more treating the infrastructure as data than treating it, this software in a sentence. And that that’s good at small scales, but that maybe, you know, it’s more difficult at very large scales. And then you’ve got tools like Terraform, which have their own domain specific language. It’s not quite Jason enamel. It’s a bit more expressive than that, but it’s also not quite as expressive as traditional kind of application development languages, and then tools like that I work on. And the eight of our CDK, these are tools that let you use existing programming languages, that you may already be familiar with, TypeScript Python, go.net and use those environments to manage the desired state of your infrastructure. And so for a lot of application developers, both folks who already have, or understand that their scale is such that they need these tools to manage complexity, or they have that background of working in one of those software ecosystems, and they want to continue to leverage the test tools and the packaging tools and the componentization tools, ID tooling that they’re used to those tools can be great for them. So those are sort of some of the big categories that I think kind of differentiate some of the existing tools

Jeff Doolittle 00:23:33 And back to what you mentioned before about adopting good software engineering principles here: with that approach you kind of mentioned at the end, it seems like it fits better in a productive developer flow, where, well, we send in pull requests and we do code reviews and we test these things and we do security audits on them. It seems like that approach can maybe kind of dovetail with that. I mean, that’s kinda the point, right?

Luke Hoban 00:23:56 I think a lot of teams already have good software engineering practices around their, you know, their .NET code bases and their TypeScript code bases and what have you: the things you were just talking about, the pull request workflows and that sort of thing, but also the testing, the continuous integration. They have a package manager somewhere, maybe it’s Artifactory or something like that. They have all these things set up to kind of manage their software for the application side. With this class of tools like Pulumi, you can kind of go and reuse all those, and teams are more likely to go and sort of reuse those and say, hey, I think I should be testing my infrastructure, because I know how to test using the tools that I’ve got available to me in C#. How can I apply that now to my infrastructure, because I’m doing that in C# as well?
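
A minimal sketch of what “testing my infrastructure with the tools I already know” can look like, here with a Mocha-style test and Pulumi’s runtime mocks so no real cloud is touched; it assumes the infrastructure module exports a bucket named assets, and the mock signatures reflect recent SDK versions.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as assert from "assert";

// Mocks must be registered before the infrastructure module is loaded, so resource
// constructors return fake IDs instead of calling a cloud provider.
pulumi.runtime.setMocks({
    newResource: (args) => ({ id: `${args.name}-id`, state: args.inputs }),
    call: (args) => args.inputs,
});

describe("app-assets bucket", () => {
    it("has versioning enabled", async () => {
        const infra = await import("./index");   // loaded after mocks are in place
        const versioning = await new Promise<any>((resolve) =>
            infra.assets.versioning.apply(resolve));
        assert.strictEqual(versioning?.enabled, true);
    });
});
```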

Jeff Doolittle 00:24:36 From that, let’s segue then into: okay, I’ve decided to adopt IaC, whether it’s a brand new system or an existing system that I’m gonna kind of do side by side or convert, whatever it might be. So now if I’m adopting this approach of code that represents a desired state of my infrastructure, what are the ways people manage these definitions? It’s one thing to say, well, you write code; okay, you write code, but are there distinctions maybe between standard engineering practices? Are there also similarities? How do people start to think through: okay, I’m going to do this, how do I organize and maintain the set of descriptions of desired state?

Luke Hoban 00:25:14 It’s the same way that a lot of application development might start: hey, you know, I might start with a very simple program where I’ve just got kind of one code base, and it’s got all the logic for what I want to build out. But over time I might realize that there are some pieces of this that are actually reusable components, and that it’s actually easier for me to understand what I’m doing here if I take this subset of my application code, factor it out, give it a name, give it a nice interface. And now it’s easier for folks to both understand and test and validate that piece, but also to more easily understand what the kind of full application is doing, because it’s now framed in terms of higher-level building blocks put together. And so I think the same sort of thing really happens with infrastructure as code, where people will generally start by just kind of building out their infrastructure, writing it down as code in kind of just a flat format. As it grows, as they evolve, as they understand where the sort of versioning boundaries are, where the sort of abstractions are in what they’re building, when they stand up their third kind of microservice or something, they realize, oh, well, a microservice is actually this set of five things, right?

Luke Hoban 00:26:14 So maybe I should give that, that notion a name. And now I can sort of make improvements across all of those pieces if I, if I do it in one place. And so we see that sort of a lot of that same evolutionary path in terms of how do I manage that code base tends to happen. And, and folks who kind of come into these problems with a bit of an engineering mindset, tend to be able to apply those tools and tend to be looking for those opportunities to sort of simplify and modularize and componentize the applications. So I think that’s one aspect of it is sort of the code management and how, how a code base evolves. That’s one that’s really important. Then maybe we’ll talk more about it later, but bringing in some of these other tools like testing package, those sorts of things, as it gets more and more complex, we have ways to really lock in the behavior of different pieces of this version of the behavior of pieces of our infrastructure.

Jeff Doolittle 00:26:59 Yeah. Now it’s infrastructure as compiled packages that you deliver, which we will definitely get more into as we go. But speaking of that sort of thing, if I’m doing infrastructure as code and I’m in, you know, an environment like Java, maybe I’m using Maven; in .NET, I’m using NuGet; in TypeScript, I’m using npm; whatever it might be. Am I typically using just what the vendor that I’ve chosen has given me, or am I also leveraging other existing packages out there to help sort of add to what I can do with IaC?

Luke Hoban 00:27:29 Yeah. I think that’s one of the really exciting things. I think, you know, the sort of answer is both, but I think that’s also something that’s shifting more and more now. With these IaC tools, you have access to the raw building blocks that the cloud providers provide, right? So sort of everything available in AWS, everything available in Azure, everything available in Cloudflare; you kind of have the raw API that the cloud providers offer, you know, so you can do everything. But those tend to be low-level building blocks, right? And for a lot of folks, going and figuring out again from first principles how do I connect, you know, all these different pieces I need together, there’s a lot of people reinventing the wheel on how to build the best-practices VPC, right? And so then there are sort of libraries that can be built on top of that, which are that best-practices VPC: hey, I don’t need to go and put together those 15 building blocks myself every time, I can just go and use this higher-level building block that has built into it, with some coarse-grained

Luke Hoban 00:28:19 Parameterization, here’s a way to sort of get the most common patterns for how to configure a best practices, VPC as documented by AWS. And now I can, you know, for most purposes, for the common 90% case, I can just use that and write a few lines of code and I get all the infrastructure I need. And so those kinds of libraries, whether it’s libraries that you build as an end-user, or is it for your organization that capture your best practices or whether it’s part of something that’s available in the open source or from a vendor as part of a high level library they create. I think that’s a place where we’re seeing more and more of that. We usable infrastructure libraries being built out. Now I’d say it’s still relatively early for that, but I think there’s going to be a lot of progress. I just, as the complexity folks are trying to manage, keeps to go up, keeps going up, we’re going to see that continue to be an area where that ecosystem grows and stuff like we’re doing a lot of work to try and kind of build out that ecosystem around kind of the model.

Luke Hoban 00:29:15 But it’s something that we’re also plugging into all the other existing ecosystems, whether it’s helm charts or CloudFormation templates or arm templates or other things where there might be ecosystems of reusable components

Jeff Doolittle 00:29:27 That could go to some really interesting stuff, and we’ll talk more about the future at the end, but I’ve got a bunch of synapses firing right now thinking about, well, I want a template that’s going to let me run my system with these components. You know, what happens if I run it on, you know, Postgres with SQS? What happens if I run it over here with, you know, Mongo and Azure Service Bus, or whatever? Just, I don’t know, run both: what do they cost, how do they benchmark, how do they compare? And it seems like that’s the sort of thing we might even be moving more into as we can automate these things and define them in more consistent, templatized kinds of ways, which could be really, really interesting.

Luke Hoban 00:30:04 You kind of took it even one step further, in a few ways that I think are really interesting. You know, there’s sort of building reusable libraries that just take a single pattern on a single cloud provider and give it a name, but there are a lot of patterns that can make sense across multiple cloud providers, right? Something as basic as run a Postgres database in a Linux VM: hey, we can run that on any cloud. So maybe we can build that, you know, component as an interface that I can sort of develop to, and now I can instantiate that on any cloud provider, right? Because while there might be some performance differences between the cloud providers and those kinds of things, just like I can run Java or .NET on a variety of different platforms, I can in theory run my cloud application on a variety of different platforms. And so that kind of abstraction capability also starts to get into how we can think about multi-cloud.

SE Radio 00:30:54 Visit linode.com/se radio and see why Linode has been voted the top infrastructure as a service provider by both G2 and TrustRadius from their award-winning support offered 24 7 365 to every level of user to ease of use and setup. It’s clear why developers have been trusting Linode for projects both big and small since 2003 Linode offers the industry’s best price to performance value for all compute instances, including shared dedicated high memory GPU’s and their upcoming bare metal release Linode makes cloud computing simple, affordable, and accessible, allowing you to focus on your customers, not your infrastructure. Want to learn more about infrastructure as code and Terraform visit linode.com. That’s L I N O D E slash se radio and download their ebook, understanding Terraform and other resources for free today.

Jeff Doolittle 00:31:42 So I’ve got my IaC: we’ve got a repository, we’ve got our definitions in there, we’re starting to get this thing going, and now I need to actually apply these definitions. So talk a little bit about that aspect of IaC. I imagine maybe at first there are some manual aspects to it, maybe you then can begin to automate some of these things, trying to move towards full automation. And in that context too, I’m very curious, given the fact that engineers don’t ask, how can we make this thing work; engineers ask, what’s going to break, what’s going to go wrong. So when we’re talking about applying these things, things like monitoring, fault tolerance, and recovery are going to be really important as well. So give us kind of a flavor of: I want to adopt this, I’m starting to learn it, but now I need to actually do this stuff out in the wild. How am I going to do that effectively?

Luke Hoban 00:32:30 There are a few different layers to this, and I think a lot of what I’ll talk about is sort of true for most infrastructure as code tools. The last step maybe is something we’re doing some interesting work on with Pulumi, but the first step is really: I’ve written some new desired state and I want to deploy that. I want to say, hey, I want to make that desired state be the real state that I’ve got in the cloud provider. And I’m going to do that, you know, using a CLI command; for example, I’m going to run, you know, aws cloudformation deploy or what have you, and I’m just going to say, whatever code I’ve got, I want to make that the truth, right? So that kind of CLI-based workflow is really the starting point that most folks use for getting started with infrastructure as code: I’m going to use a CLI to deploy that infrastructure.

Luke Hoban 00:33:09 And I’m going to sort of interactively understand what the state of that deployment is, and I’m going to see sort of the status of what’s deploying. If something goes wrong, to your point, I’m going to understand that sort of synchronously during the deployment, because I’m watching that CLI output, and I’m going to go and take some corrective action based on what that error message is, or whatever. I’m going to fix up my desired state, if it was actually an invalid desired state and that’s why it failed; I’m going to fix it up and then rerun that deployment and move to that new fixed-up state, what have you. And so I think that’s where a lot of folks start; that’s where IaC traditionally has kind of focused. That’s a good starting point. What teams working in production environments want, though, is they don’t want to rely on some operator sitting at their machine running commands that are modifying the cloud, right?

Luke Hoban 00:33:53 Uh, they, they want to move that into cross shared processes that are, you know, typically a sort of was delivery process or something like that, where changes to infrastructure are pushed through kind of a delivery pipeline. Like they’re driven by a good process where you to push that code into a particular branch. And I just get the PR and the PR is going to show a preview of what changes might get made to the cloud provider. And someone’s going to review that and say, yes, I accept. And this is okay to deploy. And now when we deploy that, we’re going to sort of see that process to a continuous delivery pipeline. And so that’s one of the great things we can take all of that, those same tools that we used on our desktop. We can move those into a CICB kind of environment and take advantage of all that sort of continuous delivery infrastructure that we’ve got, but use it not just for application delivery, but for delivery of changes to our infrastructure itself.

Luke Hoban 00:34:42 So that’s where we see, you know, most serious users today are kind of doing that: they’re putting their IaC tools into continuous delivery. The last step is one that we’ve been working on at Pulumi, something that we call the Automation API. And this is a thing where we actually saw that a lot of the teams we’re working with, who really are software teams trying to manage cloud infrastructure, were looking to go beyond just CI/CD and kind of build their own custom software solutions that manage the delivery of infrastructure changes, so that they could really deeply customize what steps happen in between different parts of the cloud infrastructure changes. They could offer that as software, software that they version. And so the Automation API was sort of a way to embed the deployment of cloud infrastructure into your own software solutions. That could be, hey, I’m building my own sort of self-service provisioning portal where users can come and say, I want an instance of this thing.

Luke Hoban 00:35:35 And so that’s going to go and kick off a deployment of that thing, or I want to change this configuration setting or whatever, and they can build up that application as a piece of software, and it’s going to drive a whole bunch of infrastructure to the group processes on the backend. I see that being kind of a next wave of like how we drive infrastructure delivery is going to be these sort of custom software solutions that folks build that aren’t just a CITT workflow, but are even more to the metal automated solutions. And we’re seeing, you know, we’ve seen a ton of folks doing really interesting things with that automation API. And I think that’s something we’re going to see in a lot of other places. We also see variants of that in the Kubernetes space, which the operator models and that sort of thing, where folks find really interesting extensibility models into, into Kubernetes that are kind of based around some of the same ideas of software driven automation of infrastructure changes. And so that’s a trend that’s sort of more nascent, but is going to be really interesting.

Jeff Doolittle 00:36:29 And really that triggers for me an even bigger concept. Maybe listeners are familiar with some of the workflow engines that are out there; like I know Netflix has Conductor, there’s Apache Airflow. And speaking of monitoring, fault tolerance, recovery, these kinds of things, this is the first time I’ve thought of those tools in this kind of a context. Typically I’m thinking of those in terms of a workflow for a business process, for a customer that we’re trying to build software for. But it seems like you could take that Automation API as you’re describing it, and maybe it already does some of these things, or maybe it needs that bigger kind of workflow engine. But now there’s that ability to say, yes, I have my desired state, but steps could happen that could go wrong, I need to retry, or whatever it might be. How do you see that stuff kind of playing out as well, taking this kind of workflow automation into these sorts of things?

Luke Hoban 00:37:17 Beyond those sort of more generic workflow solutions, the kinds of things you mentioned that, as you say, are mostly about kind of business processes and that sort of thing, I’d say there’s a set of these being developed over the last couple of years in the market that are more focused on DevOps kinds of scenarios; things like Relay, things like Fylamynt, are doing some interesting stuff there. There’s a set of technologies I’ve seen kind of being developed that do try to apply some of those same ideas, but to some of the specific nuances of kind of the DevOps workflows, and then automating those things. And a lot of the time, things like infrastructure as code deployments are steps in those workflows, but to your point, there’s a more complex workflow that needs to be managed on top of that. And supporting those kinds of tools is one of the scenarios for things like the Automation API.

Luke Hoban 00:38:00 But I think those, those taking that another step further where they build sort of declarative processes on top of how we manage that. And I’m not sure exactly where that class of things will go. I think that’s also reasonably early, but I do think capturing as many of these processes in software of some kind is something that’s just a trend that, you know, is, is always going to be there, right? If we can capture workflow processes and repeatable software based solutions, that’s going to help us to scale those processes. And that’s something that’s true across every industry.

Jeff Doolittle 00:38:28 Some things, it depends on the nature of your business, the nature of your use case; some things are always going to be easier, and some things are always gonna be harder, depending on what you’re doing. And one of the difficulties, I think, as we start moving more into some of the depths, I think of testing and things of this nature with IaC. What’s really nice to be able to do is to basically stand up a near-production environment with, I’ll call it, production-quality data. You might have to sanitize PII, personally identifiable information, these kinds of things. Also, that’s tricky if you’re Uber; you don’t get to do that because you have way too much data. But can IaC help, if maybe I’m not Uber, and most of our listeners aren’t, with that kind of thing, where I can actually say, I want to hit a full production environment with production-quality data, and when I confirm it, flip that over, in kind of a blue-green deployment sort of an approach?

Luke Hoban 00:39:20 Yeah. I think there’s a whole bunch of sort of different patterns there that are interesting, but I do think that sort of IaC solutions generally can be a big part of it. I think one of the first things along that path is just, hey, the ability to even create another copy of my environment, right? And if I’ve written down the description of what my infrastructure is, I at least have some hope that I can recreate it, right? Whether that’s just to create a one-off staging environment that I can use, or whether that’s so every developer can stand up their own version of the environment. Or, you know, we do this internally for some of our stuff, and we’ve seen a lot of our customers do interesting things around this, where every PR that they open is going to stand up a whole new copy of the infrastructure.

Luke Hoban 00:39:57 And that copy is going to go away when the PR is closed. Right. But that know it could be either the whole infrastructure. It could be a subsidy infrastructure, but they use infrastructure as code as a way to describe what is that thing. I’m going to stand up a copy on every one of these instantiations and what I’m going to parameterize that via some settings that maybe the user can control, but at least I have that repeatable process where I can go and stand this up and tear it down. As soon as you have that ability to stand up your infrastructure, again, that opens up so many interesting possibilities opens up, staging environments opens up per developer environments. It opens up these sort of review apps, kind of style PR things. It opens up testing abilities. Now my tests can stand up an environment, run a bunch of tests against it and tear it down to test against, against a production environment.

Luke Hoban 00:40:45 But I think to your point, it’s not just the infrastructure there, right? It’s how do I get the data seated into that thing? How do I get, you know, there’s a few other things I might need to do to sort of get that environment to really represent something production like enough so that I can sort of validate what I need to validate. Right. And I think that’s where one of the interesting things with some of the more modern infrastructure’s code tools is they really give a lot of flexibility for how you intertwine kind of management of infrastructure with taking specific actions. And so if you need to run database migrations or do seeding of your database, you need to sort of intertwine some actions into that equipment. There’s a lot of tools to sort of reach out of the kind of pure, desired state model and into, Hey, I’m going to write some custom code here that goes and runs a command against, you know, I just stood up my SQL instance in the cloud on now I’m going to run some commands to sit, imperatively put some data into that thing to seed it, or to pull that data over from some other environment where I already did that data cleaning, whatever.

Luke Hoban 00:41:44 So I think like a lot of these IC tools, even though their primary focus is around managing cloud infrastructure. There’s a lot of ways to go in, break out of that and sort of runs and custom logic that can help you to define the pieces of your environment. That part just managed by a

Jeff Doolittle 00:42:00 Now let’s say I create a desired state definition, and let’s say I have two environments running and they’re actually different for whatever reason; set aside the arbitrary nature of my setup here. But I’m curious, when I’ve defined this in a tool like Pulumi or Terraform, is it creating a diff of my desired state versus actual state? Or do I have to actually make sure that whatever desired state I’m defining is already pegged to a particular state? Sort of like database migrations: I can define a new database, and then the migrations are going to be specific to the state of the actual database. But it’s nice if I can just declare the state I want and say, well, database A is a little different than database B, but the tool just figures out the right migrations for both. How do the tools kind of help with that sort of thing? The IaC tools?

Luke Hoban 00:42:47 Yeah. In general, you know, a lot of these tools have the ability to that. They do keep track of what state the environment is in just because it’s, it’s typically expensive to try and recover all of that state from the cloud providers, they manage their, their own view. And so then they can compute, Hey, this is the Delta I need to apply. They typically have options for refreshing that data, pulling that data out of the cloud provider and understanding what that drift is between what’s the desired state that I understood that the world to be in based on the last time I did a deployment and what’s, what’s the actual state of the world. And so we can then go see that diff see what has changed and then decide whether we want to kind of drive that back into the desired state that, that I’m managing or else I can make changes to my desire to take, to say no, that wasn’t intentional change.

Luke Hoban 00:43:29 My desired state is just wrong. Maybe somebody went in and said, I want four instances that have two. And I haven’t yet changed my desired state. And I don’t want to undo that. I want to actually just represent that back in my state. So then they can go make that change. Now, when they do that, refresh it, won’t say there’s a diff and everything will be okay. So there is that process of sort of reconciling those two things and understanding when drift has happened and then sort of taking action to correct that either in the real estate or,

Jeff Doolittle 00:43:55 Well, I think that’s a good segue to talk some about security in the infrastructure as code world. You know, in an ideal world, nobody can touch your database, your migrations are pristine, and they’re always working against the state you expect. In the real world, somebody can get an SA password and bypass all that; somebody can go to your AWS portal and spin up a new instance of something or mess with things. There’s a couple aspects to security here as well. I mean, I guess one of them is security around who can create the definitions, but I think that’s pretty straightforward, standard software engineering practices. I think more interesting is: how do you check the security of the infrastructure that you’re wanting to stand up? That’s one. And then also dealing with compliance: who can deploy what infrastructure. You know, somebody says, I want a K8s cluster that’s the biggest K8s cluster that’s ever been built, and it’s going to cost a million dollars an hour. Okay, we’ve got to deal with that too. So security, kind of like, how am I deploying secure cloud systems and how can IaC help with that? And then also security around: you can’t do that.

Luke Hoban 00:44:56 Yeah. So obviously obviously an interesting balance and a bunch of directions, cause you know, security piece is really important, but then there’s also, the everyone wants to provide the sort of developer velocity and how do we empower folks to have as much capability as they should have without having any of the capability they shouldn’t have? And that’s a high balance. Cause sometimes those are really conflicting

Jeff Doolittle 00:45:14 And give your kids all the freedom they can handle and no more

Luke Hoban 00:45:18 Exactly, it’s sort of the same concept. I’d say a couple of things. One that I think is a starting point for me is that the cloud providers have an amazing set of capabilities built in around security and managing security. The starting point is really understanding and taking advantage of all the primitives that are available at different levels, whether it’s at the IAM level, the network protection level, or the account level. There are a whole bunch of layers where you can apply different security practices that are typically built into these cloud APIs at a pretty deep level, and they give you incredible building blocks to work from. So the first answer is that infrastructure as code lets you use all those building blocks, right? A lot of what you’re doing when you’re managing security practices as code isn’t necessarily specific to the infrastructure as code tool.

Luke Hoban 00:46:05 It’s about taking advantage of all those primitives and building blocks and baking the security practices into the infrastructure patterns you’re using. As part of that, we talked earlier about creating best-practices, reusable components; well, of course those reusable components can have security best practices baked into them as well. So I can capture those best practices once, have everyone in my organization use the same thing, and if I realize there’s some hardening I want to do around that component, I can do it in one place and everyone else benefits from it. It’s the same thing that helps with application security, where I can rely on the building block I’m building on top of being secure by design.
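
As a rough illustration of that kind of reusable, secure-by-design building block, here is a minimal sketch of a Pulumi component in TypeScript. The component type token, the names, and the specific hardening choices (private ACL, versioning, KMS encryption) are illustrative assumptions, not a prescribed standard.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// A hypothetical reusable component that bakes security defaults into every
// bucket the organization creates: private ACL, server-side encryption, and
// versioning. Harden it once here and every caller benefits.
export class SecureBucket extends pulumi.ComponentResource {
    public readonly bucket: aws.s3.Bucket;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("acme:storage:SecureBucket", name, {}, opts);

        this.bucket = new aws.s3.Bucket(name, {
            acl: "private",
            versioning: { enabled: true },
            serverSideEncryptionConfiguration: {
                rule: {
                    applyServerSideEncryptionByDefault: { sseAlgorithm: "aws:kms" },
                },
            },
        }, { parent: this });

        this.registerOutputs({ bucketName: this.bucket.id });
    }
}

// Usage: every team gets the hardened defaults without re-deciding them.
// const auditLogs = new SecureBucket("audit-logs");
```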

Luke Hoban 00:46:44 And so now I get all those benefits. That’s really the core thing. There are some things at the infrastructure as code layer that I think are really interesting around security, and there are two pieces I think of. One is the automation piece we talked about: moving away from, hey, a developer on their machine just goes and runs some command. If the environment you’ve got is one where a developer is copy-pasting production credentials onto their machine and running a command, eventually you’re going to run into problems, right? The security of your deployments is going to have some issues. So folks typically move to creating continuous delivery pipelines or something like that, with privileged agents capable of doing deployments into the production environment, but with a whole bunch of checks in place, whether that’s manual gating approvals or code reviews or what have you, to make sure that only things which have been approved, validated, and run through a preview, so we know what kind of changes they’re going to make, actually get deployed.

Luke Hoban 00:47:36 And so the reviewer has had a chance to see that ahead of time. There’s a bunch of process and validation, in the same way that, hey, if I’m building a mobile app and I want to ship it out to the world, I’m going to put a whole bunch of gates in place to make sure that only a build that actually works, that has been validated, and that doesn’t have some Trojan horse built into it goes out. We can do the same thing in our continuous delivery, and infrastructure as code works very well inside those environments: it helps me understand those previews, helps me understand what’s changing. And then the last piece is policy, and that’s something that a lot of the infrastructure as code tools offer; we offer it for Pulumi.

Luke Hoban 00:48:16 You have a way of describing policy associated with infrastructure: things that have to be true about my infrastructure for it to deploy correctly. So I can have my compliance team and my security team build out a set of policies that apply to any infrastructure I deploy, or maybe any infrastructure deployed into my production environment, and say, hey, you cannot open up a new public IP address in this VPC without getting an approval for that, or you cannot make an S3 bucket public, that sort of thing. We can put this policy in place and now these tools will enforce it on any deployment that happens in that environment. With Pulumi, for example, we can actually use software to describe our policy as well, so we can build reusable policy abstractions.

Luke Hoban 00:49:03 We can build arbitrarily rich ways of doing it. For costs, for example, I can go pull down the AWS price sheet, compute the cost of everything, and say, hey, when you tried to deploy this giant Kubernetes cluster, your cost increased by a million dollars an hour, so you probably don’t want to deploy this. Put a big red mark in the deployment that’s going through that continuous delivery pipeline and warn the reviewer that, hey, you probably shouldn’t approve this because it’s going to increase your costs.
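
A policy along those lines might look roughly like the following sketch using Pulumi’s policy SDK. The policy names, the specific checks, and the crude “giant instance” heuristic standing in for a real cost calculation are all hypothetical.

```typescript
import * as aws from "@pulumi/aws";
import { PolicyPack, validateResourceOfType } from "@pulumi/policy";

// A hypothetical policy pack: block public S3 buckets outright, and flag
// obviously oversized instance types for a reviewer to look at.
new PolicyPack("org-compliance", {
    policies: [
        {
            name: "no-public-s3-buckets",
            description: "S3 buckets may not be publicly readable.",
            enforcementLevel: "mandatory",
            validateResource: validateResourceOfType(aws.s3.Bucket, (bucket, args, reportViolation) => {
                if (bucket.acl === "public-read" || bucket.acl === "public-read-write") {
                    reportViolation("S3 buckets must not use a public ACL.");
                }
            }),
        },
        {
            name: "no-giant-instances",
            description: "Warn on instance types likely to blow the budget.",
            enforcementLevel: "advisory",
            validateResource: validateResourceOfType(aws.ec2.Instance, (instance, args, reportViolation) => {
                // Stand-in for a real cost calculation against the price sheet.
                const instanceType = String(instance.instanceType ?? "");
                if (instanceType.endsWith(".24xlarge")) {
                    reportViolation(`Instance type ${instanceType} is unusually large; confirm the cost.`);
                }
            }),
        },
    ],
});
```

Run against a deployment, a mandatory policy fails the preview outright, while an advisory one surfaces a warning for the reviewer in the pipeline.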

Jeff Doolittle 00:49:29 And it’s calculating that before you actually run it, right? I think that’s kind of what you’re saying.

Luke Hoban 00:49:33 Yeah. You can see that during the preview, so the reviewer, before they merge that into production, can see it and say, no, we have to make this change before we go forward with that. So those are the sorts of things I think you can do, and there’s a ton more richness that can be built with the modern compliance and policy tools that are available now.

Jeff Doolittle 00:49:51 Absolutely. And speaking of that, maybe a bit of a side note, but are vendors like Pulumi or others sort of reinventing policy, or is there potential for adopting existing approaches? You mentioned cloud native before, and of course there’s the CNCF, the Cloud Native Computing Foundation, which leads me to think of something called Open Policy Agent. So is there an ability to use existing tools in the ecosystem to apply policies within an IaC context?

Luke Hoban 00:50:16 Yeah, so I think there are a whole bunch of different answers for different tools.

Jeff Doolittle 00:50:18 We don’t want to go too deep into that; it’s a little orthogonal, but I thought it was at least interesting in the context of compliance to talk about a little bit.

Luke Hoban 00:50:25 Open Policy Agent obviously has been growing fast over the last couple of years, especially in the Kubernetes space, but it’s really expanding into a lot of other spaces as well. With our Pulumi policy solution, we’ll let you write those policies in the existing languages we support, like TypeScript and Python, but we also let you write them for Open Policy Agent, which has its own language.

Jeff Doolittle 00:50:47 Exactly.

Luke Hoban 00:50:48 Yep. So it lets you write Rego-based rules and then use Open Policy Agent as the execution engine for those rules, but instead of enforcing that policy against a Kubernetes manifest, you can enforce it against the Pulumi deployment. I think there’s a lot of interesting stuff going on in this space, and I think all the vendors, but certainly Pulumi, are really trying to integrate with the places where there’s a lot of innovation happening, to bring that in and let folks use it with our tools. But the core thing is we want to make it so that you can plug that policy solution into the Pulumi execution of your cloud infrastructure deployments, whether that’s using TypeScript or Python or using Rego.

Jeff Doolittle 00:51:26 I actually think that’s a great story, because options are a good thing to have; people need options. Like we mentioned before with using the GUI to stand up cloud infrastructure: it’s not right or wrong, good or bad, it’s different, and there’s a context for it. It becomes bad when the complexity becomes unmaintainable or it’s too easy to break things, and that’s when it’s time to change. So in a similar fashion, I think it’s great that there are options here. You can use the language you’re familiar with, but, and I think this is coming, we’re going to see more and more people who are super familiar with OPA and Rego, and that’s possibly going to become a specialization all of its own. To be able to leverage that asset, that resource, that knowledge, and plug it into this whole IaC pipeline, I think has a lot of promise. So that’s really cool to hear that it’s an option even already.

Luke Hoban 00:52:09 Yeah, it’s an option now, but I agree with you, I think it’s going to be an even bigger thing as we move forward, and as that specialization grows even larger around not authoring the infrastructure but authoring the policy around the infrastructure, because there’s a ton of richness there.

Jeff Doolittle 00:52:24 Yeah. So now, from that big-scale discussion about how I manage all this infrastructure and all the policies and compliance and cost, let’s bring it down to: I’m a developer, I’m doing IaC, and I want to be able to test. For me, and I know for a lot of our listeners as well, it’s important to be able to run things locally as much as you can. You know, when the wifi dies and I can’t get to my cloud provider and I can’t develop, that’s a bad situation to be in. So across the spectrum, from fully isolated tests in my own environment using IaC, to maybe some integration testing with cloud resources, because I want to check how things are going to work with near-production provisioned resources: tell us a little bit about the testing landscape with IaC along that spectrum.

Luke Hoban 00:53:06 When you’re doing application development, you largely want that to be as isolated as it can be from any of its dependencies, right? You want to be able to run that application locally, do that inner loop of development, and iterate really quickly inside the application code base. For that piece you might have a local database you can use instead of having to go out to the cloud, or whatever it is you do for your application development; IaC typically doesn’t have as much of a role there, because you’re trying to keep everything local. Where testing really starts to matter for IaC is in a few different places. One is, once you do have cloud infrastructure that you’re managing as part of your solution, well, you want to make sure that infrastructure does what it says it does, right?

Luke Hoban 00:53:47 And so you want to be able to validate that certain things are true about that infrastructure. I think of that as unit testing your infrastructure: you’ve got your infrastructure description over here, and now I’m going to write down some things that I believe need to be true about the results of that infrastructure. Especially as I build more and more complex components that describe infrastructure, I want to say, well, if I’m building this best-practices VPC component and I say I want three subnets, let me make sure that there really were three subnets created, and let me make sure they have non-overlapping CIDR ranges in my desired state. A lot of this is something I can check without even deploying my infrastructure.

Luke Hoban 00:54:29 I can just validate that my desired state program describes something for which this is true. And so I think that unit testing piece is just me writing down all those contracts of what needs to be true, and validating them can be very fast; it can be done as part of unit testing my infrastructure without having to deploy it. Then there’s validating that things are true when I deploy, and that’s where a lot of the policy tools we just talked about can come in, because they can validate things as I’m deploying infrastructure, or even when I’m previewing. I can say, well, I need to validate that this is true, and I want to fail immediately if it’s not, because that indicates I’m outside of the state I want to be in.
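
A unit test along those lines could look something like this sketch, which mocks the cloud provider so the checks run without deploying anything. The ./vpc module, its exported subnets array, and the use of Mocha as the test runner are assumptions for illustration.

```typescript
import * as pulumi from "@pulumi/pulumi";

// Register mocks before loading the infrastructure code, so resource creation
// is simulated in memory rather than hitting a real cloud provider.
pulumi.runtime.setMocks({
    newResource: (args) => ({ id: `${args.name}-id`, state: args.inputs }),
    call: (args) => args.inputs,
});

describe("best-practices VPC component", () => {
    let infra: typeof import("./vpc");

    before(async () => {
        // Hypothetical module that declares the VPC and exports its subnets.
        infra = await import("./vpc");
    });

    it("creates three subnets with distinct CIDR blocks", (done) => {
        pulumi.all(infra.subnets.map(s => s.cidrBlock)).apply(cidrs => {
            if (cidrs.length !== 3 || new Set(cidrs).size !== 3) {
                done(new Error(`expected 3 distinct CIDRs, got ${JSON.stringify(cidrs)}`));
            } else {
                done();
            }
        });
    });
});
```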

Luke Hoban 00:55:08 And I want to go and take some remediation action. So I think policy tools can be part of a testing solution at that layer as well. And then the last piece, which I think is the most interesting, is integration testing, and that’s where being able to stand up environments, or subsets of environments, from scratch is really interesting. To test some components, I can go in and just create a production-like environment, then run all of my application-level tests against it, but where the environment is real, right? I’m actually using real cloud infrastructure, the real network topology, a real database that is configured exactly the same as my production database, just maybe with different data inside. That can give me a whole bunch of confidence not just that my application is correct, but that the infrastructure I want to run my application in is correct.

Luke Hoban 00:55:54 And especially as, increasingly, we’re having to change both our application code and our infrastructure in lock step. Say it’s a serverless application and we’re adding some new data service that’s going to trigger a serverless function; the code I need to write and the data service I’m connecting to are being co-developed. So when I want to test that, I really want to be able to stand up a copy of that environment and validate not just that the code does what it says it does, but that it does what it needs to do relative to the data source that’s going to trigger it. And so, yes, that is something I can’t just run entirely on my machine, but it’s something that’s essential to my application.

Luke Hoban 00:56:33 So I need to test it, and if I can make it so that that whole environment is something I can stand up really quickly, run my tests against, and tear down, that opens up so many possibilities for creating much more confidence. Now, when I open up that PR, I can run this battery of tests to validate that the application and the infrastructure work correctly together, I can get that green check mark, and I feel confident that when I merge into production, it’s going to work correctly. That category of testing is really where infrastructure as code tends to unlock fundamentally new things that give me significantly more confidence in my systems.
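
Standing an environment up and tearing it back down from a test harness can be scripted with Pulumi’s Automation API. The sketch below assumes a Pulumi project in ./infrastructure that exports an apiUrl output and a Node runtime with a global fetch; those details are illustrative, not prescribed.

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// A hypothetical ephemeral integration-test environment: stand the stack up,
// run tests against the real (but temporary) infrastructure, then tear it down.
async function runIntegrationTests() {
    const stack = await LocalWorkspace.createOrSelectStack({
        stackName: "integration-test",
        workDir: "./infrastructure", // directory containing the Pulumi project
    });

    try {
        const up = await stack.up({ onOutput: console.log });
        const apiUrl = up.outputs["apiUrl"].value; // assumes the stack exports apiUrl

        // Run whatever application-level tests make sense against the live endpoint.
        const res = await fetch(`${apiUrl}/healthz`);
        if (!res.ok) {
            throw new Error(`health check failed: ${res.status}`);
        }
    } finally {
        // Tear down the environment whether the tests passed or failed.
        await stack.destroy({ onOutput: console.log });
    }
}

runIntegrationTests().catch(err => { console.error(err); process.exit(1); });
```

Run in CI against a short-lived stack, this is what produces the green check mark on the PR while only paying for the cloud resources during the test run.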

Jeff Doolittle 00:57:07 And then what about auditing the production environments? That’s another way to test: let’s say I’ve configured my Kubernetes cluster to have a minimum number of nodes with these pods, and then a maximum number so I don’t blow the budget, or whatever these things might be. Are IaC tools able to help me with that auditing and monitoring piece as well for production assets?

Luke Hoban 00:57:28 Yeah. One piece is some of the drift detection I talked about earlier, where we go pull that state back in and understand whether it’s changed from the desired state, and combining that with the policy solutions gives us the ability to enforce policy against the real state. But if you want that to be more live, typically the cloud providers themselves have policy solutions that live inside the infrastructure and can be even tighter to it, the way AWS Config is embedded, for example. Where infrastructure as code can be really helpful there is that a lot of folks do interesting things where they build custom solutions for testing in production, whether it’s chaos engineering and really advanced things like that, or just simple things like, hey, I’m going to constantly run a background canary workload against my application so that I have some constant load and I’m able to see how it’s behaving.

Luke Hoban 00:58:20 They’re custom solutions, things I’m going to build; they’re just another piece of the infrastructure, but they tend to be more IaC, right? They tend to be, here are three more pieces I’m going to deploy into production as part of my environment. And if I do that with IaC, well, hey, I can get all of that even when I’m standing up a staging environment, right? So now my staging environment can have that canary job running inside of it as well, because I’ve described it in a repeatable way. And so we find that these IaC tools often unlock the ability to build a lot more of those kinds of solutions into the applications. The application knows how to have its own test-in-production infrastructure embedded into it as part of how it’s deployed.

Jeff Doolittle 00:59:01 Pulumi, Terraform, the other ones that are sort of in this third wave you described earlier: how are these actually executing? Is there a runtime that’s running somewhere in my cloud environment, or am I running something in a CI/CD container? Speak a little bit to that, because I think it kind of helps answer some of these questions too, about things like auditing. You know, you mentioned the canary thing, but you’re building that yourself. So how does it work from a runtime standpoint, when you actually execute these things?

Luke Hoban 00:59:28 Yes. A lot of that comes back to some of these questions about whether it’s a manual deployment or a CI/CD, automated deployment. And really, at the end of the day, these IaC tools tend to be some piece of software running somewhere that, when you ask it to change to a new desired state, is going to move you to that new desired state. That could be a CLI tool. It could be a CLI embedded in a CI/CD workflow, where when I kick off that workflow, it pulls the latest Git commit and pushes me to that state. Or it could be some other, more advanced workflow engine that might be built on top of that. So really you can embed these things into a lot of different environments, whether as tools or software packages or services. IaC itself isn’t especially prescriptive about that.

Luke Hoban 01:00:10 I think it depends on how you build that into your environment, but yeah, in general we see folks moving towards those more automated solutions for sure. They tend to be running from outside of the cloud environment in a sense, right? They tend to be modifying the cloud environment; they’re effectively part of the control plane the customer is using to manage their cloud environment. The data plane of that deployment is what lives inside the cloud, and that’s going to be all the various pieces. That’s where I made that distinction: there are a lot of interesting things you might want to build into that data plane. You might want to build that canary deployment into your actual cloud deployment, or use some of the enterprise services for security and compliance, that sort of thing. So you have the option to use IaC to describe all of that, but also the option, during the deployment process, to have your IaC tool enforce policy.

Jeff Doolittle 01:01:03 Well, this has been a fascinating exploration of IaC, and I want to wrap up by asking you a more future-looking question, related somewhat to IaC but also broader. How do you think cloud tools are going to shape the future in what we hope will soon be a post-COVID-19 world? The economy is changing, the world is changing. So what do you envision cloud tools, and maybe IaC specifically, doing over the next five to ten years?

Luke Hoban 01:01:30 Obviously, you know, the last year, year and a half or so has been absolutely abnormal in a lot of ways. But one of the things you’ve seen as a consequence of everything that’s been happening is every organization realizing how important software is, how important the cloud is to their business, and how much leverage they can get by continuing to have impact on their business via software and via the cloud. So the first part of the answer is that I think cloud is going to continue to be an even more significant factor in how organizations invest and build for their future. Everyone who didn’t already believe that has been taught a lot of new lessons about how important it is, and even the people who really did believe it are just realizing the scale at which they can get value out of the cloud.

Luke Hoban 01:02:16 Absolutely, that can drive their business. So I think that’s the backdrop. And then, maybe I’m a little bit of a broken record on this, but one of the things I just see over and over again is that as you want to take advantage of the cloud, you want to take advantage of these building blocks that really are incredibly valuable, and getting all that value out is going to require building more and more complex solutions on top of them. That’s a good thing, because more complexity means more value you’re getting, but now all of these tools have to take up the burden of how we manage that complexity. How do we build software engineering teams that know how to manage very large amounts of complexity in the cloud? The companies that are really going to lead their industries in how they take advantage of the cloud are going to be bringing in engineers who are able to manage the cloud at scale and build at-scale solutions on top of it. And I think that’s where cloud development tools generally, and IaC specifically, are going to play important roles in helping those teams continue to build amazing things and take advantage of everything in the cloud.

SE Radio 01:03:16 This episode of SE Radio is exclusively sponsored by Linode. Linode makes cloud computing simple, affordable, and accessible. Whether you’re working on a personal project or looking for someone to manage your company’s infrastructure, Linode has the pricing, support, security, and scale you need. With Linode you get consistent and predictable pricing across 11 global markets, 24/7/365 human support, rich documentation, and policies and controls to strengthen your overall security posture, allowing you to grow at your own pace. Users consistently rank Linode as one of the leading public cloud providers on both G2 and TrustRadius. Want to learn more about infrastructure as code and Terraform? Visit linode.com/seradio, that’s L-I-N-O-D-E dot com slash seradio, and download their ebook, Understanding Terraform, and other resources for free today.

Jeff Doolittle 01:04:03 Well, Luke, if people want to find out more about what you’re up to, where would you send them?

Luke Hoban 01:04:07 Yeah. Twitter at @lukehoban, github.com/lukehoban; feel free to drop me a note at either of those places. And of course, everything we’re doing at Pulumi.

Jeff Doolittle 01:04:15 Great. So all the familiar places. All right, well, thank you so much for joining me today on Software Engineering Radio.

Luke Hoban 01:04:20 It’s great to be here.

Jeff Doolittle 01:04:22 This is Jeff Doolittle. Thanks for listening.

SE Radio 01:04:26 Thanks for listening to SE Radio, an educational program brought to you by IEEE Software magazine. For more about the podcast, including other episodes, visit our website at se-radio.net. To provide feedback, you can comment on each episode on the website, or reach us on LinkedIn, Facebook, Twitter, or through our Slack channel. You can also email us. This and all other episodes of SE Radio are licensed under the Creative Commons 2.5 license. Thanks for listening.

[End of Audio]


SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)
