Nate Taggart

SE Radio 320: Nate Taggart on Serverless Paradigm

Kishore Bhatia discusses with Nate Taggart about Serverless. Topics include: understanding the motivations for this computing model, deep dive learning about Serverless architecture, development frameworks and tools. Learn from Nate’s experience with Serverless paradigm developing Operations tools at Stackery and find out various approaches, challenges and best practices for architecting and building Serverless applications.



Transcript

Transcript brought to you by IEEE Software
Male: This is Software Engineering Radio. The podcast for professional developers on the web at se-radio.net. SE Radio is brought to you by the IEEE Computer Society and by IEEE Software Magazine. Online at computer.org/software.

Kishore Bhatia: Hello. This is Kishore Bhatia for Software Engineering Radio. Today’s episode is about serverless architecture with Nate Taggart. Nate is the CEO and co-founder of Stackery, the enterprise serverless operations console. Nate was previously a product manager at New Relic where he launched New Relic Browser. He ran the data science program at GitHub as a technical product manager. Nate’s most recent blogs about serverless are published at Stackery.io/blog. Welcome to Software Engineering Radio, Nate.

Nate Taggart: Thanks, Kishore.

Kishore Bhatia: All right. Today we are going to talk about serverless. We’ll understand the motivations for this computing model and do a deep dive learning about serverless architecture, development frameworks and tools. We’ll also learn from Nate’s experience with serverless paradigm developing operations tools at Stackery and find out various approaches, challenges and best practices for architecting and building serverless applications.

[00:01:13]

So, Nate, before we dive into what serverless is, I’d just like to, you know, for our listeners, understand a contextual history of, you know, how did we come here? What is the distributed computing model that led to serverless?

Nate Taggart: Yeah. Absolutely. So if we step back in the history of web infrastructure, let’s say we go back 20 or 30 years, people used to buy physical hardware. Right? Buy a server. They’d install it in their closet. Somebody’s job was to actually plug the wires in. And if you got too much traffic, more traffic than the server could handle, the solution would be to buy a bigger server. So this is what we’d call scaling vertically. And this worked. It was a very simple model. But it had some challenges too.

[00:02:01]

First off, there was no flexibility in scaling. You had to predict in advance how much traffic you were gonna have and buy an appropriately powerful server to handle that. And if you hit a limit you had to make a large capital purchase of a bigger server, which could cost tens of thousands of dollars.

As, you know, companies got more sophisticated about their infrastructure, they did start to add multiple servers and generally they would load balance traffic across multiple servers. This did give some flexibility in terms of scaling, but it was still a long range planning activity. Eventually, about ten years ago now, the cloud started to emerge. And instead of spending a big capital expense upfront to buy infrastructure, the model was that you could rent infrastructure down to much more granular levels. So you could pick different sizes of servers, and you could even rent them by the hour instead of having to commit to them for the lifetime of the server.

[00:03:03]

So this is the model that predominantly has been driven by Amazon, AWS. And for years it’s worked really well. We changed this model that we thought of in terms of vertical scaling, get bigger servers, and it switched to a more horizontal scaling model, where you could just add more servers and you could scale them up and down.

And ultimately what we were doing here was we were increasing the utilization rate of the servers. So instead of buying a big server that was more capacity than you needed, you can now scale up or scale down. And that worked well except that as applications became more complex, you can think of big monolithic applications, it became harder and harder to manage the underlying infrastructure to coordinate with application changes. We wanted to make improvements to applications more quickly. We wanted developers to be able to ramp up on a software project more easily. So we started breaking apart these giant monolithic applications into micro services.

Now these micro services were much easier to develop because they were smaller portions of functionality. It was quicker and easier for a developer to ramp up on it. But it introduced this problem of orchestration. Now you’re managing multiple layers of complexity when it comes to managing your infrastructure and your utilization rate. So for example, if that one application is now a dozen micro services, you had to individually manage how each of those services scaled. That became complex.

And so in the last five years or so we introduced this idea of orchestration. Now orchestration is just the idea that individual services can be scaled, turned on, turned off, restarted, recovered, flexed up or down in terms of capacity. And those services can run atop a cluster of compute resources.

[00:04:59]

So now there’s essentially two levels that you can optimize on. You can optimize the underlying infrastructure, the cluster of servers. Or you can optimize at the application level, the individual resources available to each micro service.

Now this works well in terms of managing utilization and it also helps developers ship applications more quickly, but it introduces a large degree of complexity. You have more points where you’re managing scaling. You have more points where you’re managing health. And so this became, you know a job function that requires some degree of sophistication. And what we’re seeing with serverless is that for a large portion of the market that need for orchestration, while they have it, they have the need because they want to drive efficient use of their infrastructure, they don’t necessarily see that as a business differentiator for their organization. They don’t want to hire and develop a deep orchestration skill set because that doesn’t really help their organization meet the organization’s goals.

[00:06:04]

And so instead they look for other alternatives. And what we’ve found with serverless is that they’re able to outsource that skill set. Outsource the need for orchestration. Let Amazon or Microsoft or Google handle the orchestration layer, and instead let their team focus on developing applications and then managing the health of the applications on top of this new model of managed infrastructure.

Kishore Bhatia: Gotcha. Well, that was a pretty good detailed view on how we came about, say, the challenges of, you know, scaling, managing applications and, you know, the whole application stack with, you know, operating systems, libraries, the app environment. And you talked about cost there and the efficient utilization aspect as well. When you say outsource the need for orchestration at this point, are you talking more about infrastructure as a service, you know, platform as a service? Can you go a little bit deeper on that side?

[00:06:59]

Nate Taggart: Yeah. So orchestration is really an engineering need. You need to keep all of these micro services connected. Allow them to be aware of each other. Understand that since they’re fundamentally running on physical hardware, hardware fails occasionally. So you need some level of redundancy, some level of healing to happen on the underlying infrastructure. And so with orchestration what we’ve done is we’ve abstracted away, in a sense, the infrastructure from the application. So that the application can run on a set of shared resources that can self-heal and the application itself can now flex up or down on top of that.

Kishore Bhatia: So what kind of benefits does that, you know, entail for different aspects of applications? Are you talking more about it becoming easier to manage just writing code and deploying those apps? Or is it more on the cost side that these developments have happened?

[00:08:08]

Nate Taggart: So there’s changes to every stage of the application lifecycle. From a developer’s standpoint, it tends to be easier to work with a micro service than a monolithic application. Picture an application that has a million lines of code. It can take a while to ramp up in your understanding of how that application works. And every time you make a change to that application there’s potentially a lot of unintentional impacts that you don’t realize will happen to the rest of the application. It’s very interconnected and intertwined.

But as you start to break that application apart into functional units, you know as micro services, that allows a team to say specialize on authentication. Or specialize on a checkout system for the e-commerce site. Or whatever it is that allows them to ramp up very quickly. Become an expert on that part of that code –

[00:09:03]

And provides a clean interface to the rest of the application, typically through something like an API.

Kishore Bhatia: Gotcha. Yep. So more speed, more functionality that needs to be managed and maintained independently of each other when we talk about micro services here.

Nate Taggart: Right. And so that’s, you know that’s on the development side of the house. But keep in mind typically you develop an application and that may take weeks or months. And then you run that application for years. A big part of the application lifecycle is maintaining its availability and health while it’s running and serving customers.

This micro service model gives you another layer that you can manage the performance of the application. You can have some services that grow more quickly than others. And you don’t have to scale everything linearly.

Kishore Bhatia: Gotcha. Right.

[00:10:00]

So when we talk about serverless, are these the strengths, you know, we just discussed: scale, manageability, cost, utilization and also the concept of developing applications at speed as well as, I think, at a functionality level where, you know, modules and individual services, micro services are more independent of each other when it comes to the whole life cycle. Where is the actual trend driving this? Is it more on the business side? Is it technology that is driving this adoption of a computing model that is more cloud driven and more infrastructure abstraction driven?

Nate Taggart: Yeah. I mean I think it’s fundamentally both. So from a business side every business, regardless of the service that they deliver, they’re fundamentally looking to ship products more quickly. They want to increase their velocity. They want to be able to create more value and, of course, capture more value from their customers.

[00:11:01]

And at the same time they want to manage cost. So they don’t just want to throw infrastructure at the problem and scale up arbitrarily. They want to be able to ensure that they’re maximizing their utilization of the infrastructure that they’re paying for. And they want to balance that efficiency with, of course, the risk that they might go down or have an error or in some other way fail to deliver their services.

But from an engineering standpoint, you know, it’s somebody’s job to develop these services. And if they’re trying to meet an organization’s goal of developing and releasing products more quickly, then they have to look for ways where the individual people on their team are able to be more productive. So micro services offer a pattern for that where a developer can ramp up on a team more quickly without having to learn all of the code base. And, you know, of course there are other patterns too. We’ve seen in the last five years the emergence of containers as, you know, a pattern and trend that people have moved towards. And that helps developers, at least at one level, maintain consistency between their development environments and production environments –

[00:12:08]

And reduces the risk that as they are releasing products and shipping things more quickly that they’re able to do that safely and reliably as well.

Kishore Bhatia: And defining the paradigm itself, when we say serverless, we do know that there are servers in serverless. So how would you go about defining that in a way that it comes out technically strong but also has a meaning for business, you know development, ops and security folks? So we’ll just go down trying to understand what is serverless.

Nate Taggart: Absolutely. So first off, there’s I think a lot of debate around what is serverless. Some people think of it as a literal compute model. Maybe better described as functions as a service. This is code that you develop that will run on demand on the infrastructure provider’s platform.

[00:13:00]

I like to think of it more as a development model where an organization is able to take advantage of managed services, managed infrastructure. And in that way, you know, the focus may be on the compute level of the application, but could, with this definition, also touch on, you know, data storage and API delivery, you know, other aspects of the architecture being fundamentally a managed service provided by a cloud provider.

Kishore Bhatia: Well, you touched on the definition being more development centric. You say it’s a development model. How would you break that into, you know, various aspects of development that serverless adds to or makes easier? Can you go down a bit, you know, elaborating? And you mentioned APIs. You mentioned storage. You mentioned manageability. How does that make developing easier?

Nate Taggart: Yeah. So at the end of the day developers have a lot of aspects to the job when you look at the entire software delivery lifecycle.

[00:14:05]

So if you could focus and specialize on a single level of that lifecycle, you could potentially increase your productivity pretty dramatically. So, for example, a developer might say, you know, the way that I add the most value is to focus on writing code. Focus on developing the functionality and the business logic of my application. And they may not feel particularly skilled at managing the underlying infrastructure. Configuring a network properly is a very different skillset than writing code. And there are people who specialize at each. For small organizations it can be challenging to build a team that has all of the skillsets required to manage the entire lifecycle, and so managed services offer a shortcut to that problem.

[00:14:59]

But even for large organizations, I mean if your business can focus on building products and delivering products and doesn’t have to focus on, you know, managing boilerplate infrastructure that frankly, you know, Amazon or Microsoft can do better than you anyway, there’s potentially a competitive advantage that you gain there where your business is focused on building and creating value and your competitors are focused on managing boilerplate hardware.

Kishore Bhatia: Right. That makes sense. And does that apply equally to, as you mentioned, the small startup scale teams and the large organizations that already have deep expertise in say data center management or network database storage management?

Nate Taggart: Yeah. So keep in mind that these businesses, they’re always changing. They’re always evolving. I think counterintuitively we’ve actually seen enterprises embrace serverless much more quickly than the broad startup market.

[00:16:00]

And I think a big component of that is just how much is at stake. I mean a large enterprise may be spending millions of dollars a year on infrastructure and they may have thousands of engineers writing and developing products. So if they can cut, you know manage their infrastructure costs even by 20 or 30 percent, and many of them see even much more significant savings, that can be pretty material. On the other hand, if they can help developers ship applications more quickly, that can also be really material in their business.

And I think the headlines that we hear a lot about with serverless are that it’s so much cheaper to use. And I think there’s truth to that. Sometimes that drives businesses to look at serverless. But what I’ve found is that most businesses embracing serverless are actually doing it because of the velocity that it adds to their team. They find that they’re able to build and release products much more quickly using serverless infrastructure because they have to do less work in planning and managing for capacity, scaling the application –

[00:17:08]

Making sure they have high availability. They can outsource that functionality and instead focus on their application and the health of the application.

Kishore Bhatia: That’s an interesting point. And I’d like to come back to the comparison, you know, you made on cheaper versus velocity of developing applications. But, you know, just earlier during the definition you said that there are certain definitions out there that also talk about the functions as a service aspect of serverless. Can you define what you mean by functions here? Functions as a service. And why is that relevant to serverless?

Nate Taggart: Yeah. So I think there’s a lot of design patterns that people can embrace when they switch to serverless infrastructure. One thing that’s important to understand is that there’s very rarely a way to lift and shift. And what I mean by that is it’s difficult to take an existing legacy application that’s on traditional infrastructure –

[00:18:04]

And simply move it to serverless infrastructure. It almost always requires some form of redevelopment, re-architecture.

Now that said, there are a few different patterns emerging in how people build applications for serverless infrastructure. Now at a basic level you can fundamentally build a lot of applications very similarly to how they’re built today. Say, for example, you put up an API endpoint and connect it to a function as a service. What that would do is, as you had a request to this API endpoint, your code would run and return a response. Now that looks very much like kind of a traditional infrastructure model. There are some minor changes. For example, that function is not long-lived. It will start and stop. And because of that there’s no state that will carry forward from one transaction to another. But still, it fundamentally follows the request response loop that many developers are used to thinking in as a development model.
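To make that concrete, here is a minimal sketch of that request-response pattern on AWS Lambda behind an API Gateway proxy integration. The handler name and the greeting logic are illustrative assumptions, not anything described in the episode:

```python
# Minimal sketch of the request/response pattern: an API Gateway proxy
# event comes in, the function runs, and an HTTP-shaped response goes back.
import json

def handler(event, context):
    # API Gateway hands the HTTP request to us as the `event` dict. Nothing
    # persists between invocations, so all input must arrive in the request.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```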

[00:19:06]

Function as a service though also enables other types of development models. And I think the big one that we see really gaining in popularity right now is what’s called an event driven model. In an event driven model you can subscribe your code, your functions, to listen to certain event triggers and respond when those triggers happen. So as an example, you could listen for when a file is uploaded to an object storage bucket. When a file gets uploaded I want my code to run. That’s commonly used for something like transcoding videos or generating thumbnails. So an image gets uploaded. When my function sees that event happen it will then generate a thumbnail and output that to another location.

This event driven model is interesting because it actually makes applications feel much more distributed.

[00:20:07]

For example, at an enterprise organization you may have a team focused on marketing functionality that’s listening to events that are generated from, say, the e-commerce site. As an example, someone puts an object in a cart and then abandons the cart. Now you could write a function that looks for when this event happens. When there is this condition in the data, I want to run a function in response to that. Maybe, for example, trigger a marketing email that responds to cart abandonment.

And so in this way we start to see that not only is the code itself distributed, running on multiple physical servers behind the scenes, but also the development model is distributed. You have different teams relying on and potentially reacting to both code and events that other teams may be ultimately responsible for managing.
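As a sketch of the upload-triggered thumbnail example above, the following assumes a Lambda subscribed to S3 object-created events; the thumbnail bucket naming and the make_thumbnail stub are hypothetical stand-ins:

```python
# Hedged sketch of the event-driven pattern: a function subscribed to
# object-store upload events generates a thumbnail for each new image.
import boto3

s3 = boto3.client("s3")

def make_thumbnail(image_bytes):
    # Hypothetical stand-in for real resizing (e.g. with Pillow).
    return image_bytes

def handler(event, context):
    # S3 invokes the function with a list of records, one per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        thumb = make_thumbnail(obj["Body"].read())
        # Output to another location, as described: an assumed sibling bucket.
        s3.put_object(Bucket=bucket + "-thumbnails", Key=key, Body=thumb)
```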

[00:21:03]

Kishore Bhatia: That was a great example of, you know, how event driven models are being used in real functionality. You know you mentioned the file upload use case, and that was actually something where you don’t have a lot of state to worry about. So you can actually put a function behind that that does something based on an event happening. But more interestingly, you know, this other model that you were talking about with an e-commerce cart, for example. You know there’s development of micro services that is distributed and deployed and scaled independently based on, you know, which way you see the most load coming in. But what is more interesting to me here, and maybe we should probably spend a couple minutes here, is how this also distributes the team’s attention on follow up business actions. Like you mentioned marketing emails should go out if the cart was abandoned.

[00:21:59]

How are you seeing these decisions being made? Are you saying that this is because we’ve got new ways of doing event driven designs, you know more and more creative ways of engaging a customer who’s coming in?

Nate Taggart: Yeah. So I think it’s worth pausing here and also calling out the fact that, you know, while event driven software development is really maybe much simpler with the serverless model, this isn’t the first time we’ve developed enterprise software that responds to distributed events. And for years we’ve had, you know, frameworks that incorporate some kind of message queue. Which is in fact an event driven model. And many enterprises are used to developing this way. I think the difference here is that because these events are broadly distributed, developers can now build applications that are also broadly distributed. And it makes it much easier to manage if, say, functionality that you develop becomes much more popular –

[00:23:06]

Used much more heavily than you anticipated, you can do that very safely and very reliably with serverless infrastructure. And alternatively, if it becomes maybe less popular than you anticipated, your costs for running that functionality will be relatively that much lower because serverless generally runs on a pay per use type model.

Kishore Bhatia: So, Nate, you mentioned the cost versus the velocity on developing with serverless models. And then we did a pretty good example on functions as a service. And, again, you know, when we do a deep dive we’ll talk about what functions are, how do we develop them. But let’s just get back to the cost here. How does one measure the total cost of ownership or the return on the investment for investing in a serverless model? And what is pushing that from a cost perspective?

[00:24:02]

Nate Taggart: Yeah. So maybe it’s worth stepping back again and looking at the evolution of infrastructure and how cost has changed in that time. And I’m gonna talk about some numbers that I think serve as general guidelines. They’re normative to what we see in the industry. But they may not be perfectly representative of every application.

So typically, if you look back to when we scaled servers vertically, you had to expend a large capital expense. This might be tens of thousands of dollars upfront to buy a powerful large server. And you did that with the maximum scale in mind that you would need over a potentially one year or multiyear lifespan of that server.

So in this way you had relatively low utilization rates. You might see utilization rates of only, say, 5 percent of the capacity of that server. And what I mean by that is, let’s say as an arbitrary example, your server is able to handle 10,000 requests per minute.

[00:25:04]

You may typically on average only see something like, you know, 500 requests per minute. And in that way you have a lot of capacity that’s available for peaks in traffic, but that means you’ve over provisioned most of the time.

In the scale out model, this was the horizontal scaling that you could get from say a cloud provider like AWS, I think a lot of times we would see that companies could drive a somewhat higher utilization rate. But still, scaling out could sometimes take several minutes, and depending on the complexity of the release, maybe even longer than that. And so companies tended to look at scaling in terms of how much would they need over the course of, say, several hours. And while they might be quick to scale up, they would be potentially cautious to scale back.

[00:25:59]

And so in this case it’s not uncommon to see utilization rates of say 20 percent or 25 percent over the infrastructure.

When we introduced containers and orchestration, we had now two ways that we could scale. So you could cluster multiple servers together and create one large compute pool of shared resources. And then run applications on top of them. Because some applications may be flexing up while others are flexing down, you could, in theory, drive even higher utilization rates, and it’s not uncommon to see utilization at 50 or even 60 percent on a model like this.

As an industry standard, I’ve heard kind of a normal best practice would be to scale up at somewhere between 60 and 70 percent capacity. You want to add another 10 percent of availability. So with numbers like that you would typically not see utilization rates higher than 70 percent. If you hit that level you would, of course, scale up.

[00:27:03]

Now with serverless, in theory you only pay for the time that you’re actually using. And instead of paying for infrastructure upfront by buying it, or paying for it by the hour like on a cloud model, you’re now paying for it by fractions of a second. AWS Lambda meters pricing based on one-tenth of a second intervals. One hundred milliseconds. So you’re getting much closer to paying for exactly what you’re using. Now in theory that means you’re driving a 100 percent utilization rate. In practice that’s not really the case. Because you’re billed at 100 millisecond intervals, if you have a function that takes 99 milliseconds to run, you’re almost using exactly what you’re paying for. On the other hand, if you have a function that takes 101 milliseconds to run, you pay for 200 milliseconds even though you’re only getting 101.

[00:28:05]

And so, therefore, you may still have lower utilization rates than you might expect from this kind of billing model.
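The arithmetic behind that point can be made explicit. A rough sketch of the 100-millisecond metering described here, covering interval rounding only (per-GB and per-request pricing are left out):

```python
# Worked version of the billing arithmetic above: you pay for whole
# 100 ms intervals, rounded up.
import math

INTERVAL_MS = 100

def billed_ms(duration_ms):
    return math.ceil(duration_ms / INTERVAL_MS) * INTERVAL_MS

print(billed_ms(99))   # 100 -> you use ~99% of what you pay for
print(billed_ms(101))  # 200 -> you use ~50% of what you pay for
```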

There’s an interesting dynamic here as well, which is that we’re shifting the model of who’s responsible for planning and managing infrastructure costs. In a cluster model or a cloud model or an on-prem _____ infrastructure model, you have an operations team or an infrastructure team who’s responsible for capacity planning and then managing the cost. Developers build applications to take advantage of the resources that are provided to them by the infrastructure.

But what we’ve done here with serverless is we’ve changed who’s responsible for infrastructure. If developers write functions that are highly efficient, the application will run faster and so, therefore, you’ll be billed for less infrastructure use.

[00:29:00]

On the other hand, if you write applications that are less efficient, you may, in fact, be billed for more infrastructure for the same level of functionality or traffic. Now capacity is responsive to applications instead of applications being responsive to available capacity.

So I think there’s a big shift that’s happening here in the mindset of how we think about infrastructure and developing and managing cost.

Kishore Bhatia: Yeah. That’s a pretty interesting point you made there about reversing the whole capacity model to being proactively scaled up or down based on the applications. And then also the fact that – and this was actually another question I had in mind. Are you saying that serverless means, you know, no operations at all, or less and less operations going forward? And then you kind of highlighted the point that, you know, here developers are responsible for breaking down these services into say various function calls.

[00:30:01]

And the way they code them, the way they design them is now responsible for what kind of capacity utilization is driven. How are you seeing different models in the cloud providers, you know we definitely mentioned AWS Lambda, but are there other serverless infrastructure providers out there?

Nate Taggart: Yeah, so there are, in fact, multiple providers. Amazon is leading the way today in serverless. Although I’d say that Microsoft has a well-developed offering. And Google also has an offering. Although I believe Google’s is still officially in beta. In all three cases what we see is that in general enterprises are not picking a cloud based on a serverless offering. They’re taking their serverless compute functions to the cloud that already has their data, to the cloud that already has their events, to the cloud that already has their networks and authentication and that they’re used to deploying to and that they already have skills developed for.

[00:31:05]

That said, I also want to step back and talk about the operations point that you mentioned. I think serverless is high in the hype cycle right now. You know Amazon has shown a lot of growth in their Lambda product. They’ve featured it at re:Invent in the past couple of years. And I think we’ve seen increasing investments from Amazon, Microsoft and Google in their cloud serverless offerings. And in response to that, I think we get a little bit swept up in some of the hype and some of the excitement and buzz around serverless.

And just to break some of that down, one of the trends that I’ve heard discussed is this idea of no ops. That as a managed service there’s no operational responsibility for an enterprise consuming a serverless or function as a service offering. That is simply not true.

[00:32:00]

Now there are parts of operations that in fact do get outsourced to the cloud provider. Availability is clearly outsourced. Scaling is outsourced. Orchestration is outsourced in a serverless model. But there are huge parts of operations that are not being outsourced. So managing and maintaining the health of the application. That’s certainly not being outsourced. How you build and release an application with automation and reliability. That’s not being outsourced.

So I think if you step back and you look at this, you know, the application lifecycle, if you look at the dev ops responsibilities and you say, which ones of these are we really getting rid of and which ones of these are we still fundamentally owning? I think you’ll find that there is a significant amount of operations responsibility that still resides with the company that’s running these applications.

Kishore Bhatia: So very true. I’m still seeing a lot of the move from like, you know the previous operations that we used to have for datacenters –

[00:33:04]

And, you know, infrastructure, moving into more and more of either infrastructure as code or a lot of focus, as you mentioned, you know, getting into the automation pieces of managing health, proactively responding to various events from a self-healing perspective or just trying to make sure that there is more and more focus on security as code. You know, there’s a pretty good point here around just the containers being closer to the more micro services or nano services like model where you can run something stateless for a quick single process in a container, you know, manager, and then you can actually scale that on a cluster of available VMs. How do you compare that with where serverless is? And you mentioned functions earlier. So I’m just trying to see if something like containers as a service that a lot of cloud providers now have, like Amazon has ECS, Google has its own _____ _____ orchestrated for, you know, running these containers _____ –

[00:34:05]

How does that match up to where serverless is? Is that better? Is it more like that is just one more step closer to serverless?

Nate Taggart: Yeah. So this is interesting because I think we’ve seen, in particular over the last six months, the definition of serverless starting to look more and more like a gradient and much less binary. I don’t think that there are offerings that are purely serverless and offerings that are absolutely not serverless and those are the only two categories. Amazon, for example, recently released a product called Fargate. Now Fargate is fundamentally a docker container that you can run on demand. So you develop your application. You put it in the docker container. Just like you might run on top of say kubernetes. Except instead of managing a kubernetes cluster, you give it to Amazon and run it in a model that looks very much like Lambda.

[00:35:00]

So an event can trigger this docker container to run. Now with Lambda in particular you have some constraints on how long this function can run for. So at maximum Lambdas run today for 300 seconds. I think we might suspect that that number could change over time. But that’s five minutes. A maximum run time of five minutes.

So earlier we mentioned a use case of say video transcoding. Well, if you have a small 30 second video clip I think it’s safe to say you could transcode it in 30 seconds. But what if you have a four-hour video clip? Suddenly you want something that can be long lived. Now it may still be transactional. And it may still not have multi transaction state requirements. And so in a case like that Fargate could potentially be a hybrid solution. It’s not constrained to run in only five minutes. You can run the docker container essentially as long as needed. But it still looks very serverless like.

[00:36:00]

It’s event driven. It’s stateless between transactions. And again, you don’t have to manage the orchestration or the scaling or the availability or the underlying cluster that it’s running on top of. You can also look at some other offerings like you mentioned, you know, managed kubernetes. Amazon released EKS. Google has for a while now had managed kubernetes. And, of course, there are companies like Platform Nine that run real kubernetes with kind of a managed service layer.

In each of these offerings you see various levels of outsourcing for the management of the underlying infrastructure. I think Amazon’s model is to give you the big knobs. How do you scale and what are your scaling rules, what type of infrastructure do you want it to run on? And then they try to abstract the actual running of the kubernetes cluster as much as possible, or at least simplify that management aspect.

[00:37:01]

And then there are offerings, of course, where you’re running kubernetes natively and you’re managing all of the cluster and you’re managing all of the orchestration and the service discovery and you know the availability and the health of the application. And now you’re looking at something that very much does not look serverless. It does not look like a managed service at all.

Kishore Bhatia: So, Nate, just diving into the architectural aspects of serverless, what are the components of serverless architecture?

Nate Taggart: Yeah. Kishore, that’s a great question. So this is actually where, you know we’ve talked about this name functions as a service as an alternative to serverless. And I think this is a point where that name starts to break down. It places a lot of focus on the compute side of the architecture. The function. The code running. And in reality, serverless applications use a diverse set of architectural resources just like other applications do. At a basic level, a serverless application will typically have at least three components.

[00:37:59]

So the first component is some kind of event trigger. Now that may be an API gateway. It may be an object store. It may be, you know, listening to say a queue or a _____. And then it’s gonna have the compute component. The function itself with the code that runs. This could be something like a Lambda or an Azure Function. And then it’s also going to have some kind of state management. So almost all applications, and certainly not all, but most applications do, in fact, need some kind of state. And since serverless functions are stateless, typically what developers are doing is that they’re storing state in some other service. Like, for example, a database. It could be a traditional database or it could be something like, say, DynamoDB. Or Amazon is now releasing something called Aurora Serverless.

[00:39:00]

Which is essentially a SQL like database that doesn’t require you to manage the underlying infrastructure.

So that looks like, I think, a basic serverless architecture: _____ say an API endpoint, connected to a function that has access to a database. But in reality most serverless applications are even more complex. They may involve multiple event sources. They will frequently involve multiple functions or compute nodes. And then they may, of course, use multiple databases, multiple types of databases and other resources like message queues, event streams, network layers. You know all of the – a cache – all of the components that you might use for a traditional architecture.
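A minimal sketch of that three-component shape, assuming API Gateway as the event trigger and DynamoDB as the state store; the "orders" table and its key are made-up names:

```python
# Sketch of the basic trigger / function / state architecture: API Gateway
# event in, stateless function, state round-tripped through DynamoDB.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # assumed table with partition key "order_id"

def handler(event, context):
    # The function keeps no state of its own, so each request persists
    # whatever it needs in the managed database.
    order = json.loads(event["body"])
    table.put_item(Item={"order_id": order["id"], "status": "received"})
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```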

I think again the difference in thinking here for serverless is that as much as possible those become managed services that a developer in theory at least does not need to configure or manage the scaling for.

[00:40:07]

This is actually probably a good point to also call out that functions in this case is possibly an overloaded term. There is, of course, the ability to deploy a function which is, you know, a small granular piece of code that does one discrete task. And that’s I think how developers have used the word function for the last couple of decades. But in this case this function as a service, this compute node, can actually be complex. Your function can have dependency requirements so you can use libraries just like you might normally do. The function can also actually serve as a full application. You can have multiple files. You can have multiple functions. Again, we’ve overloaded that word. So your compute node could run an application which itself in the code has multiple functions.

[00:41:01]

You could even have some kind of application router, which say takes an event source and figures out what kind of event it is and then runs the correct code function to serve that event. And so in this way I think we’re also seeing a divergence of patterns, and over time this may consolidate. But there may, in fact, be multiple use cases where one or the other makes more sense for different applications.
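A sketch of that application-router idea: one deployed function whose code inspects the event source and dispatches to the right internal handler. The event-sniffing rules and handler names here are assumptions for illustration:

```python
# One compute node, many code functions: dispatch on the event's shape.

def handle_upload(event):
    return {"handled": "upload"}

def handle_api(event):
    return {"handled": "api"}

ROUTES = {"s3": handle_upload, "api": handle_api}

def classify(event):
    # Crude sniffing: S3 events carry a Records list; API Gateway proxy
    # events carry an httpMethod field.
    if "Records" in event:
        return "s3"
    if "httpMethod" in event:
        return "api"
    raise ValueError("unrecognized event source")

def handler(event, context):
    return ROUTES[classify(event)](event)
```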

Kishore Bhatia: That’s pretty interesting as an architecture. I come from the service oriented architecture world and it’s – it has been actually an interesting view on how a lot of the micro services architectures replicate some of the concepts that you would earlier do, just that, you know, you wouldn’t have as commoditized or cheap computing power, and the fact that you’d have gRPC and like _____ and, you know, more lightweight ways of doing REST HTTP _____ today. And that’s why I think more and more applications back then in the early 2000s were communicating with each other on the protocol level differently. Being _____ in its own sense.

[00:42:07]

But I do have another question on the architecture here. Where is the application, you know, structure growing in terms of how do you actually accept traffic from the outside? So I’m just trying to get into the load balancing aspect of it or the scale aspect of it. Like how do you route traffic through Amazon’s external and internal _____ rules and load balancers? And then how do you deal with storage when it comes to going outside of just the database? Is it all managed by the service provider or do you still have to worry about routing rules and thinking about database as a service and S3 and things like that?

Nate Taggart: Yeah. Good question. And I’d say a complex question because, you know I don’t want to oversimplify and act like there’s only one solution here.

[00:43:04]

I think in truth there are as many architectures as there are companies developing them.

Kishore Bhatia: I see.

Nate Taggart: That said, there are some patterns that are pretty typical. So a typical interface to the outside world for a serverless application would probably be an API gateway type of service. Now Amazon has, in fact, an API gateway. There are also other API gateway providers. _____ Kong is an example of a third party API gateway. And, you know some companies could develop their own API gateway and might actually run on a server and just farm traffic to serverless applications.

I think typically though to be a traditional serverless application you’re probably looking to couple manage services with your compute layer. And in that way, you know Amazon’s API gateway sitting in front of a Lambda starts to look like a pretty typical model.

[00:44:02]

This load balancing aspect is, again, typically not going to be an actual problem for a software developer to deal with if they’re using a managed serverless product. So with AWS Lambda or Microsoft Azure Functions, Microsoft and Amazon are now responsible for that availability, that scaling, that load balancing. And the actual provisioning. So think about: you’ve released an application. It has an API gateway connected to say a Lambda. And that’s the entire application. You get one request from that API gateway and that Lambda will, behind the scenes, run on a server inside of Amazon’s datacenter. Now what happens if you start getting multiple concurrent requests? Well, now, behind the scenes, as an implementation detail –

[00:44:59]

That Lambda could actually be running on multiple physical servers. And in fact, as you scale it will be queued up to run on even more physical machines. Each one of them running independently. They don’t share state. This is the part of serverless which is stateless, of course. And there are some other implementation details that may, in fact, matter in how you manage and run your application. So one of them is that each of those services will create or servers, I’m sorry, will create their own logs. Their own log streams based on the transactions that ran through that physical machine. So in this case if you’re trying to look at the logs, say, for a serverless application, instead of looking through a single log stream or a single log file you may, in fact, have to aggregate through multiple log streams to find the transaction you’re concerned with.
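On AWS specifically, one hedged way to do that aggregation is CloudWatch Logs' filter API, which searches every stream in a function's log group; the function name and request id below are assumptions:

```python
# Sketch of aggregating across log streams: filter_log_events searches all
# streams in a log group, so you don't need to know which instance ran it.
import boto3

logs = boto3.client("logs")

def find_transaction(request_id):
    resp = logs.filter_log_events(
        logGroupName="/aws/lambda/my-function",  # assumed function name
        filterPattern=f'"{request_id}"',         # exact-match filter term
    )
    return [e["message"] for e in resp["events"]]
```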

Another consideration is what are known as cold starts. So the first time that a serverless function runs it will boot up the actual runtime. You can kind of think of it as like a micro container. Right.

[00:46:02]

And it’ll load that instance and the runtime and the code and, depending on the programming language, that could take anywhere from say half a second for maybe a lightweight Node application, to several seconds for a heavier Java application. Once that application runs it’ll be cached and each subsequent call to that function on that same machine will run much more quickly. It doesn’t have to reboot up the runtime.
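That caching behavior is why Lambda developers conventionally do expensive setup at module load rather than inside the handler. A small sketch, with the slow initialization simulated:

```python
# Work done at module scope runs once per cold start; warm invocations on
# the same instance reuse it without paying the cost again.
import time

def load_config():
    # Stand-in for expensive setup: DB connections, large imports, etc.
    time.sleep(0.5)
    return {"ready": True}

CONFIG = load_config()  # paid once, on the cold start

def handler(event, context):
    # Warm calls skip load_config entirely.
    return {"statusCode": 200, "body": str(CONFIG["ready"])}
```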

So if you have an application that scales up very quickly, it’s Amazon’s job or Microsoft’s job to create that availability to distribute your code and your function on multiple machines. But you may, in fact, hit a scenario where because you’re scaling up quickly your function is being delivered on more and more machines which are not warmed. They’re not cached. You could, in fact, hit a bunch of cold starts. And have your application performance be degraded. Long term I would expect that this is a concern for serverless that will go away over time.

[00:47:06]

And for many applications it’s not even an issue today. If, for example, you’re running some kind of background task, this is not a customer facing solution and something that doesn’t have real time needs, these cold starts are annoying at worst. Right. But if you’re serving a customer facing API and some percentage of your traffic, probably some non-negligible percentage of your traffic is hitting these cold boots, it could potentially be a customer facing performance issue that you would want to be aware of and be addressing in your architecture and your application design.

This, again, can have implications in how you actually architect the application. So remember we talked about the model of having larger functions that serve multiple needs. Multiple features. If you run one Lambda that has in its code multiple functions –

[00:48:02]

You may, in fact, have a higher probability of being able to keep that Lambda warm over the entire application. Whereas if you decompose that into each code function has its own Lambda that it runs in, you have more chances of hitting cold starts.

And so I think there are two things that are happening here. One is this managed service does have some implementation details that a developer should probably be aware of and that could impact the architecture. And two, the technology itself is still evolving and I would expect that this trend is slowly addressed by the service providers and will eventually become a nonissue. And for certain types of applications it may already be a nonissue.

Kishore Bhatia: I see. That was a pretty interesting take on, like, a developer’s perspective of say architecting an application in the serverless model. I think you also covered the part where some of the cold starts, you know, log streams, for example, how do you actually get them, how do you debug things. There was another set of questions around if I have to do a deep dive on architecting a serverless application –

[00:49:13]

From the ground up, we’re not even talking about migrating or transforming something, what kind of approach have you seen taken between a role that is more developer centric versus a role that is more say a technology program manager or a CIO, CTO who’s trying to do, you know, buying and evaluating decisions on going serverless? So let’s take the developer first and then we go into the technology decision maker roles and evaluate the same question in two different ways.

Nate Taggart: Yeah. So one way that we could discuss this is how companies are, in fact, choosing to adopt serverless. What that adoption pattern looks like. Cause I think this is really telling on how those decisions are made and how the different stakeholders react and what drives them.

Kishore Bhatia: Right.

[00:50:01]

Nate Taggart: In every enterprise company we’ve worked with, we’ve seen that serverless was initially brought in by a single developer or a small development team. We call this the rogue developer. One person has a project that they need to ship and they discover that it’s quicker to do this by shipping a Lambda than it is by having an architecture review meeting and a provisioning meeting and planning the capacity and building and packaging up a container and ultimately provisioning it with the ops team and running it. They say, you know what? I can bypass that. I can write my function. I can have it running in an hour. And I’m just gonna do that.

So from a developer standpoint a lot of times Lambda can just be seen as a way to circumvent a heavy weight infrastructure process. Now there’s a lot of truth to that. It is faster and easier to ship code using Lambda in a lot of cases. But there are some concerns based on what the type of application is that’s being developed.

[00:51:04]

So if this is, say, a background task, which is what we typically see these rogue developers starting with, it’s not mission critical. It’s not customer facing. Maybe it has low visibility in the organization. There’s relatively low risk to try a new development model. And because they’re able to ship quickly and get a quick win they may, in fact, be praised for making this decision where we’ve introduced the new technology. That can grow to other teams seeing the win and wanting to replicate that pattern and serverless kind of spreading throughout the organization in this way. It’s kind of a bottoms up way of distributing the new model.

That said, it can introduce some challenges to the organization as a whole. And this is where we see say a director of operations or a VP of engineering or, in fact, even a CIO or CTO getting involved as a stakeholder. As serverless proliferates through the organization there are some concerns that will emerge.

[00:52:04]

The use cases tend to evolve. The visibility rises. The criticality of the application may in fact rise. And as more use cases are discovered and more serverless is applied throughout the organization, it becomes increasingly important to standardize on how these functions are being released. To register them somewhere so that you have an inventory and you know what functions are available and have been developed. To create policies around how do we set up IAM roles or permissions? What’s the security model? How do we standardize our release process? How do we roll back? How do we get error visibility? How do we recover from a health crisis in the application?

I think these are the standard questions that a CIO or CTO is gonna ask of any infrastructure, in any application throughout the organization. But with serverless a lot of these questions have historically been hard to answer. And that’s part of the reason why, you know, I, for example, have focused our company on serving operations needs in the serverless market.

[00:53:13]

Kishore Bhatia: That’s a perspective that I do want to go down and try to understand the increasing importance, and maybe visibility into policies and standard approaches. Then a major part of that is also security. You know how do you manage security in a serverless world for both applications, you know, _____ security. And then making sure that it’s not just feeling like it’s secure but it’s actually secure. But we’ll go down into that in the next section.

You mentioned something more about, you know how this small visible _____ application can then slowly start becoming more and more proliferated I’d say or getting into the visibility of the ops folks and then folks start trying to understand how do you standardize this for a larger scale application development effort.

[00:54:07]

When someone’s trying to do this from an architecture perspective, are there still benefits or concerns that are driving this adoption for a CIO? Like trying to do a new transformation on capacity, scale, availability, you know all those kind of abilities that serverless solves by default, is there any direct approach to someone saying, you know from the top down that we want to go serverless, let’s just go start doing that. Have you seen that kind of an approach? Or is it more like we’re already as part of our cloud journey doing some of these things, it doesn’t hurt to show some quick wins by trying something new?

Nate Taggart: Yeah. We’ve definitely seen two approaches to adopting serverless. The first, and, again, the most common by far, is this bottoms up approach. Where you have individual developers bringing it in as a technology that enables them to increase their velocity.

[00:55:00]
That said, if you read the headlines in the serverless market you’d probably be inclined to think that companies are saying, let’s cut our infrastructure costs by switching to serverless. I haven’t actually seen that in the market. Every enterprise that we talk to that has had a top down approach to serverless has fundamentally been driven by some strategic need in the organization. And these typically fall into three categories. Either the company is looking to move to the cloud and they’re saying, if we’re moving to the cloud what are the cloud technologies that we should be embracing? For example, if we have to re-architect our application, or if we’re going to have to touch legacy code in order to facilitate this move to the cloud, we might as well, we have an opportunity now to modernize the application and take advantage of the cutting edge development model. So we should embrace serverless. We should do it intentionally. And we should use our migration to the cloud as a driver for that.

[00:55:59]

The second driver that we see is a move towards dev ops. And a lot of organizations, they’ve been hearing about dev ops for a decade. And they’ve been slowly trending to a more agile model. But they’ve yet to fully embrace the dev ops model.

Serverless I think natively enforces dev ops as a model. Think about the idea behind dev ops. That developers and operations are interlinked. That there’s a team responsible for managing the entire lifecycle of the application. But with serverless, if you want, for example, access to logs on the compute resources, you have to manage logging in advance. The serverless function will start up, run the code and then die off. There’s no server to SSH into and retroactively collect logs. Now this is a somewhat trivial example. But I think it illustrates how the model is changing. Developers have to be forward thinking about what the operational needs are of managing this application throughout its lifecycle.
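A small sketch of what managing logging in advance can look like on Lambda, where output written through the logging module is shipped to CloudWatch Logs automatically; the handler body is illustrative:

```python
# There is no server to SSH into afterwards, so logs are emitted up front:
# Lambda's runtime forwards logging output to CloudWatch Logs.
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # aws_request_id lets you find this invocation later in the log streams.
    logger.info("processing request %s", context.aws_request_id)
    return {"statusCode": 200}
```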

[00:57:05]

Serverless enforces a dev ops pattern in the development cycle and then carries through to the operations side of the lifecycle of the application.

Finally, we see a trend towards micro services. A strategic goal of moving to distributed architectures, distributed applications or a micro service pattern being a driver towards serverless. If you’re an organization who’s been managing monolithic applications for years, possibly for decades, you’re probably familiar with the fact that it’s difficult to onboard new engineers to the team. It takes them a while to ramp up and become proficient in the code base because there’s a lot to learn.

And it’s also difficult to manage that application. Monolithic applications tend to have higher infrastructure requirements, higher needs in terms of capacity of resources, and can be difficult to scale granularly.

[00:58:04]

And so there’s a lot of good compelling reasons from an engineering side, but also from a business side that might steer an enterprise organization towards embracing a micro service development pattern.

Now if you have a monolithic application and you’re saying, all right, we’re going to stop extending the monolith and we’re going to start augmenting with micro services, let’s say we’re gonna take one component of our application. Say authentication. And we’re gonna break this out of the monolith and we’re gonna develop a micro service for this. Now you go and you look at the technology in the market and you say, what’s the easiest, quickest way for us to ship this new micro service? And you might be drawn to serverless because it’s a pretty compelling offer. You don’t have to learn kubernetes. You don’t have to manage orchestration. You don’t have to develop the team around scaling and availability of this critical service while you’re also focused on breaking apart your monolith and building the new service.

[00:59:05]

And meanwhile, this is not really new functionality your organization’s developing, so you may be under pressure to deliver this very quickly and get back to building roadmap work. And so these drivers a lot of times will steer a CIO, a CTO, a VP of engineering, a VP of operations or infrastructure towards intentionally embracing serverless and pushing this pattern throughout the organization.

Kishore Bhatia: That gives a broad view on how technology management and CIOs, CTOs are looking at adopting serverless. Going down towards the tool sets and, you know the variety of functions as a service, _____, you know models that are available, does one actually need as a developer say new languages or system design coding practices to take advantage of the serverless paradigm? Or is it something you can just directly jump in with existing experience?

[01:00:02]

Nate Taggart: That’s a great question. So I’d say serverless is relatively easy to get started with in a hello world kind of way. There are a number of frameworks that make it very easy to get your first function up and running. There are also some really fun easy projects that you can get started with. A great example would be to build something like an Alexa skill. Alexa skills actually run on Lambda. And so you could build a small function. You can make a little game or a tool that extends what your Alexa can do for you. And you could probably build that in a day or a weekend. And get up and running pretty quickly.

That said, once you extend beyond the hello-world functionality, once you decide you're going to build an enterprise-scale application or a professional application that has to run in production, has to meet monitoring requirements and have some reliability, some health and performance guarantees, an SLA –

[01:01:00]

Suddenly Lambda becomes a little trickier and it does require some learning for developers and for teams. Now on a team that has multiple disciplines, say a team that has some developers and also some operations experts, you probably already have, in fact, the skills as a team to ship a serverless application and do it pretty reliably.

That said, if you're just a developer and you're trying to build a serverless application, there's probably gonna be a learning curve for you. For example, let's say you're building your first serverless application on Amazon and you're gonna do it at work. This may be the first time you've had to write a CloudFormation template. This may be the first time you've had to configure other services in AWS. Again, we talked about what a basic serverless architecture looks like. It's not just Lambda. At some level there's an event source. There's probably some kind of data store.

[01:02:00]

And then there's a bunch of services that you'd probably want to be using that aren't very obvious. So, for example, each one of those resources is gonna need IAM roles. Permissions. A security model. You're gonna want to apply CloudWatch logs and metrics collection. You're probably gonna want to instrument for some kind of error visibility. Find a way to get health metrics out of your application. Find a way to get diagnostics out if you have a failure so that you can recover from it and correct the problem.
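To make one of those pieces concrete, here is a hedged sketch of pulling recent error lines for a function out of CloudWatch Logs with boto3; the function name is a placeholder:

```python
import time
import boto3

logs = boto3.client("logs")

def recent_errors(function_name, minutes=15):
    # Each Lambda writes to a log group named /aws/lambda/<function name>.
    start_ms = int((time.time() - minutes * 60) * 1000)  # epoch milliseconds
    response = logs.filter_log_events(
        logGroupName=f"/aws/lambda/{function_name}",
        startTime=start_ms,
        filterPattern="ERROR",  # simple text match on log lines
    )
    return [e["message"] for e in response.get("events", [])]

print(recent_errors("my-auth-service"))  # hypothetical function name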

So in this way there is gonna be a learning curve. It’s not just write code and you’re good to go. And I think that’s where the hype cycle and the reality of serverless are diverging today. You will have to take an operational look, and there are some skills you’re gonna want to develop as you’re doing that.

Kishore Bhatia: Right. That sounds like there is also a learning curve for developers to start looking at some of the cloud operations and cloud services that are available from a serverless provider. And in some sense, you know, is it fair to say that if you are looking at serverless platforms and programming in this model you would also have to start understanding the complete ecosystem from the cloud provider?

[01:03:11]

So for example, you mentioned AWS and the various services we talked about: IAM, you know, event sources, data stores, API gateways. Would that mean that as a single developer trying to get a holistic understanding of the serverless model I would have to actually understand AWS's existing services, and then how do I make use of some of the offerings like CloudFormation templates and CloudWatch and things like that?

Nate Taggart: Yeah. So I think with a naïve approach the answer is yes. You will have to learn these other services. You will have to learn, say, CloudFormation or some other configuration tool. You are gonna have to learn some of the implementation details in how you configure a VPC or an API gateway or a data store. That said, I do think there's another way to look at this.

[01:04:03]

Which is – and this is something developers have been doing for decades and shouldn't be a new idea. This is the point of tools. So if you're working in an area which requires some new skills, chances are someone has gone before you, figured out how to do it, developed some best practices and incorporated those into some kind of tool. A framework, a platform. In our case, you know, we call it a console. But it's a way to standardize and take the experience that others have developed and incorporate that into your work, in a plug-and-play kind of way.

Kishore Bhatia: I see. So that's more like having an abstraction of all of these services under the hood and a toolset ready as a complete development platform. And that's actually something I wanted to understand more, from examples that you have seen both in Stackery and generally in the serverless world.

[01:05:01]

What are some of the common tools for designing and architecting serverless applications? Or, from a development standpoint, are there particular IDEs you have to use, or is it just the same – like, Lambda has its own way of writing a function, I can write it in Java or – and then when it comes to running it on my local machine versus actually trying to deploy it at a testing or a production level, how do I go about doing that? So I'm just trying to walk through the tool ecosystem in a serverless development model.

Nate Taggart: That's great. So really quickly let's start with what it looks like if you don't have tools. And just for consistency we'll talk about the AWS environment. So let's say you don't have tools. Amazon Lambda has four – actually, today, five programming languages you can use.

[01:05:59]

You can use Python, Node, Java, C# or Go. So you pick one of the languages, whichever one you like, and you write your code. Pretty freeform. There's not really a way you have to write it or structure it. There is, of course, an input value that you'll get from the event that triggers it to run. So you should know what that object looks like. But once you get the event you can do whatever you'd like with it.
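In Python, for instance, the entry point is just a function that receives the event and a context object; the shape of the event depends on which service triggered it. The branches below are illustrative:

```python
import json

def handler(event, context):
    # The shape of `event` depends on the trigger: API Gateway proxy events
    # carry httpMethod/body keys, while S3 notifications carry a Records list.
    print(json.dumps(event))  # print() output lands in CloudWatch Logs
    if "httpMethod" in event:
        return {"statusCode": 200, "body": "handled an HTTP request"}
    return {"records_seen": len(event.get("Records", []))}
```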

So again, without tools, you'll log into the AWS console. You'll copy and paste your code into a form field. Hit submit. And you'll have a Lambda running. Now if you like to use libraries, which I think most developers do, you're gonna find that there are actually some packaging requirements. You have to download the dependencies. You're gonna bundle up your code. You're gonna create a zip file. You'll want to upload that to AWS. It's a slightly more complex release process.

[01:06:58]

And then, again, you're gonna have to configure it to run in reaction to some event. So you're gonna need to know how to set up at least one other service and configure that in AWS. And, again, an Alexa skill is a great way to get started because Alexa will generate the event. There's an actual Alexa Skills Kit that kinda walks you through this process. And it will let you focus mostly on just the code in the Lambda. So it's an easy entry point.

But again, now let's say we're breaking out of this and we're gonna actually do this professionally. We need a more robust architecture. And we're gonna start applying tools to the process. A lot of companies will start with some kind of tool. These are typically _____ code frameworks. So Apex, Serverless Framework, and SAM, which is an Amazon framework, are some different ways that you can structure your code, define your architecture and release. Most of those will help you with deployment at some level.

[01:07:58]

The difference here, though, is that you do get some level of abstraction. But once you break out beyond, say, a function and maybe an API endpoint, you'll probably have to start writing the rest of your architecture in some version of CloudFormation. Now it may be raw CloudFormation. It may be a slightly modified syntax. But in any event, you're ultimately going to have to learn how to describe your architecture.

There are a few other tools on the market that you may want to apply to the problem. So there are two leading monitoring tools. One of them is called IOpipe. That's a great tool that'll give you visibility into the performance of your application. And another one is called Dashbird, which will also collect some performance data, though I think it's more focused on, say, debugging and testing. So it might be an early tool to try.

And then finally, Stackery, my company, has an operations tool.

[01:09:00]

So with Stackery you don't actually write the CloudFormation. You can drag and drop your resources onto a canvas, and then we'll generate the CloudFormation for you.

And you also want to think about how you actually release the application. So think about a build pipeline. You're going to store your code somewhere. Say it's in GitHub. Somehow you're gonna have to get it from GitHub into AWS. Now if you're using something like SAM Local, Serverless Framework or Stackery, there's an actual build and release process that (Audio Skip) run. If you're doing it without tools, though, you're probably stuck with copying and pasting. And if you're used to using other kinds of tools in your architecture, say Terraform or something like that, you're probably going to find that it's not well suited to managing serverless-specific applications. It's more general purpose. So things like packaging up your functions and your libraries, you may have to develop your own process for.

A few years back the companies that were getting involved in serverless earliest, I’d say they developed a lot of this tooling in-house.

[01:10:02]

But today most companies aren’t doing that. They’re looking through the market. They’re finding what’s gonna work best to meet their needs, fit into their workflow, and they’re bringing a tool like that into their company, to again, take advantage of what the real benefits of serverless are, which is velocity. They don’t want to build tools. They want to ship product. And so they’re focusing on that with their team.

Kishore Bhatia: Interesting coverage there. It seems that there is an ever-growing list of tools building on top of the major serverless platforms.

Nate Taggart: Yeah. I mean serverless as a market is only a few years old at this point. And I think what we saw is a handful of companies emerge in the last few years. The earliest ones were really focused on: how do I build the applications? And if you think about it, that makes a lot of sense. The first problem that you're going to need to solve as you're trying serverless is how to build a serverless application. And I think what we're finding is that that need is really shifting. So companies like Stackery and companies like IOpipe have taken it from –

[01:11:03]

You know how do I build an application, to how do I operationalize an application? How do I run it in production? How do I get visibility? How do I get control? How do I standardize my release cycle? How do I integrate it into the rest of my team’s workflow? And I think what we’re gonna find is over the next few years more and more tooling will emerge that actually focuses on running applications and maintaining the health of applications.

Kishore Bhatia: Great examples there. So from a toolset perspective we did cover the development lifecycle of a serverless application: development languages for Lambda, for example, and the runtime. We also looked at some of the options for monitoring and performance. I do want to quickly discuss your views from experience, you know, doing it live with Stackery's platform. How do you go about debugging and testing when something goes wrong? Both in a developer's local environment, maybe with the cloud, and then also in a test or production environment?

[01:12:05]

How do you actually go about debugging, or maybe even acting upon a monitoring event that just came in and then, you know, trying to fix it?

Nate Taggart: Yeah. So I will say, you know I want to keep this from being too Stackery focused. We use Stackery to build Stackery. It’s very meta. And so I think our workflow might look a little bit different than somebody who’s just getting into serverless. The cloud providers each have slightly different models as well. So let’s kinda talk (Audio Skip) look like without tooling for each cloud provider. And then we can go back and think about what does it look like if you’re using something like Stackery or another tool.

So Amazon has a specific position, which they champion: you don't do local development with serverless. You may in fact write your code locally, but you're running it in the cloud.

[01:13:04]

Now there are some advantages to this. For one, you're actually running it in a real, production-like environment. So there aren't any inconsistencies between development and production. At least in theory. Of course it comes with some disadvantages too. You write your code. It takes a couple minutes to release that into the cloud, and that's if you've gotten good at it. And in that time, in those couple minutes, you now have kind of a break in the (Audio Skip). You've maybe had to, you know, _____ switch contexts a little bit; you know, it can be slow. If you have to make a change, test it, make a change, test it, and each time you have to wait a few minutes, it can be tricky.

Still, that is the preferred model for Amazon. It's what they really champion. And there are some tools, like SAM Local, that they're releasing to try to make it a little bit easier. The problem with those tools is that while it's true they may help you run your Lambda, again, these architectures are not just Lambda.

[01:14:02]

They're distributed platforms that touch a bunch of different services. And it's impossible to replicate all of AWS on your local machine. You probably don't have, you know, DynamoDB running locally, and you don't have API Gateway running locally.

And so you get into a scenario where you know you’re maybe trying to, you know mock up, stub out some of these different services. And you’re spending a lot of time, you know trying to recreate the environment instead of just shipping it to the cloud, which is what Amazon champions.
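If you do go down the mocking route, one common approach in the Python world is a library like moto, which fakes AWS services in-process. This is a sketch under the assumption that your function reads a DynamoDB table; note that newer moto releases have renamed these decorators, so check the version you install:

```python
import boto3
from moto import mock_dynamodb2  # decorator name varies across moto versions

@mock_dynamodb2
def test_reads_user_from_table():
    # moto intercepts boto3 calls, so no real AWS account is touched.
    ddb = boto3.resource("dynamodb", region_name="us-east-1")
    ddb.create_table(
        TableName="users",  # hypothetical table used by the function under test
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
    )
    ddb.Table("users").put_item(Item={"id": "42", "name": "Ada"})
    item = ddb.Table("users").get_item(Key={"id": "42"})["Item"]
    assert item["name"] == "Ada"
```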

Now Microsoft – and I will call out that while I've used Azure Functions, I haven't used them as extensively as AWS – but I will say that Microsoft has a history of building developer tools. You know, Visual Studio and other IDEs that they've released. And so Microsoft, I think, takes a more developer-centric approach. A less infrastructure-centric approach.

[01:14:59]

And they provide tools that help you quickly test, debug, run applications and release them to the cloud. If you're just getting started with serverless and you don't already use a cloud, I think it'd be perfectly reasonable to use Azure first. I think they have a great developer-centric process.

That said, if you’re already in AWS, if you have data there, if you have other resources there, chances are it’s not worth switching for this and you should just go to where your data exists.

So this is, you know kinda the model if you don’t have tooling. Let’s talk about if you do have tooling.

So if you're using something like, say, SAM – SAM is specific to AWS, but it gives you a way to structure your code and a way to at least run SAM Local and test or run your functions. If you want to do something like, you know, unit testing, that might be a good way to do it.
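Unit testing the function body itself needs no SAM or AWS at all, since the handler is just a function. A minimal sketch with pytest, assuming a handler like the earlier one lives in app.py:

```python
# test_app.py: run with `pytest`; assumes the handler shown earlier is in app.py
from app import handler

class FakeContext:
    # Stand-in for the Lambda context object; only what the handler needs.
    aws_request_id = "test-request-id"

def test_http_event_returns_200():
    event = {"httpMethod": "GET", "body": None}  # minimal fake API Gateway event
    result = handler(event, FakeContext())
    assert result["statusCode"] == 200
```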

[01:16:00]

But ultimately then you're gonna release and run it in the cloud to connect it to all the other services it's working with. If you're using something like Stackery – and now we're getting into how our team builds – Stackery manages multiple environments as part of the way it works. So we integrate with your version control. Someone on our team will create a branch in GitHub. They'll make their changes. They'll release that to their own environment, which is kind of a sandbox around their code in our shared AWS account. They're able to test it. Potentially break it. It doesn't matter, because it's sandboxed. They won't step on anyone else's toes. And when they're satisfied that their code's ready they can merge it back in. Go through the PR process. When it's merged into master, that'll trigger an automatic release cycle which will promote that new master branch out into production.

And so in this way we have kind of an automated release cycle.

[01:17:02]

It doesn’t require developers to manually do anything other than what they’ve already done. They can continue to use their same IDEs, go through the same PR process. From a developer’s standpoint very little changes.

Kishore Bhatia: Yeah. That's a great overview of the ecosystem that is available today, both without tooling and with tooling. I do want to get down into the security aspects, as we covered some of the things before on how a developer goes about creating a serverless application, and then we talked about the operations aspect of releasing it, deploying it and the monitoring part. What about, you know, services, access controls, surface areas for attack, data isolation? What are those kinds of concerns, or even benefits, that you have seen coming from serverless? And then we probably would have questions around the whole application architecture.

Nate Taggart: Yeah. So I’d say you know serverless security is becoming a more and more prevalent topic.

[01:18:06]

And I think the thinking around it is still emerging. So let me at least share my thoughts on the topic, but know that this is probably a moving target today. First off, I think some people would worry that the attack surface on serverless is relatively large. Right. Your functions may be running on any number of physical machines that you don’t have control over. And it can be difficult to enforce a security model on, you know N number of machines that aren’t yours.

That said, I would argue that the surface area for a serverless application is relatively contained, because there's no state and each transaction is isolated in its own environment. So it's impossible for me to egress from my transaction to your transaction. So even in that case, if there were, you know, certain types of security vulnerabilities –

[01:19:00]

And I don't think, you know, a serverless application is immune to security vulnerabilities in any sense, but still, it does isolate some concerns, because you have these functions that are spinning up, running a transaction and then spinning down, shutting off.

Another way to think about security around serverless is in terms of security policies (Audio Skip) roles and permissions that can be applied to these functions. Now I’d say this is both a plus and a minus in the way that serverless architectures work. On the one hand, because you can decompose your application into individual functions, you can create very granular permission systems that isolate a function to doing only what it’s supposed to do. And in that case I would say, you know that’s a great security model. Instead of having to give a machine access to everything that the application is gonna do, you give a single function access to only what that function’s going to do.
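For example, the execution role for a function that only reads one DynamoDB table can be scoped to exactly that action and that resource. This policy document is illustrative; the table ARN and account ID are placeholders:

```python
# Illustrative least-privilege policy for a single function's execution role.
READ_USERS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            # Placeholder ARN: scope the role to the one table this function reads.
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/users",
        }
    ],
}
```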

[01:19:58]

On the other hand, the human element kinda throws a wrench into that security model. If you have a small team of operators, maybe, you know, working with security experts, they can control the policies on every single application release pretty centrally, right? But if you have hundreds of developers who are creating infrastructure changes as they change code, if those developers aren't thinking about IAM roles and permissions, if they're not consistently applying them, if they're not, you know, well versed in what the different permissions are and how they work, you do run a risk that, as you have more cooks in the kitchen, you potentially can make a mistake or have gaps or leave something out.

The last thing I'll say around security for serverless is that because the infrastructure model has changed, because you don't have access in the broad sense to the underlying machines, a lot of the security tools that you might be using, things you may have purchased or, you know, potentially even open-source projects, will no longer work.

[01:21:08]

So you might find that you're looking for new coverage or new ways to solve security problems that you'd previously already addressed. So if, for example, you're used to running, say, some kind of a daemon on a machine that's gonna look for changes and report those up somewhere, you probably can't run that anymore. And so you may have to think about security with fresh eyes and from a fresh perspective if you start moving to serverless.

One thing I do want to call out, if security is a concern for your organization, though, is that, again, early use cases for serverless tend to be background tasks. Non-mission-critical work. Low-visibility work. Not very sensitive projects. And so in that case you might be able to start with serverless. Build some competency, build some confidence.

[01:22:01]

And build your security model, your policies, your practices, and get that dialed in now, so that as you scale up, as serverless spreads throughout the organization, (Audio Skip) early on what the right approach is. And you're being a little more forward-looking in that way.

Kishore Bhatia: That's a great suggestion on building some internal expertise and capability with serverless without diving straight into the deep end with, say, a large, you know, e-commerce or customer-facing application that has risks all over. So thanks for covering the various topics around tooling and operations, you know, how a developer goes about developing and deploying serverless applications, Nate. And it was also good to know that there is an evolving thought around security. We kind of discussed the hype around why people think that serverless has a larger attack surface, when, you know, there are also things to look at from an access control and data isolation perspective.

[01:23:02]

Comparing the adoption space between startups and enterprises, I'd now like to see if you've got any views and learnings from domains, you know, verticals in the industry right now, as to what architectural patterns are evolving and which of those verticals, like, you know, IoT, finance, healthcare, are evolving with serverless models.

Nate Taggart: Yeah. So I'd like to touch on this from a couple of different directions. First, I know this may be surprising, a little counterintuitive, but we are seeing the general curve coming from the enterprise. Enterprise seems to be leading in the adoption of serverless. And in particular, it might not be the enterprises that you're thinking of. These aren't typically, you know, technology-focused, leading-edge technology adopters. In fact, some of the earliest, most public adopters are coming from industries that have historically been technology laggards.

[01:23:57]

There are four industries that we see really pioneering serverless. And those are media, finance, retail and logistics. Think like shipping companies. So those might be kind of surprising industries. But if we step back and think about how serverless is being used, what are some of the problems that it's solving, I think you can see why lagging industries would be even more compelled to adopt serverless.

First off, those industries probably haven't _____ on containers and orchestration heavily yet. So they're going from, you know, maybe a model a generation or two older up to the latest trend, and they have a higher ROI for making the switch. The other driver here, of course, is that there are use cases that serverless does really well. It tends to be great for very short-lived, transactional kinds of work. And that is exactly the kind of work that finance companies face or retail companies face.

[01:25:00]

So I think what we see here is a trend towards, you know, improved velocity: embrace DevOps, embrace microservices, but do it through a managed interface, a managed service provider. Maybe, you know, not have to build all of the orchestration skills in-house, and focus on the use cases of the organization. That's what's driving a lot of these industries.

One other thing I'd like to touch on, 'cause I think it's a really interesting fit, and maybe a little bit more of a tech-forward industry, is IoT use cases. So serverless has one really great advantage, which is that it scales up very quickly and it's relatively cost-efficient at any scale, whether it's scaled up or scaled down. IoT really relies on that kind of mechanic to service the product's lifecycle as the, you know, interconnected device connects to the backend and shares data or downloads updates.

[01:26:03]

Serverless is a good fit for this, and so we've seen some of the leading adopters in serverless, like iRobot, the makers of Roomba; they're a big serverless adopter. GE. GE has a lot of IoT devices that they're starting to bet on serverless with. And so again we see how the advantages of serverless apply not just to the engineering problems but also to the business' goals: to managing costs, increasing velocity and being able to adapt to any scale or unpredictable scale. Serverless tends to be a fit in those cases, and that's what's driving the adoption. Not necessarily companies just trying to get onto the bleeding edge of technology.

Kishore Bhatia: Gotcha. Pretty interesting take on where and how companies are applying that, and directly jumping into serverless even after a long period of datacenter-driven development practices.

From a trend perspective and maturity level, where do you see more and more platform providers headed?

[01:27:06]

Where do you see things headed in the future? Where is serverless computing generally in terms of _____?

Nate Taggart: Yeah. So obviously serverless is still early. Amazon released (Audio Skip) Lambda about three years ago. And while I can point to some examples of serverless models existing prior to that, I'd say that was the first mainstream implementation of what we think of today as serverless.

That said, in only three years Microsoft has matched Amazon. Google has matched Amazon. IBM is betting on kind of an open-source implementation. There are a number of companies that have developed ways to run a function-as-a-service-type orchestrator on top of Kubernetes. So I think we're actually seeing lots of evolution in the technology very quickly.

Now a big part of that is how quickly it’s being adopted.

[01:28:00]

When you have a technology that’s predominantly adopted by startups and small businesses, even if you get a large number of companies to switch, their relative market size and scale means there’s still not a lot of money in the market. In this case, it’s being led by enterprises. And so I think when you have that much money, when you have enterprise scale budgets flooding the market, what you see is that cloud providers like Microsoft and Amazon are doubling down and increasing their investments in this space and in this technology very, very quickly.

Kishore Bhatia: Interesting where, you know, the demand's coming from, and how that in return helps enterprises be better at doing technology. And what's your experience at Stackery as a platform team itself? What are your learnings? Would you be able to give us some best practices that have been helpful for your team?

Nate Taggart: Yeah. Absolutely. So first off, I think the model of who does what is changing.

[01:29:00]

And you have to be intentional about that. So while for a background task or for your first hello world project it may not matter very much if you have monitoring, if you have health visibility, if you have performance checks, if you have centralized logging, in order to get into production your developers need to be responsible for that and they need to know it and they need to be trained how to do it right. Or they need a tool like Stackery that they can standardize on that will do that automatically for them.

In either event, you want to make sure that the right people know what their responsibilities are on the new system. The new way of doing things. So I would say that’s number one.

Number two is that there are certain engineering tasks that are pretty easy to do in a long lived, you know traditional server model, that actually become harder to do in serverless models. And so it’s important to kinda rethink architecture, designs and be willing to learn again and find new ways of doing things.

[01:30:01]

Let me give you one quick example. Serverless is well designed to scale out very quickly. So you have a queue of events. You could spin up 1,000 instances or a million instances to run through that whole queue all at once asynchronously. And so you can complete a large amount of work very quickly, and then scale right back down to zero once you complete that workload. And it doesn’t cost any more to do that. So you might be tempted to use that model very commonly.

But say, for example, you're using a third-party API. You probably don't want to asynchronously hit that API with a million requests. You may have some kind of rate limit you need to enforce. These are some design patterns that are actually harder to do with serverless. How do you meter and slow down transaction volume? (One concrete lever for that is sketched below.) Now you may not normally want to, but there are gonna be times when that's an appropriate pattern. So coming with fresh eyes and thinking about serverless technology intentionally –

[01:31:01]

Thinking about what the implications are of how serverless is actually implemented under the hood, and taking advantage of the parts that are abstracted away from you, the parts that your service provider will manage – I don't think that necessarily takes away the need to understand how it's working under the covers. So be curious and try to learn more about serverless if you start embracing it.
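On the rate-limiting example above: one lever AWS exposes for metering transaction volume is reserved concurrency, which caps how many instances of a function can run at once. A hedged sketch with boto3, where the function name is hypothetical:

```python
import boto3

lam = boto3.client("lambda")

# Cap the function at 10 concurrent executions so a flood of queued events
# cannot fan out into thousands of parallel calls to a third-party API.
lam.put_function_concurrency(
    FunctionName="sync-to-partner-api",  # placeholder function name
    ReservedConcurrentExecutions=10,
)
```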

Kishore Bhatia: Great. Very helpful. And what are your best references for learning more about serverless generally as you see the trends evolve?

Nate Taggart: Yeah. So, Kishore, I hope you can share some links with your audience. SAM Local is a great resource if you're getting started on AWS. And I'd also recommend checking out the blog by A Cloud Guru. They're an AWS and cloud provider training consultancy. It does a lot of work in the space and collects a lot of the best thinking around serverless. It's a great entry point. If you're looking for providers in the space I'd also check out IOpipe. Their performance monitoring will give you insight into how serverless is actually working.

[01:32:05]

Which I think will help train your team and level up your skill a lot more quickly.

Kishore Bhatia: Great. Are you available on Twitter or LinkedIn, so that folks can reach out? What's the best way to reach you and your team?

Nate Taggart: Absolutely. So you can find us at Stackery.io. You can also check us out on Twitter; we're StackeryIO. And on Medium we have a blog called Stacks on Stacks that we share. The blog's also replicated on our website, so you can go to Stackery.io/blog.

Kishore Bhatia: Amazing. I'll include all these links and communication handles. Thanks so much, Nate, for spending some time and giving us this deep knowledge on serverless, the ecosystem and where things are headed. I'll make sure all the links are included. Any last pointers that you want to mention?

Nate Taggart: No, thanks so much for having me, Kishore. And I hope your audience has fun playing with serverless technology.

Kishore Bhatia: Thank you. Thanks, Nate.

Male: Thanks for listening to SE Radio, an educational program brought to you by _____ Software Magazine.

[01:33:03]

For more about the podcast, including other episodes, visit our website at se-radio.net. To provide feedback you can comment on each episode on the website or reach us on LinkedIn, Facebook, Twitter or through our Slack channel at seradio.slack.com. You can also email us at [email protected]. This and all other episodes of SE Radio are licensed under the Creative Commons License 2.5. Thanks for listening.

[End of Audio]
