
SE Radio 290: Diogo Mónica on Docker Security

Docker Security Team lead Diogo Mónica talks with SE Radio's Kim Carter about Docker security. Simple application security, which hasn't changed much over the past 15 years, is still considered the most effective way to improve security around Docker containers and infrastructure. The discussion explores characteristics such as immutability and the copy-on-write filesystem, as well as orchestration principles that are baked into Docker Swarm, such as mutual TLS/PKI by default, secrets distribution, least privilege, content scanning, image signatures, and secure/trusted build pipelines. Diogo also shares his thoughts around the attack surface of the Linux kernel; networking, USB, and driver APIs; and the fact that application security remains the more important thing to focus our attention on and get right.


Show Notes

Related Links

Holistic Info-Sec for Web Developers sections on Docker

Transcript

Transcript brought to you by IEEE Software

Kim Carter 00:00:59 Welcome, listeners, to Software Engineering Radio. My name is Kim Carter. I'll be hosting the show today, and I'd like to introduce Diogo Mónica. Is that how you say your name, Diogo?

Diogo Mónica 00:01:20 That’s exactly right.

Kim Carter 00:01:22 Cool. Diogo Mónica is the security lead at Docker, an open platform for building, shipping, and running distributed applications. He was an early employee at Square, where he led the platform security team. He has BSc, MSc, and PhD degrees in computer science, serves on the board of advisors of several security startups, and is a long-time IEEE volunteer. Welcome to the show, Diogo.

Diogo Mónica 00:01:40 Thank you for having me.

Kim Carter 00:01:42 I think it's safe to say that Docker is here to stay, at least for the next few years. There's a lot of concepts, it seems, that engineers need to get their heads around with Docker in order to understand how to set up secure systems and infrastructure. I'm also working on a book series called Holistic Info-Sec for Web Developers, which has several sections around Docker security in it, so I've got some selfish motivation there to learn as much as I can, in order to create good content for others to benefit from. Software Engineering Radio host Charles Anderson interviewed James Turnbull in January 2015 for show number 217 on Docker. For those new to Docker, it's probably worth listening to that previous show before diving into this one. So let's get started. We're going to be talking about Docker security today, but before we do that, I want to lay some foundations for the listeners who may not be as familiar with either containers or security. Can you give us a quick explanation of how Docker containers work?

Diogo Mónica 00:03:00 Sounds good. So Docker in itself is not just the container runtime; it is really a platform, but what Docker is most known for is the container execution runtime component. What Docker allows you to do is package everything that an application needs, all the dependencies, all the code, all the resources, within this unit that we call an image, and then it allows you to run this image on any operating system you want. So you could run it on Windows, you could run it on Linux, you can run it on your Mac. It effectively makes all of your applications portable, independent of the actual infrastructure they run on. There are obviously a lot of advantages there for developers, ops, and security people, and hence why Docker has been very popular.
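To make the packaging idea concrete, here is a minimal sketch (the file names and image tag are illustrative, not from the show): a Dockerfile bundles the application code and its runtime dependencies into one image, and that same artifact runs unchanged on any Docker host.

    # Dockerfile: package app.py together with its Python runtime
    FROM python:3.6-alpine
    COPY app.py /app.py
    CMD ["python", "/app.py"]

    # Build once, then run the identical artifact anywhere:
    #   docker build -t my-app .
    #   docker run --rm my-app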

Kim Carter 00:03:40 Can you explain the new Docker Enterprise and Community Editions that you mentioned before the show? What new security features are they bringing?

Diogo Mónica 00:03:51 Yeah, so that's a good question. Docker Enterprise Edition and Docker Community Edition are very recent; they were launched a couple of days ago. The announcements were today, effectively. It follows the normal model that people are used to in the open source community, which is: there's an open source, community-driven, community-led project that has the major components of the open source. It's an open community; it includes everyone; everything is done in the open. Then there's an enterprise edition that effectively adds enterprise features. In the case of the Community Edition, it's exactly what Docker was yesterday, so nothing changes apart from a rebranding. But to the commercial edition, we're now adding features such as content security scanning, advanced features around digital signatures, and the ability for someone to create a pipeline of images in a CI system that has, for example, multiple signatures: threshold signing, so on and so forth. So there are a lot of advanced enterprise features, such as authorization and RBAC, that are only going to be found in the enterprise software, and in the majority of cases they are only useful for big companies. The Community Edition continues being what Docker is: this tribe and community of developers, ops people, and security people that are coming together to build an orchestration and container platform.

Kim Carter 00:05:16 The concept of isolation is one that we discussed before the show. I think it's going to be really important to our discussion just to get some concepts out in the open before we carry on to some of the other points you raised beforehand. We're going to address some application security once we've covered some isolation topics briefly. So can you define isolation in terms of Docker?

Diogo Mónica 00:05:44 Sounds good. Isolation is one of the fundamental aspects of computer science. Isolation is the ability for you to ensure that an application that is running on your host, virtual machine, or, in a more general sense, system, is not able to affect everything else around it, or see anything that it shouldn't see. So there's the ability of it having the resources it should be allocated, a certain slotted amount of resources, and it should not be able to consume the resources of everything that is co-located with it. And it should not be able to see any kind of system characteristics, in Linux systems such as its processes or any other information about the system itself. It should effectively feel like it's running on its own individual operating system instead of on a shared operating system. And Docker uses several mechanisms, such as namespaces, control groups, capabilities, seccomp, so on and so forth, to actually limit applications and make applications feel like they're running on their own operating system, even though, as we know, Docker containers are run in a parallel fashion and multiple applications share the same host, be it a VM or bare metal.
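As a rough illustration of those mechanisms (the image name and limits are arbitrary examples, not from the show), the Docker CLI exposes these kernel primitives directly: namespaces are applied by default, and cgroup limits are single flags.

    # Memory and CPU caps are enforced by control groups (cgroups);
    # PID, network, and mount namespaces are applied automatically.
    docker run -d --name web --memory 256m --cpus 0.5 nginx:alpine

    # The PID namespace means the container sees only its own processes:
    docker exec web ps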

Kim Carter 00:06:58 So why do we care about isolation? And like, what’s the importance of it?

Diogo Mónica 00:07:02 So isolation as a fundamental feature effectively provides the capability of limiting an application that is misbehaving. Imagine that an application is compromised; there's some kind of bug or remote code execution, so on and so forth, or even malicious code. If an application is compromised, then isolation will limit the abilities that this application, and the malicious code or person behind it, has within the system. So if you're running two Docker containers on the same bare-metal host, and one of your applications has a vulnerability, you really don't want anyone with control over this application to affect the other application that is co-located side by side. And isolation gives you this guarantee. They effectively can't affect anything else outside of their own boundaries, outside of their own system. So they're effectively isolated.

Kim Carter 00:07:51 Cool. So now that we've got a baseline on isolation: if you were an attacker looking to compromise Docker, knowing what the weakest areas are, where would you start, and what would be your first targets in terms of other surrounding technologies?

Diogo Mónica 00:08:06 I think as an attacker, I would follow the path that I would also follow to try to escape from anything else, such as a virtual machine, which would be: try to find vulnerabilities in the systems that are ensuring this isolation. For Docker itself, I would say that the thing I would search for would be a kernel vulnerability that would allow me to do what we call an escalation of privileges. So some kind of vulnerability in a Linux system call, since multiple Docker containers share the same kernel, that would allow me to effectively escalate privileges outside of my container, to actually gain what we call on Linux root over the whole system. And so this would be my first focus: the Linux kernel.

Kim Carter 00:08:52 Okay. So I'm going to address each of the areas in turn that you mentioned before the show. One of the things you mentioned was that application security is so much more important than container and VM isolation. Can you give us some more detail around what you mean by that?

Diogo Mónica 00:09:08 Yeah, absolutely. So when containers came out two years ago, there was a lot of focus on comparing containers with virtual machines. And even though the comparison is pretty obvious, just because they both feel similar, in the sense that they both make an application feel like it has its own operating system and is isolated, from that perspective they're actually fundamentally different. They're very different. In the beginning, people started comparing them, and by comparing them, they immediately started doing security comparisons. So in my field, they started comparing the memory isolation characteristics of VT-d and VT-x on virtual machines with the isolation capabilities of Docker, which runs on a shared kernel. And this distinction is important, because the fact that Docker runs on a shared kernel means that any vulnerability in the kernel that allows a privilege escalation allows what we call a container escape.

Diogo Mónica 00:10:02 And for virtual machines, you can also have vulnerabilities that allow you to escape, but since you have hardware virtualization support, it becomes a lot harder; you're not depending on sharing kernels. But on the other hand, a virtual machine is a lot slower. It has to boot a lot of different things, memory management, so on and so forth, things that containers don't need. So containers are one or two orders of magnitude faster than virtual machines. And these capabilities and these systems that we currently have to actually provide security are specific to Linux. When Docker came out, the systems were already there. Docker did not reinvent anything new. We just made it incredibly easy for you, as a developer and as an ops person, to use these Linux security capabilities that have been in the kernel for dozens of years, but now in an easy fashion that lets you package your applications.

Kim Carter 00:10:57 In your blog post "Increasing Attacker Cost Using Immutable Infrastructure," the overarching theme is that application security is still the lowest hanging fruit for an attacker. Near the end of the post, you have a link to Docker security features, which seems to be mostly focused on the isolation features I just mentioned. Why is Docker isolation so much less important than AppSec, or application security?

Diogo Mónica 00:11:23 So the point that people miss a lot when they think about Docker is that Docker can actually help with their application security. There's this thing that I've been calling isolation myopia: when we talk about container security, people focus on the virtual machine versus container comparison, when they should actually be focusing on what actually matters, which is the security of their applications. If your application is compromised, it doesn't really matter if you are running on a virtual machine or on a container. If a malicious attacker has access to your application, they can steal all the credentials of your application, connect to your database, and simply exfiltrate all of your data. They don't really have to do anything else; they already have access to your network and all of your credentials, because that's what the application does. They did not have to escape the container, the virtual machine, the hardware, or the bare metal; they didn't have to do anything else but simply compromise your application.

Diogo Mónica 00:12:14 So this isolation myopia is people's focus on the isolation of the containers, when in fact they should be focusing on: what do containers do to help my application be more secure? And in this blog post, what I talk about is that there are a lot of features in Docker that actually allow you to prevent classes of vulnerabilities, web application vulnerabilities in particular. In the blog post that you mentioned, I'm actually showing that by using read-only containers, so by not allowing a container to be written to, by running a completely immutable container and not allowing an attacker to write to any sort of file system, even an application that has remote code execution becomes a lot harder to exploit. So we just significantly increase the attacker cost by simply using a characteristic of the Docker platform itself. And this is what we should be focusing on: how can Docker help your application be more secure, not necessarily the low-level components and the fundamentals of how it's actually doing it.
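A minimal sketch of the read-only pattern described here, assuming a hypothetical application image: the root filesystem becomes immutable, and a tmpfs is mounted only where the app genuinely needs scratch space.

    # Root filesystem is read-only; an attacker can no longer drop a
    # shell or an exploit onto disk. /tmp stays writable but in-memory.
    docker run -d --name app --read-only --tmpfs /tmp my-php-app:latest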

Kim Carter 00:13:14 Yeah. Applications over the past 15 years, in general, don't seem to be getting much more secure. We've been trying to educate developers around the issues, but it doesn't seem to be working. Any ideas on how we can improve the situation?

Diogo Mónica 00:13:31 That’s, that’s a, that’s an excellent question. I really feel that the major thing that we can do is ship security by default, which is actually one of the models of the security team here at Docker. So by shipping security, by default, what this means is that by the virtue of you deploying your application side of a Docker container, you’re actually getting a lot of security features for free. You’re getting the isolation features that we mentioned, but you’re getting the a lot more right now. If you’re deploying on an orchestrator like Docker swarm, you’re getting a mutual TLS certificates that distribute all of your applications and deploy them in a secure fashion. You’re getting an outer rotating BKI. You’re getting secrets management that are encrypted at rest. So you have you’re inheriting from the platform, a lot of secure foundations that would have to be built by your engineering team. And if they don’t come by default, and if they are not easy to use and easy to operate, then nobody will use them. So I think the major way to go about this problem is how can we put into the platform and turn it on by default security features that help our applications and help our developers.

Kim Carter 00:14:37 In our pre-show discussions, you mentioned that you can inspect the behavior of an application inside of a container, but you can't inside of a virtual machine. What do you mean by that?

Diogo Mónica 00:14:50 What I mean is: imagine that you're running an application inside of a virtual machine. At this point you have a process running your web application, but now your inspection unit is the virtual machine. So you're looking at the traffic that comes in and out of the virtual machine, at the memory consumption of the virtual machine, at the CPU of the virtual machine, and at the processes, the aggregate behavior of this unit. The problem is that there are a lot of other things co-located inside of your virtual machine that are not your application. You have a lot of plain operating system processes; you have memory being used for X, Y, and Z that is not related to your application at all, except that it's needed by the operating system. It's the fundamentals of the OS running inside the virtual machine to support your application.

Diogo Mónica 00:15:36 But when you're talking about containers, what you're actually talking about is the smallest possible wrapper around your application. The kernel is the same, the OS is the same. And so by inspecting the running process, which is effectively what the running Docker container is, you have a way smaller unit of inspection. You have this application, the behavior of this application, the memory consumption of this application, the system calls of this application; not the system calls of the whole OS, just the system calls that this application is doing. That actually changes the game when it comes to security tools, NIDSes, that is, network intrusion detection systems, and IPSes, intrusion prevention systems, and so on and so forth, because of the number of false positives you're going to have, or the machine learning training that you can do, when you're just training on this specific application instead of on the combination of what the application does plus the operating system. It allows you to have way better tools, and it's just a way better source of data.

Kim Carter 00:16:34 My thoughts around that comment were that in virtual machines, or VPSes in general, we have application logging instrumenting from within, and instrumentation externally. Is there any reason why we shouldn't use the same tools? Or are there offerings more specific to containers that we can use to inspect application behavior? And if so, what are they?

Diogo Mónica 00:16:57 So there’s definitely startups now that are focusing on containers and they’re taking advantages of the fact that it’s very easy for you to now see network traffic in system called behavior on a per application level instead of per host. So there’s definitely companies that are using these new facts and these new capabilities. And the fact that at the end of the day, a Docker container is a process. And in the Linux world, we know exactly how to inspect and monitor processes. And so this smaller unit is being used by companies to create better products and especially with, um, with the explosion of machine learning and the ability to use, um, deeper neural networks or deep networks, or even, um, the ability of using just simple things like clustering and k-means and all of that allows people to just tackle the problem from a different angle. So ignore the past 15 years of IDSS and Looker and metric traffic. And now coming to the world where what actually matters is monitoring system calls and monitoring it at the smallest possible unit. So yes, we are seeing different companies use these new tools and use new, new monitoring tools that are actually reusing the past 25 years of Linux monitoring, but they’re definitely being targeted. And they’re being used by these companies to provide what hopefully is going to be better security products.

Kim Carter 00:18:16 So what are some of these tools?

Diogo Mónica 00:18:19 So you can think of any tool that runs on a Linux operating system and allows you to inspect processes. And it's now perfect for inspecting the behavior of the application, because a container now actually is one application, and multiple containers that have the same image, running on multiple systems across your infrastructure, should have the same pattern of behavior, because they effectively have the same function. The other thing is that there are tools that now allow you to monitor not just one specific application, but how applications are actually operating together. So if you look at something like Docker stacks or Docker Compose, you no longer say my application A is running in isolation; you're actually saying application A should be talking to database B and should be talking to this key-value store, such as Redis, and these are the three components of my application.

Diogo Mónica 00:19:14 Now a security tool can actually look at this Compose file or this Docker stack and understand that there are some communications that should be allowed, some ports that should be allowed, some patterns of communication that should be allowed, and other patterns of communication that make no sense in this architecture and that should be alerted on. So the normal tools that you've had in the past allow you to monitor this, but there are new tools. There are new security companies, such as Aqua Security and StackRox, coming into the game and delivering solutions that are container-specific and are taking advantage of this new, more fundamental unit.
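A minimal (hypothetical) stack definition of the kind a security tool can read to derive legitimate communications: the app may talk to the database and the Redis cache, so any other traffic pattern between these services is suspect. Service and image names are illustrative.

    # docker-compose.yml
    version: "3"
    services:
      app:
        image: my-web-app:latest
        ports:
          - "8080:8080"
      db:
        image: postgres:9.6
      cache:
        image: redis:3.2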

Kim Carter 00:19:52 One of the other comments you made that I appreciated was that containers are a win due to observation and immutability. Can you explain the immutable copy-on-write file system, how it helps us, and how we can take maximum advantage of it?

Diogo Mónica 00:20:13 Yeah, absolutely. The immutable copy-on-write file system is actually pretty interesting. The base concept is that the file system itself is immutable, and whenever you write anything to the file system, what it actually does is create a layer on top of the currently existing file system and write to that. So these file systems are layered; they have multiple layers, and each layer effectively adds or removes something from the file system, such that what the container at the end sees is the combination of all these layers in aggregate. The reason why this is really cool is because you can have shared layers. So you could have your base operating system image in one of the layers and then share it across all of your containers, and then just add different applications on top of it. One of the reasons Docker is so efficient is because it shares these layers, these operating system and base layers, across all of your containers, so it allows you not to have to repeatedly download them across the network.

Diogo Mónica 00:21:07 And simply, when they're built, they can already be reused and included in your containers. The other reason why this is really exciting, in terms of immutable infrastructure per se, is that you can actually disable writing altogether. So you can say that this container has, or should have, no ability to be written to. And this is something that I've alluded to before: by running a completely read-only file system, you have the ability of not letting the application write, which gives you really interesting guarantees when it comes to security. But even if you don't run it in that mode, the fact that it is copy-on-write allows you to do something really interesting, which is: an application runs with a specific layer and a secure hash, so you know what it is when you're running it, and then every modification you can actually inspect.

Diogo Mónica 00:21:54 And at runtime you can do something that in the Docker world is called a Docker diff; it's the docker diff command, to see what files were changed since I ran my container. This actually allows you to do super cool things like incident response, in the case that your application is compromised, or simply to see what the behavior of the application is: what is it writing to, what log files does it need, is it uploading something that it shouldn't, does it need to modify some kind of file on disk that it shouldn't have, so on and so forth.
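For example (the output shown is illustrative), docker diff lists every file added, changed, or deleted since the container started, which is exactly the incident-response view described above:

    docker run -d --name web nginx:alpine
    docker diff web
    # Each path is prefixed A (added), C (changed), or D (deleted), e.g.:
    # C /var/cache/nginx
    # A /var/cache/nginx/client_temp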

Kim Carter 00:22:22 Okay. So you mentioned pre-show that you can't run a VM with a read-only flag, but with Docker it's trivial. Can you explain for listeners the context of that conversation and what you meant by that?

Diogo Mónica 00:22:36 So there’s a lot of things that he can do in the virtual machine world, but I was just not easy. It is not easy right now for you to set up a virtual machine and just, um, set up the whole virtual machine such that nothing can be written to. And there’s several reasons for that. Number one, that that’s not a feature that just comes built in as far as I know, but regardless of that, the operating system, there’s a lot of things that it needs to do that might be actually writing to files. It’s a lot easier for you to have one specific application that doesn’t need to write to disk than to have a whole operating system that doesn’t need to write to disk. So it’s, it’s really about, since we’re talking about different concepts, since we’re not running a full S we are only running an application, which is effectively a sh on the, on a shared kernel, and it’s effectively a process on that same kernel that is running only the application component that it needs, where the Sheraton laying by the file system and a shared underlying kernel. We can do this in a virtual machine world. We really can’t.

Kim Carter 00:23:31 So can you go into a little bit more depth as to why running read-only is so important for security?

Diogo Mónica 00:23:38 Yeah. Read-only is just one example. A read-only file system is just a proof that you can increase the attacker cost in a trivial manner by simply passing one command-line flag to a Docker container. I use it as an example because it's a visceral one, it's an easy one, and you can use it for very cool demos. You can effectively have a compromised PHP application with remote code execution, and you can show that by simply turning it read-only, now an attacker can't download a PHP shell, and can't just download an exploit that they would then compile and use to escape, or to effectively gain more privileges on the system. So it's just a demo. There are a lot of other things that you can do, seccomp profiles being one of them.

Diogo Mónica 00:24:23 So you can limit all the system calls that your application is allowed to do inside of the container. This allows you to effectively limit the surface of exposure that the kernel has. You have Linux security modules that you can use, such as AppArmor and SELinux, that allow you to have extra guarantees around access to any kind of resource outside of the container namespace. You have the ability to, for example, control processes with some cgroup properties, so that an application can't do a fork bomb on your whole host, which is something that on Unix systems we've been very familiar with for the past 30 years.
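A hedged sketch of combining those controls; my-app and my-seccomp.json are hypothetical stand-ins for a real image and a real profile listing the system calls the application is allowed to make:

    # Drop all capabilities, cap the process count (blunting fork bombs),
    # and restrict the syscall surface with a custom seccomp profile.
    docker run -d --name hardened \
      --cap-drop ALL \
      --pids-limit 100 \
      --security-opt seccomp=my-seccomp.json \
      my-app:latest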

Kim Carter 00:25:03 My thoughts around those comments were that you can run anything that has a file system mounted as read-only. Can you explain the fundamental difference between running a container as read-only versus running a VM or a VPS with granular read-only file system mounts? My thoughts are basically that it's just around simplicity and ease of use with Docker containers. Would that be correct?

Diogo Mónica 00:25:26 That’s absolutely right. I mean, we’ve described this before Docker was built originally on top of the fundamental security features that Linux already provides. Namespaces control groups, capabilities, Linux security, module, set comp, all of these things were already there. Docker is about creating a nice, easy way and a format, a package format that everybody agrees on and an easy way to run this package format across multiple platforms. That’s how it was created. That was the original purpose. And in security, we usually say that if it’s non-usable security is not secure, if it doesn’t come out by default, nobody’s going to turn it off. And if it’s not trivial to use, then definitely nobody’s going to use it. So the fact that features are accessible and easy to use is effectively night and day.

Kim Carter 00:26:13 So what does your logging strategy look like when you're running a container as read-only?

Diogo Mónica 00:26:19 You can do a lot of things, but the best practices are to use syslog, or to log to a remote location, especially a location that should probably be append-only, such that an attacker can't really go and modify logs. That's the strategy that I've seen commonly used.
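One common shape for that (the syslog endpoint and image name are assumed examples): pair a read-only container with a logging driver that ships logs off the host, so a compromised container has nothing local to tamper with.

    docker run -d --name app \
      --read-only \
      --log-driver syslog \
      --log-opt syslog-address=tcp://logs.example.com:514 \
      my-app:latest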

Kim Carter 00:26:34 The concept of container orchestration will be important to our discussion. What is container orchestration?

Diogo Mónica 00:26:44 So when we're talking about Docker containers, what we're really talking about, as I mentioned, is effectively a process running on one host. But running an application on one host is not particularly useful in today's world. What you really need is to run this application and have the ability to resist failures of a node, resist downtime, and have it replicated across multiple data centers. You obviously want the ability to have it scale in an easy fashion, such that if you have a spike in traffic, you can effectively deal with the extra traffic, so on and so forth. So that's to say that there's a lot of need to run applications as a distributed system, not as an isolated unit running on one particular host. There's also the matter of management and, effectively, resource efficiency. If you have a hundred nodes, you don't want to manually manage which applications are running on which nodes; you want a centralized authority that is effectively managing the allocation of hosts, that tells you whether you have too many hosts and some idle hosts that you could shut down, or whether you're actually almost oversubscribed and need more hosts.

Diogo Mónica 00:27:45 So all of this logic of managing multiple applications running in your data centers is controlled by what we call orchestrators. There have already been orchestrators for other things, but with the beauty of the packaging format, the ease of use, and the speed of Docker, container orchestrators are becoming really popular. So having the ability to have containers be deployed across your infrastructure, any cloud, local infrastructure, public clouds, private clouds, so on and so forth, and be managed and orchestrated in a secure fashion from a centralized orchestrator, is something that is incredibly powerful.

Kim Carter 00:28:22 Can you give us some examples of some of the orchestrators that are available to us?

Diogo Mónica 00:28:29 Absolutely. I think the most popular orchestrators right now are, well, Docker Swarm, which comes built into Docker; obviously Kubernetes, which has gained a lot of traction recently and was a project out of Google originally; and Mesos, which was the first player in the sort of more modern orchestrators. There's also Habitat and a couple of others. I would say that the three most common ones would be Kubernetes, Mesos, and Docker Swarm.

Kim Carter 00:28:57 You mentioned in our pre-show discussion that you thought the orchestration layers were a lot more interesting and impactful on company security than the isolation concepts. So I'm quite keen to address each of these layers in turn. We talked about mutual TLS/PKI, secrets distribution, least privilege orchestration, content scanning, image signatures, and secure/trusted build pipelines. Do you want to give us a bit of a rundown on mutual TLS and PKI by default to start with? Just for the listeners: TLS is Transport Layer Security, and PKI is Public Key Infrastructure.

Diogo Mónica 00:29:39 Oh, I’m so excited about this is before that Docker, I worked at a company called square and square was a payments company that was a startup created by the founder of Twitter, Jack Dorsey. And then it effectively became very successful, full, and is now moving, um, $40 billion a year or something of the sort. So over four years, while working at square, we had to come up with solutions for problems such as how do I securely deploy an application? How do I know that the code that my developer committed is the code that is running in production? How do I know that the internet connection between my production server and my version control system is not compromised? How do I know that my built system is building the right things, um, on another, on another side, how do I actually distribute secrets throughout all of this infrastructure?

Diogo Mónica 00:30:27 How do my systems communicate with each other? Do I have a secure way of communication? And do I have identities? How do I know I'm deploying to the right node? Am I just SSHing to a random IP? So there are a lot of fundamental infrastructure problems that we have to solve to be able to simply deploy one application on an infrastructure, all the way from the developer committing it, to it being built by the pipeline, to it being securely deployed and securely sent to the host, with all the hosts communicating with each other and the secrets and resources shared with it. One thing that we realized is that these things are really hard, and security teams all over the world were rebuilding the same things over and over again. These are the things that actually fundamentally affect your security. There's a lot that goes into security, but these are the foundations, which, if you don't get right at first, you're effectively doomed to live with forever.

Diogo Mónica 00:31:17 It took Facebook years to actually deploy TLS across their infrastructure. Once you have technical debt in this area, it becomes incredibly hard to fight, to actually find the time to change things and do the right thing. So one of the things that we focus the most on is the security of containers, and the fact that containers are becoming so popular gives us the chance of getting it right the first time. Orchestrators should be able to deploy containers and manage containers in a secure fashion, and they should provide everything that a container needs such that the application inside of that container can be more secure. In particular, you mentioned PKI and TLS. It turns out that a lot of people use mutual TLS, mutually authenticated Transport Layer Security, for securing communication between applications and nodes, and for solving the identity problem, the identity problem being: how do I know that node A is node A?

Diogo Mónica 00:32:15 And the answer is: you issue a certificate to node A that says it is node A. But now you have a problem: who issues the certificate? Who is this certificate authority? And so every company out there ends up with their own bespoke solution for certificate authorities, for issuing certificates, and for dealing with rotation and revocation and all of this huge mess that nobody really wants to deal with, but that is fundamental for the security of our infrastructure. So what we built into Docker Swarm, our orchestrator, is that we take care of those fundamental things for you, in a super easy fashion that comes secure by default, and you can't turn it off. So, exactly as we discussed, how can we actually improve the security of applications? Well, when you create a Docker swarm, you effectively have a PKI built for you. You have a certificate authority that gets generated; the managers of the system, which are privileged nodes on this Docker swarm, on this orchestrator concept, effectively get issued certificates.

Diogo Mónica 00:33:13 And then every time a node joins this cluster, it gets securely issued a certificate that identifies it, and all of the communications across all of the nodes use mutually authenticated TLS, for authorization, for authentication, and obviously for encryption of the payload. And then on top of this secure infrastructure we build things like secrets management. So with one Docker command, docker secret create, you can very easily add a secret, share it securely with the cluster, and have it distributed to all of your applications in a least-privilege fashion. And when I mention least privilege, the importance of this is that a specific container should only have access to exactly the resources that it needs to do its job, no more, no less. And the same goes for the node that is running that container. If that node does not need to run container X, then it should not have access to the privileges that container X has, or any of its resources, keys, tokens, and so on and so forth. So by having the orchestrator help you, we can actually get to a world where you have all the substrate made in a secure fashion, following the principle of least privilege and using best practices such as mutual TLS, automatic certificate rotation, and short-lived certificates, all done for you for free.
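A sketch of how little ceremony this involves (the secret value and service image are illustrative): initializing a swarm generates the CA and issues mutual-TLS certificates with no configuration, and secrets are created once and granted per service.

    # Creates the swarm, the certificate authority, and the manager's
    # certificate in one step; mutual TLS cannot be turned off.
    docker swarm init

    # Store a secret, encrypted at rest on the managers:
    echo "s3cr3t-db-password" | docker secret create db_password -

    # Grant it only to the service that needs it; it surfaces in that
    # container's in-memory filesystem at /run/secrets/db_password.
    docker service create --name api --secret db_password my-api:latest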

Kim Carter 00:34:32 So the least privilege orchestration, is that set up by default? Can you go into that a little bit more, as to what it actually means?

Diogo Mónica 00:34:42 Yeah, absolutely. The sad or unfortunate reality is that a lot of orchestrators that came out in the past few years were built in a way that, for example, doesn't come with security enabled by default. So everything is HTTP instead of HTTPS, so they're vulnerable to spoofing and man-in-the-middle attackers and passive attackers and so on and so forth. Another unfortunate thing that started happening was people started deploying key-value stores to effectively have service discovery, kind of a centralized key-value store that they would rely on for the majority of the state of their orchestrator. But they did that in a way that is not encrypted, in a way that is not authenticated, and that is not secure. So when you effectively go down that path, you have the fundamentals done in a way that when any node joins this kind of orchestrator, it gets access to all the resources.

Diogo Mónica 00:35:38 So you have a thousand-node system, and any node could actually have access to every single secret of the cluster and every single resource. It's effectively like, within the cluster, everyone is trusted, which, as we know, is the wrong approach to take. So what we did when we were designing Docker Swarm, at the drawing board, was say: no, this can't be like this. We have to follow the principle of least privilege. And what we mean by this is: if a node is compromised, an attacker will only have access to the resources that that node has at that specific moment, no more, no less. And this allows the manager of the cluster, if it has a thousand nodes, to define ten nodes to run one application that is very sensitive, and to define a hundred nodes to run another application that is less sensitive.

Diogo Mónica 00:36:27 And maybe to define some other set of nodes that run an application that is a front-end system and is therefore more vulnerable. Now, if the system that is running your front end, the most vulnerable one, gets compromised, and somebody gets access to credentials or gets execution on this application, those nodes effectively only have access to the things that that web application has access to, and they can't simply go and access everything else. So the sensitive applications are still secure, and this is effectively following the principle of least privilege: when they compromise that node, they only have access to what that node has access to, no more, no less. And to do this, we obviously had to build a lot of things into Swarm itself, but at the end of the day, we also wanted it to be completely trivial.

Diogo Mónica 00:37:13 So this is the default model in which Swarm runs. You cannot disable the PKI. You cannot disable mutual TLS. Secrets management comes encrypted at rest by default, and it comes enabled by default, so you just have to do docker secret create to effectively use it. And all of these things already operate on a least privilege model, which effectively means that the manager chooses which secrets are in the memory of which host at which time, and it only sends the secrets if an application that needs them is running. Once the application stops running, it effectively removes the secrets from the host's memory, ensuring that compromise of the node only compromises what the node has access to at that particular moment.

Kim Carter 00:37:54 So what are the secrets? I mean, what exactly would you call a secret, and what does the orchestrator define as a secret?

Diogo Mónica 00:38:04 From the point of view of the orchestrator, a secret is just a blob of data, but very common secrets that every infrastructure out there needs right now are things like TLS certificates, Amazon AWS tokens, usernames and passwords, encryption keys, and signing keys. So all of these sensitive things that, for the longest time, developers have been managing by putting them in their version control, or directly on the hosts, or on the disks of the hosts they are running on. So there's been a lot of mismanagement of secrets. I mean, how many times have we seen secret AWS tokens be committed to GitHub and exploited by attackers to mine Bitcoin, right? On a daily basis, GitHub is being scanned for credentials that allow attackers to access or compromise infrastructures. And this has come out of the fact that there haven't been, so far, good mechanisms for people to manage their secrets in an integrated fashion with the place where they're deploying. And this is why we believe this is so important: because, by coming by default with Swarm, Docker secrets allow you to effectively have decent secrets management that is secure out of the box and keeps people from committing these things into GitHub.

Kim Carter: So what about the other orchestration tools, like Kubernetes and some of the older ones? Are they doing similar sorts of things by default?

Diogo Mónica 00:39:56 So right now there's definitely a lot of talk on some of those projects about effectively catching up to what Swarm has been doing, and catching up to this posture of secure by default. But the reality is that it's really hard to bolt on security after the fact. This is a known motto in the security world. For us, from our perspective, we're always willing to help any open source project, and most of them are open source projects, because at the end of the day, what we care about is the security of our users and their users. So it doesn't really matter if you're using Swarm or Kubernetes or Mesos or other orchestrators.

Diogo Mónica 00:40:45 What we really want, at the end of the day, is that the users and the companies that are using these orchestrators get something that makes them safer. And we think that the usability of our orchestrator, tied with the fact that it is incredibly simple and comes integrated into Docker, is effectively going to be the best way for people to deploy their applications. But we're absolutely fine with people deploying applications however they like.

Kim Carter: So would you say Swarm is a little bit ahead of most of the other orchestrators in terms of its security stature and the different things that it's implementing around security?

Diogo Mónica: I would definitely say that Swarm, from the design process and from the early beginning, considered security one of the three biggest, strongest, and most fundamental pillars of its development. And I think that shows in the features that have been coming out and what they enable.

Kim Carter 00:41:38 Yeah. So you also mentioned content scanning when we discussed orchestration. Can you go into that a little bit?

Diogo Mónica 00:41:47 Absolutely. So that's another component that is incredibly important in the software development life cycle of any company. If you're developing content, effectively committing code, developing applications, putting them in containers, and deploying them on production systems, you want, as part of your CI pipeline, as part of the exact same process that takes a piece of code from a developer's computer to a production host, that same pipeline to ensure integrity, but also to ensure the security of the content itself: scan for vulnerabilities, look at the content, and see if there are particular pieces of software that are out of date. So what we did was create something called Docker Security Scanning that effectively builds a bill of materials of all of your containers, layer by layer. Using those layers that we talked about with the copy-on-write file systems, Docker Security Scanning effectively scans every layer and tells you what vulnerabilities might be in there. It scans all of the binaries, flattens JARs, and really does a bill of materials to tell you: this is the current status of your container.

Diogo Mónica 00:43:01 This is what’s in it. These are the vulnerabilities that are in it. You should probably update them before you deploy. And the reason why this is so important is because for the first time really ever, we have a unit that represents the image that is going to be ran, and we know how to read this unit and we know how to scan it for security so we can do super cool things like rejecting a deploy. If this container has vulnerabilities and automatically scan this container, whenever it gets built, spin it up and do some dynamic testing and some static analysis on the containers contents and see if it’s actually fit for production. We’ve, we’ve really never had that, but this rise of what they’re calling Bev sec ops is incredibly important. And when done, right, it really increases the security of all of your software development life cycle, and therefore your whole organization,

Kim Carter 00:43:50 Docker Enterprise and Community: do they come with the content scanning out of the box? What about the Community Edition?

Diogo Mónica 00:43:58 For the Community Edition, effectively, a lot of what people do is pull the official images from the Docker Store, and every single Docker Store official image has actually been validated and scanned by Docker Security Scanning. We're responsible for actually verifying their integrity; we sign them with Notary, which is a tool that we use for content integrity, and then effectively allow you to have access to the highest quality images out there on the internet for Docker, as part of the Docker Store. On the enterprise side, what we give you extra is the fact that it can run on your own images. So you can deploy either on the Docker Store or on Docker Hub, but you can also deploy it on your own infrastructure. So you can buy Docker Datacenter, and Docker Datacenter will effectively do the same things for you. Every image that you push to Docker Datacenter, it will effectively run these scans on, build this bill of materials, and alert you about these vulnerabilities. That's effectively what's being added in the Enterprise Edition.

Kim Carter 00:44:54 So the content scanning, is it still paid for? I think it may have used to be that way, and that's why there are quite a few free tools in the same space. Is that correct?

Diogo Mónica 00:45:05 So content scanning for us is definitely a paid feature; it's a paid feature of the private repositories. The reason why it's a paid feature is because this content scanning is a lot more than just looking at lists of CVEs. There are a lot of open source tools that allow you to do the basics: looking at whether the package manager tells me I have a package that hasn't been updated, for example. This is useful, and it definitely catches things such as a package that should have been updated and hasn't been. But what we do with Docker Security Scanning, as I mentioned, is actually build a bill of materials. We go through every binary of the system; it doesn't really matter where it came from, whether it came from a package manager or not. And we find things: it doesn't matter if there's a vulnerable OpenSSL version, for example, or an OpenSSL library that was statically linked or dynamically linked, we're going to find it and we're going to alert you.

Kim Carter 00:45:58 In one of our previous discussions, you mentioned how Intel Software Guard Extensions, also known as SGX, along with the secure container environment, also known as SCONE, were going to make an impact on how we employ security in our Docker environments. Let's start with an examination of SGX, and then we can proceed to SCONE if we get time. So, SGX depends on the concept of an enclave. What is an enclave?

Diogo Mónica 00:46:28 So I’m not by all means an SCS expert, but effectively what is happening with, with STX sexually might be the big thing in, uh, coming to the security industry. Uh, at least since the introduction of, um, Intel VTD and VTX and DXC technologies, which have been around for awhile. So the, the, the promise of SGX and going to your point is, is the ability to create this secure enclave, even within a potentially compromised operating system. So effectively, what it allows you to do is it allows you to run code and start a trusting enclave in half secure inputs in secure outputs, and allow this allow the operating system itself, because these are hardware enforced guarantees from actually inspecting what’s happening. And what’s running inside of the enclave and guaranteeing that all the inputs from the enclave are secure signs and all the outputs of the enclave come from the enclave and cannot be spoofed.

Diogo Mónica 00:47:22 So it effectively allows us to not have to trust TPMs or BIOSes, and allows us to have this super cool property, which is that having an SGX enclave means that you can run code on an untrusted operating system. That can be used for a lot of things, Docker containers being one of them. And so the work that I mentioned is actually something pretty interesting: it's called SCONE. I think it was presented at OSDI last year, 2016, and it effectively looked at Linux containers that are being managed by Docker and tried to find a way of running them inside an SGX enclave, such that they could have trusted execution. And they were successful. It still has some limitations, but the early results are really promising.

Kim Carter 00:48:10 So do you know how the SGX chain of trust works? And can you explain that?

Diogo Mónica 00:48:14 No, absolutely not. If you want to know details about SGX, I would recommend that you go to the Invisible Things blog. That blog, by Joanna Rutkowska, is incredibly good; she's incredibly good at what she does. She explains SGX in a lot of detail, and she's been covering it. She's also part of the team that builds Qubes and does a lot of good work with isolation. There are also the original papers, and there are obviously the references from Intel itself on the hardware components. So I recommend, if you're interested, going and checking those out.

Kim Carter 00:48:47 A monolithic kernel containing tens of millions of lines of code, which are reachable from untrusted applications via all sorts of networking, USB, and driver APIs, has a huge attack surface. It seems that adding Docker into the mix exposes all of these vulnerabilities to each and every running container, making the attack surface grow exponentially. Can you explain how the security of libcontainer, which is now the default container format layer, works, and what is to stop attackers bypassing it and attacking the underlying huge attack surface of the shared kernel?

Diogo Mónica 00:49:25 That's a great question. So libcontainer actually came as a modification to Docker to replace LXC, and now it's actually in the form of runc. runc is effectively an open source implementation of a container execution engine. I'm actually on the TOB, the Technical Oversight Board, of the OCI project, and the OCI itself is effectively a body that is standardizing this tool that allows you to spawn and run containers. So what you're referring to as libcontainer is actually all built into runc itself. It's a battle-hardened, embedded, and opinionated runtime, if I recall correctly, is what we actually call it. And I think I've explained this before, but all of these tools and all of these security mechanisms have been in Linux, in the Linux kernel, for a long time.

Diogo Mónica 00:50:19 So if you were running applications on your Linux systems, you were already exposing your Linux systems to these applications; you were just not making use of these security features. What Docker does is add yet another layer of security. It adds a protective shell around your application that allows you to configure things like seccomp policies and SELinux, and all of these LSMs, and add and remove capabilities, and configure cgroups. So by running an application inside of Docker, you're only adding more security; you're adding almost no other exposure. If you were already running an application inside of a Linux system, adding it inside of a Docker container only makes it more secure.

Kim Carter 00:51:00 I’ve seen a good number of reports stating a high number of security vulnerabilities with an images on Docker hub, even up to 90% of official images. Can you talk about a case where a registry, a consumer has been compromised due to a vulnerability and an image that they pulled down and spun up?

Diogo Mónica 00:51:19 It’s an excellent question. So we actually built something called Docker content trust, which uses an open source software called notary, and that, that does image signing. And the whole concept there is that you do not need to trust the registry. So you can have offline signing keys as a developer and you sign your content, and then it doesn’t matter what the content is hosted, where it’s hosted. If it’s coming via HTPs or HTP, any running engine that pulls this image can validate that cryptographic key and can validate that the content has been signed and validated by the actual original developer. And you can actually have cool things like threshold signings. So you can sign with multiple keys. And only if these keys multiple keys were validated, then you actually execute the container. So on the side of the trust itself, we sign all of the official images and we sign it with keys that are, have the route components offline.

Diogo Mónica 00:52:10 So we follow the best practices. Pulling from the Docker Store and from Docker Hub is actually the best way for you to get the best container content out there, because we scan it every day, because we actually manually fix the vulnerabilities, and because we sign it with those keys. But if you don't want to trust the Hub with your own content, then you should use Docker Content Trust, and that is the way to go: all of that attack model that you're describing gets removed by the fact that you're only running signed content.

Kim Carter 00:52:41 So can you go into a little bit more detail around Docker Content Trust?

Diogo Mónica 00:52:46 Yeah, absolutely. So Docker Content Trust was released with Docker 1.8, if I'm not mistaken. It effectively is an opinionated implementation of an open source framework that we call Notary, which we also released at Docker. But what we did was release Notary in a completely independent fashion, so that other projects can also benefit from this level of security and from these guarantees around signing and integrity. And then we included it into Docker, and our point was, again: how can we make it trivial for people to verify the signatures of an image? So we integrated it in such a way that if you do a docker push, it gets signed with a key, and if you do a docker pull, it gets validated with a key, and that's it. By turning it on, you effectively are validating every single piece of content, and if that content is not signed, and it's not fresh, and it's not valid, then it doesn't get run.
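In practice that looks something like this (the image names are placeholders): with content trust enabled, pushes are signed automatically, and pulls refuse anything without a valid, fresh signature.

    export DOCKER_CONTENT_TRUST=1
    docker pull alpine:latest      # verified against its signed digest
    docker push myorg/my-app:1.0   # signed with your keys on push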

Diogo Mónica 00:53:36 And we went a little bit above and beyond the actual status quo. Instead of using something like GPG, which has a lot of issues when used directly or applied directly to a software update mechanism, we used something called The Update Framework. The original ideas were designed by the people behind the Tor project, who have a very stringent attacker model: obviously nation-state attackers trying to get code in there, trying to effectively compromise the machines of people using Tor. So they had to design a model for their software updates that resisted compromise of online keys and that had properties around freshness guarantees, so on and so forth. And then there was a group of researchers that picked up these original ideas and extracted them into a framework called TUF, T-U-F, The Update Framework. That was the framework we based ourselves on to create Notary, which ultimately went into Docker Content Trust. So it’s a very sophisticated way for you to do software updates, and it’s totally integrated into Docker and incredibly easy to use.

Kim Carter 00:54:44 So does Notary help us with, uh, knowing where an image originated from, if we’re pulling an image down from one registry or another?

Diogo Mónica 00:54:52 Correct. Well, if you’re pulling the image from a registry, you’re trusting that registry with that image, so if you pulled from it, you know that it came from that registry. But content ownership is at the level of who signed it originally. So in a way, yes, you can have provenance guarantees by trusting a specific key for your content: you know that the content came from the organization or entity that has control over that key, and therefore you know what the provenance of the image is. So you could use Notary for provenance.

Kim Carter 00:55:24 So I’m just thinking also, um, Docker uses secure hashes, or digests, around image provenance, doesn’t it? So can you go into a little bit of detail around how that works?

Diogo Mónica 00:55:35 Absolutely. So we effectively use SHA-256 hashes to describe what the contents of an image are. What we’re actually doing, in technical terms, is we have a Merkle DAG, a Merkle directed acyclic graph, that has all of these pieces of content, all these layers of the file system of the container image, so on and so forth, and all of them get hashed and aggregated into one final hash at the top of the tree. That hash effectively changes if any bit of any of the underlying leaves changes. And what Notary actually does is translate names like alpine or diogomonica/openvpn into secure hashes. That’s really the job of Notary: it validates signatures that tell you that openvpn or alpine actually corresponds to this hash over here. And so then what Docker does is what we call a pull by digest.

Diogo Mónica 00:56:33 So we use a content-addressable system. We download content from a remote host, but we tell the remote host the SHA-256 hash of the content that we want. And when the content arrives locally, we know how to verify its integrity once we hash what got downloaded, because no other content could produce the same hash. So we use hashes to effectively have a content-addressable system, and then those hashes get signed by Notary itself to provide the provenance guarantees and all of the things we were talking about before.
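
The pull-by-digest mechanism is visible from the CLI. A minimal sketch; the digest below is a placeholder, not a real value:

```
# Pull a tag and note the digest the daemon reports for it.
docker pull alpine:3.6
# ...
# Digest: sha256:<64-hex-digest>

# Pin the exact content by pulling by digest; every downloaded
# layer is re-hashed locally and rejected if it doesn't match.
docker pull alpine@sha256:<64-hex-digest>
```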

Kim Carter 00:57:04 So many of Docker’s defaults seem to be designed to allow DevOps and developers to get up and running with the least amount of friction and in minimal time. In adopting Docker, are we trading off security for the other benefits of containerization?

Diogo Mónica 00:57:22 That’s an excellent question. What I feel is exactly the opposite. I think there are good ways of implementing new technologies and there are obviously bad ways of implementing new technologies. But as I mentioned before, by the fact that you’re running an application inside of Docker, you’re only adding another layer of security. You’re only adding more ability to control the behavior of the application, an easy way for you to inspect the contents and the behavior of this particular image and to scan it, and a much easier and faster way of deploying updates and patching things, which is critical in the world of security. There are obviously right ways of doing things and wrong ways of doing things, but by and large, if you start using Docker, and you go toward a world where your CI is building Docker containers and where you’re using an orchestrator like Swarm to orchestrate all of your containers and to share all of your secrets, you’re going to be in a way better place than you were beforehand.

Diogo Mónica 00:58:20 You’re going to be in a place where you have the ability to monitor, inspect, rotate, and update all of your content and all of your applications, and you can restrict their behavior and improve their security through the underlying platform, where before you couldn’t. So, like everything, it’s a double-edged blade, and with Docker there are wrong ways of doing things. But for the majority of the use cases, it’s absolutely adding more security, not detracting from it.

Kim Carter 00:58:50 Images derived from other images inherit the same user defined in the parent image, explicitly or implicitly, so in most cases this will default to root. Docker’s default is to run containers, and all commands and processes within the container, as root. Was this a decision made with the aim of making things just work?

Diogo Mónica 00:59:11 Well, that wasn’t as much of a decision as it was just a historical artifact. The thing we have to realize is that we have 25 years of experience managing Unix systems, and throughout all of those 25 years, nobody has ever said to any developer or ops person that they should run things as root; that has never been the recommendation. All of these applications already drop root privileges, and all of the tutorials already say that running as root is wrong. So Docker didn’t actually add anything there. It only allowed you to do exactly what you were already doing before. If you had an application that was already correctly implemented and configured to run as non-root, then it’s very easy for you to run it as non-root inside of Docker; it’s actually incredibly trivial to drop the privileges of the whole container by using the USER directive to make sure you do so.
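
The same privilege drop can also be applied at run time, without touching the image. A minimal sketch; the UID/GID pair is an arbitrary example:

```
# Override the image's default user and run as an unprivileged
# UID/GID instead of root.
docker run --rm --user 1000:1000 alpine id
# uid=1000 gid=1000
```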

Diogo Mónica 01:00:01 The fact that we didn’t add anything there doesn’t mean the security of those applications isn’t, in a way, still your responsibility, even though we want to help you. What we did do is add user namespaces, for the lazy developers and ops people who don’t want to do the dropping of the actual UID from root to another user such as nobody. User namespaces effectively make root inside of the container not be root outside of the container. This is a daemon-side setting that you can enable, and once it’s enabled, root inside of your containers no longer has root privileges and gets mapped to a different ID outside of the container. So you can, in a very easy, sweeping way, say that every application that has been wrongly configured or wrongly made to run as root now no longer has those privileges.
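
The remapping is a one-line daemon setting. A minimal sketch; "default" asks Docker to create and use a dedicated dockremap user for the mapping:

```
# Write the remap setting (this overwrites any existing daemon.json,
# so merge by hand if you already have one), then restart the daemon.
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```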

Diogo Mónica 01:00:56 Plus, the fact that you’re running applications in Docker actually allows you to understand whether they’re following best practices or not, because you can inspect the Dockerfile, you can inspect the image, and you can inspect the actual code and the entry point that tells you how the application is going to be executed. So it becomes very easy for a security team to create automated scans that say, oh, this Docker image is insecure, or it should have these extra parameters added, or it should drop privileges in X, Y, or Z way, which effectively gives you more visibility and ultimately more security.

Kim Carter 01:01:29 So what do you actually use for those types of scans, to check that your configurations are meeting, I guess, best practices?

Diogo Mónica 01:01:38 We use something called Docker Bench, which effectively follows the best practices of the CIS Docker Benchmark. Uh, it’s been out a couple of years, and it’s been run and pulled millions of times, so a lot of people and organizations are using it. Instead of you having to read the 150-page best-practices recommendation document, it tells you in a very easy fashion, yes or no: do you pass this test, do you not pass this test, should you be changing this configuration? As much as we can, we are turning every security decision into a no-brainer choice, and we’re making the more secure option the one that’s turned on by default. That is effectively the path of Docker.
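
Docker Bench ships as a container itself. A simplified, minimal sketch; the project’s README carries the full recommended invocation with additional read-only mounts:

```
# Run the CIS Docker Benchmark checks against the local host and
# daemon using the docker/docker-bench-security image.
docker run --rm --net host --pid host --cap-add audit_control \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  docker/docker-bench-security
```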

Kim Carter 01:02:20 So, in terms of the six items that we discussed that are based around Linux, um, we’ve got kernel namespaces, control groups, Linux security modules, capabilities, secure computing mode, which is seccomp, and filesystem mounts. Which of those six do you think are the most important for people to look at and, uh, skill up on in order to make their environments more secure?

Diogo Mónica 01:02:45 I don’t think people should be looking at the fundamentals and the little nitty-gritty details. I think there has to be a way for people to know that deploying an application inside of a Docker container makes it safer, and we already do that, because we already ship defaults for all of these things that you mentioned. We have a default seccomp profile; by default, less than half of the capabilities on Linux come enabled; we have cgroups that come on by default; we have default profiles for SELinux and default profiles for AppArmor. So the reality is that if you want to harden your application further, by all means go tweak the knobs, but for the normal user, we already ship secure defaults, and we’re on a path where we continue hardening these security defaults and continue shipping software that just comes secure by default.
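
If you do want to check what a given container ended up with, the effective settings are visible through docker inspect. A minimal sketch; "mycontainer" is a hypothetical name:

```
# Show the security options and capability changes applied to a
# running container.
docker inspect --format \
  '{{ .HostConfig.SecurityOpt }} {{ .HostConfig.CapAdd }} {{ .HostConfig.CapDrop }}' \
  mycontainer
```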

Kim Carter 01:03:32 I was looking at capabilities, and it looked like there were quite a few that could still be turned off. Are you still looking at reducing that set?

Diogo Mónica 01:03:43 We’re always looking at reducing the set. But if you look at the NCC Group report on all of the container runtimes, you’ll see that NCC Group, which did a security evaluation of Docker versus a few other execution engines, concluded that Docker was the safest one. So we’re actually leading the pack in terms of security, in terms of dropping privileges, and in terms of security by default, and we’re going to continue to do so. Who are NCC Group? It’s an independent evaluation organization that does a lot of evaluations of different software. They’ve done a lot of open source work pro bono, though they usually charge, obviously, and they have an excellent crypto group that’s done some very public-facing evaluations of products out there.

Kim Carter 01:04:31 What interesting personal projects or events do you have on the go currently, Diogo?

Diogo Mónica 01:04:37 So right now I’m really excited about something that we’re calling service identities. It’s incredibly painful right now for any infrastructure engineer who wants mutual TLS between applications in their data center to do it in an easy way. So we’re going to give you the ability to create certificate authorities for yourself in a super-trivial manner, and give identities to every single one of your containers running in production, obviously taking care of the rotation for you. These core, fundamental infrastructure problems, the ones whose solutions would take you toward the 0.1% of companies with the best infrastructure security, those are the kinds of projects we’re trying to put into the orchestration layer with Docker Swarm, and we’re trying to give them to everyone else.
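
Swarm mode already works this way for cluster traffic. A minimal sketch, assuming a recent Docker version with swarm mode built in:

```
# Initializing a swarm creates a built-in certificate authority;
# every node gets its own certificate, and all cluster traffic is
# protected with mutual TLS.
docker swarm init

# Node certificates are rotated automatically; the CA material can
# also be rotated on demand.
docker swarm ca --rotate
```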

Kim Carter 01:05:24 So you’re working on that currently, are you? Or is it actually, um, nearing completion?

Diogo Mónica 01:05:28 Um, we’re working on it right now. We would like to have some really cool demos of it for DockerCon, which is coming up in April; it’s going to be in Austin. I definitely recommend you be there. It’s always incredibly exciting to see the vibrant community and the excitement around Docker; I always leave that conference really, really pumped about the stuff we’re working on. I’m also working on a lot of other things right now. We’re focusing a lot on hardening the default operating system that comes with the Docker editions. We’re working on securing the communication and the integration between InfraKit, which is yet another component of Docker, and Docker Swarm itself. And there’s a lot of other juicy stuff that I shouldn’t talk about, because it would take a little bit away from the surprise, and we love to surprise our users with really cool, easy features.

Kim Carter 01:06:25 So what have we not discussed that we really should have for our listeners?

Diogo Mónica 01:06:30 I think we covered a good amount. I think it’s pretty obvious at this point that Docker has matured when it comes to security, and that we are providing a lot of cool features for people’s infrastructure that people get for free. So there’s really no reason for you not to be using Docker. It’s interesting that in two years Docker went from people using it in spite of security to people using Docker because of its security. I think that’s incredibly exciting, and I think the security team here at Docker has been fantastic; they’ve all done a tremendously good job.

Kim Carter 01:07:11 So whereabouts can people find out more about you, Diogo, and what you do?

Diogo Mónica 01:07:16 Mostly on my personal website, diogomonica.com. And I’m also on Twitter, at twitter.com/diogomonica. Those would be the two biggest ones.

Kim Carter 01:07:25 And do you have any, uh, recent podcasts or conference talks that you think would help our listeners understand, I guess, the most important aspects to focus on around security?

Diogo Mónica 01:07:36 Yeah, there have been a series of really good talks lately, not just from me but from all of the fantastic people on my team, that I would recommend. I usually post them on my Twitter feed, but you can also find some on my personal website under the author link. The best ones are the most recent ones, or at least the ones with the best video quality, because there are always recordings that aren’t good enough to put online, where you can barely understand what people are saying. But there are a lot of videos there and a lot of good resources. The Docker blog is definitely a resource that I recommend everybody follow, along with the people on the team.

Kim Carter 01:08:12 Cool. There you go. Thanks for joining us today. Really appreciate it.

[End of Audio]

This transcript was automatically generated. To suggest improvements in the text, please contact [email protected].

3 comments
  • Great episode, but I was disappointed that there was no mention of docker for windows.

  • Interesting stuff, but holy smokes, someone is obsessed with asking about read-only containers…
    Otherwise it was great, keep it up.

  • This was a disappointing episode. Diogo basically did nothing but shout “docker has security by default” every time the interviewer asked about security, or “docker enterprise does that for you” when he asked about some other technical thing. This basically turned the episode into an advertisement for docker enterprise and made this episode lack technical depth.
