
SE Radio 553: Luca Casonato on Deno

Luca Casonato joins SE Radio’s Jeremy Jung for a conversation about Deno and Deno Deploy. They start with a look at JavaScript runtimes and their relation to Google’s open source JavaScript and WebAssembly engine V8, and why Deno was created. They discuss the WinterCG W3C community group for server-side JavaScript, why it’s difficult to ship new features in Node, and the benefits of web standards. From there they consider the benefits of creating an all-inclusive toolset like those of Rust and Go rather than relying on separate solutions, Deno’s Node compatibility layer, use cases for WebAssembly, the benefits and implementation of Deno Deploy, reasons to deploy on the edge, and what’s coming next.


Show Notes

Related Episodes

Resource Links

  1. Luca Casonato
  2. Deno
  3. Deno Deploy
  4. Deno Showcase
  5. Deno Subhosting
  6. Fresh web framework
  7. Cache Web API
  8. V8
  9. TC39
  10. WinterCG – Web-interoperable Runtimes Community Group
  11. The Anatomy of an Isolate Cloud
  12. Supabase Edge Functions
  13. Netlify Edge Functions
  14. Slack releases platform open beta powered by Deno
  15. GitHub Flat Data
  16. Shopify Oxygen
  17. Cloudflare Workers (Competing product to Deno Deploy)
  18. How Cloudflare KV works
  19. CockroachDB
  20. XKCD Standards comic

Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Jeremy Jung 00:00:16 Today I’m talking to Luca Casonato. He’s a member of the Deno Core team and a TC39 delegate. Luca, welcome to Software Engineering Radio.

Luca Casonato 00:00:25 Hey, thanks for having me.

Jeremy Jung 00:00:27 So today we’re going to talk about Deno, and on the website it says Deno is a runtime for JavaScript and TypeScript. So, I thought we could start with defining what a runtime is.

Luca Casonato 00:00:39 Yeah. That’s a great question. I think this question actually comes up a lot. Sometimes we also define Deno as a headless browser, or I don’t know, a JavaScript execution tool. What actually defines a runtime? I think what makes a runtime a runtime is that, a) it’s implemented in native code: it cannot be self-hosted, like you cannot self-host a JavaScript runtime. And it executes JavaScript or TypeScript or some other scripting language without relying on, well, yeah, I guess it’s the self-hosting thing. Like, it’s essentially a JavaScript execution engine which is not self-hosted. So yeah, it maybe has IO bindings, but it doesn’t necessarily need to. Maybe it allows you to read from the file system or make network calls, but it doesn’t necessarily have to. I think the primary definition is something which can execute JavaScript without already being written in JavaScript.

Jeremy Jung 00:01:30 And when we hear about JavaScript runtimes, whether it’s Deno or Node or Bun or anything else, we also hear about it in the context of V8. Could you explain the relationship between V8 and a JavaScript runtime?

Luca Casonato 00:01:47 Yeah, so V8 and JavaScriptCore and SpiderMonkey, these are all JavaScript engines. So, these are the low-level virtual machines that can parse your JavaScript code, turn it into bytecode, maybe turn it into compiled machine code, and then execute that code. But these engines do not implement any IO functions. They implement the JavaScript spec as written, and then they provide extension hooks for what they call host environments, like environments that embed these engines, to provide custom functionality that essentially pokes out of the sandbox, out of the virtual machine. And this is used in browsers: browsers have these engines built in; this is where they originated from. And then they poke holes into this sandboxed virtual machine to do things like, I don’t know, writing to the DOM, or console logging, or making Fetch calls and all these kinds of things.

Luca Casonato 00:02:39 And what a runtime, a JavaScript runtime, essentially does is it takes one of these engines and it then provides its own set of host APIs, like essentially its own set of ‘holes’ it pokes into the sandbox. And depending on what the runtime is trying to do, the way it will do this is going to be different, and the sort of API that is ultimately exposed to the end user is going to be different. For example, if you compare Deno and Node, Node is very loosey-goosey about how it pokes holes into the sandbox; it sort of just pokes them everywhere, and this makes it difficult to enforce things like runtime permissions, for example. Whereas Deno is more strict about how it pokes holes in the sandbox. Like, everything is either a web API or it’s behind this Deno namespace, which means that it’s really easy to find places where you’re poking out of the sandbox.
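
To make that point concrete, here is a minimal illustration (not from the episode; the URL and file path are placeholders) of the two places Deno lets code leave the sandbox:

```ts
// Everything that leaves the sandbox in Deno is either a web API...
const res = await fetch("https://example.com/");
console.log(res.status);

// ...or lives under the Deno namespace, which makes IO call sites easy to audit.
const text = await Deno.readTextFile("./notes.txt");
console.log(text.length);
```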

Luca Casonato 00:03:24 And really you can also compare these to browsers. Like, browsers are also JavaScript runtimes. They’re just not headless JavaScript runtimes, but JavaScript runtimes that also have a UI. And yeah, there’s a whole bunch of different kinds of JavaScript runtimes. And I think we’re also seeing a lot more embedded JavaScript runtimes. Like, for example, if you’ve used React Native before, you may be using Hermes as a JavaScript engine in your Android app, which is a custom JavaScript engine written just for React Native. And this is embedded within a React Native runtime, which is specific to React Native. So, it’s also possible to have runtimes where the backing engine can be exchanged, which is kind of cool.

Jeremy Jung 00:04:05 So it sounds like V8’s role, one way to look at it, is it can execute JavaScript code, but only pure functions, I suppose.

Luca Casonato 00:04:13 Pretty much, yeah.

Jeremy Jung 00:04:15 You can do anything that doesn’t interact with IO. So if you think about browsers, as you were mentioning, you need to interact with the DOM, or if you’re writing a server-side application, you probably need to receive or make HTTP requests, that sort of thing. All of that is not handled by V8. That has to be handled by an external runtime.

Luca Casonato 00:04:39 Exactly. There are some exceptions to this. For example, JavaScript technically has some IO built into its standard library: Math.random, its random number generation, is technically an IO operation. So technically V8 has some IO built in, right? And getting the current date from the user, that’s also technically IO. So, there are some very limited edge cases. It’s not that it’s purely pure, but V8, for example, has a flag to turn it completely deterministic, which means that it really is completely pure. And this is not something which runtimes usually have. This is a feature of an engine, because the engine is so low level that there’s so little IO that it’s very easy to make deterministic, whereas a runtime at a higher level has IO, so it’s much more difficult to make deterministic.

Jeremy Jung 00:05:30 And for things like, when you’re working with JavaScript, there’s asynchronous programming and so you have concurrency and things like that. Is that a part of V8 or is that the responsibility of the runtime?

Luca Casonato 00:05:44 That’s a great question. So, there are multiple parts to this. There’s JavaScript promises and sort of concurrent JavaScript execution, which is handled by V8. In pure V8, you can create a promise and you can execute some code within that promise. But without IO there’s actually no way to defer time, which means that with pure V8 you can either create a promise which executes right now, or you can create a promise that never executes, but you can’t create a promise that executes in 10 seconds, because there’s no way to measure 10 seconds asynchronously. What runtimes do is they add something called an event loop on top of this, on top of the base engine. And that event loop, for example, a very simple event loop, might have a timer in it which every second looks at whether there’s a timer scheduled to run within that second.

Luca Casonato 00:06:36 And if that timer exists, it’ll go call out to V8 and say you can now execute that promise. But V8 is still the one that’s keeping track of which promises exist and the code that is meant to be invoked when they resolve, all that kind of thing. But the underlying infrastructure that actually decides which promises get resolved at what point in time, the asynchronous IO, as this is called, is driven by the event loop, which is implemented by the runtime. So Deno, for example, uses Tokio for its event loop. This is an event loop written in Rust; it’s very popular in the Rust ecosystem. Node uses libuv, a relatively popular event loop implementation written in C. And libuv was written for Node; Tokio was not written for Deno. But yeah, Chrome has its own event loop implementation. Bun has its own event loop implementation.
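
As a rough illustration of that division of labor (this is not Deno’s actual internals; the names here are made up), a runtime-level loop tracks deadlines while the engine tracks the promises themselves:

```ts
// Toy timer queue: the "runtime" side. The engine only ever sees the Promise.
type Timer = { fireAt: number; resolve: () => void };
const timers: Timer[] = [];

// A setTimeout-like API: register a deadline, hand a promise back to user code.
function sleep(ms: number): Promise<void> {
  return new Promise<void>((resolve) => {
    timers.push({ fireAt: Date.now() + ms, resolve: () => resolve() });
  });
}

// One turn of the event loop: fire anything whose deadline has passed, which
// lets the engine resolve the corresponding promises and run their callbacks.
function tick(): void {
  const now = Date.now();
  for (let i = timers.length - 1; i >= 0; i--) {
    if (timers[i].fireAt <= now) {
      timers[i].resolve();
      timers.splice(i, 1);
    }
  }
}
```

A real event loop such as Tokio or libuv also drives sockets, file IO, and signals, but the shape is the same: the loop decides when things fire, and the engine runs the JavaScript that was waiting on them.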

Jeremy Jung 00:07:27 We might go a little bit more into that later, but I think what we should probably go into now is why make Deno? Because you have Node, which is currently very popular, and the co-creator of Deno, to my understanding, actually created Node. So maybe you could explain to our audience what was missing or what was wrong with Node, where they decided I need to create a new runtime?

Luca Casonato 00:07:55 Yeah, so the primary point of concern here was that Node was slowly diverging from browser standards with no real path to reconverging. Like, there was nothing that was pushing Node in the direction of standards compliance, and there was nothing that was forcing Node to innovate. And we really saw this because in the time between, I don’t know, 2015 and 2018, Node was slowly working on ESM while browsers had already shipped ESM for like three years. Node did not have Fetch; Node only got Fetch last year, right? Six, seven years after browsers got Fetch. Node’s stream implementation is still very divergent from standard web streams. Node was very reliant on callbacks. It still is; promises in many places of the Node API are an afterthought, which makes sense because Node was created in a time before promises existed, but there was really nothing that was pushing Node forward, right?

Luca Casonato 00:08:59 Like, nobody was actively investing in improving the API of Node to be more standards-compliant. And so what we really needed was a new greenfield project which could demonstrate that writing a new server-side runtime is a) viable and b) totally doable with an API that is more standards-compliant. Like, essentially you can write a browser, a headless browser, and have that be an excellent-to-use JavaScript runtime, right? And then there were some things that were added on top of that, like TypeScript support, because TypeScript was, and still is, incredibly popular, even more so than it was four years ago when Deno was created or envisioned. And this permission system: Node really poked holes into the V8 sandbox very early on, and it’s going to be very difficult for Node to ever reconcile this, especially because some of the APIs that it exposes are just so incredibly low-level that, I don’t know, you can mutate random memory within your process. Which, if you want to have a secure sandbox, just doesn’t work; it’s not compatible.

Luca Casonato 00:10:04 So there really needed to be a place where you could explore this direction and see if it worked. And Deno was that. Deno still is that. And I think Deno has outgrown that now into something which is much more usable as a production-ready runtime. And many people do use it in production, and now Deno is on the path of slowly converging back with Node from both directions. Like, Node is slowly becoming more standards-compliant, and depending on who you ask, this was done because of Deno, and some people say it had already been going on and Deno just accelerated it. But that’s not really relevant, because the point is that Node is becoming more standards-compliant, and the other direction is Deno is becoming more Node-compliant. Like, Deno is implementing Node compatibility layers that allow you to run code that was originally written for the Node ecosystem in the standards-compliant runtime. So through those two directions, the runtimes are sort of coming back towards each other. I don’t think they’ll ever merge, but we’re getting to a point pretty soon, I think, where it doesn’t really matter which runtime you write for, because you’ll be able to run code written for one runtime in the other runtime relatively easily.

Jeremy Jung 00:11:14 So if you’re saying the two are becoming closer to one another, becoming closer to the web standard that runs in the browser, if you’re talking to someone who’s currently developing in Node, what’s the incentive for them to switch to Deno versus continue using Node and then hope that eventually they’ll kind of meet in the middle?

Luca Casonato 00:11:37 Yeah. So, I think Deno is a lot more than just a runtime, right? Like, a runtime executes JavaScript; Deno executes JavaScript, it executes TypeScript. But Deno is so much more than that. Like, Deno has a built-in formatter, it has a built-in linter, it has a built-in testing framework, a built-in benching framework, it has a built-in bundler, it can create self-contained executables. Yeah, like, bundle your code and the Deno executable into a single executable that you can ship off to someone. It has a dependency analyzer, it has editor integrations, it has... like, I could go on for hours about all of the auxiliary tooling that’s inside of Deno that’s not a JavaScript runtime. And also, Deno as a JavaScript runtime is just more standards-compliant than any of the other server-side runtimes right now. So, if you’re really looking for something which is standards-compliant, which is going to live on forever, then it’s Deno. Like, you cannot kill off the Fetch API ever. The Fetch API is going to live forever because Chrome supports it. And the same goes for local storage and, I don’t know, the Blob API and all these other web APIs. They have shipped in browsers, which means that they will be supported until the end of time. And yeah, maybe Node has also reached that with its API, probably, to some extent. But yeah, don’t underestimate the power of like 3 billion Chrome users that would scream immediately if the Fetch API stopped working. Right?
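
As a small taste of that built-in tooling (the std version pinned here is only an example; pin whatever is current):

```ts
// test.ts: the built-in test runner needs no extra framework.
import { assertEquals } from "https://deno.land/std@0.177.0/testing/asserts.ts";

Deno.test("addition works", () => {
  assertEquals(1 + 2, 3);
});
```

`deno test` runs it, `deno fmt` formats it, and `deno lint` lints it, all with zero configuration, which is the point Luca makes next about Go- and Rust-style tooling.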

Jeremy Jung 00:12:56 Yeah. I think maybe what it sounds like also is that because you’re using the API that’s used in the browser, in places where you deploy JavaScript applications in the future, you would hope that those would all settle on using that same API, so that if you were using Deno you could host it at different places and not worry about, do I need to use a special API? Like maybe you would in Node.

Luca Casonato 00:13:26 Yeah, exactly. And this is actually something which we’re specifically working towards. So, I don’t know if you’ve heard of WinterCG? It’s a community group at the W3C that Cloudflare and Deno and some others, including Shopify, started last year, where essentially we’re trying to standardize the concept of what a server-side JavaScript runtime is and what APIs it needs to have available to be standards-compliant, and essentially write this portability down somewhere: write down exactly what code you can write and expect to be portable. And we can see that all of the big players that are involved in building JavaScript runtimes right now are actively engaged with us at WinterCG and are actively building towards this future. So, I would expect that any code that you write today which runs in Deno, runs in Cloudflare Workers, runs on Netlify Edge Functions, runs on the Vercel Edge Runtime, runs on Shopify Oxygen, is going to run on the other four of those within the next couple of years here.

Luca Casonato 00:14:28 Like, I think the APIs of these are going to converge to be essentially the same. There’s obviously always going to be some nuances. Like, I don’t know, Chrome and Firefox and Safari don’t perfectly have the same API everywhere, right? Like, Chrome has some Web Bluetooth capabilities that Safari doesn’t, or Firefox has some, I don’t know, non-standard extensions to the error object which none of the other runtimes do. But overall, you can expect these runtimes to mostly be aligned. And I think that’s really, really excellent. And that’s, I think, really one of the reasons why one should really consider building for this standard runtime, because it just guarantees that you’ll be able to host this somewhere in five years’ time and ten years’ time with very little effort. Like, even if Deno goes under or Cloudflare goes under or, I don’t know, nobody decides to maintain Node anymore, it’ll be easy to run somewhere else. And also, I expect that the big cloud vendors will ultimately provide managed offerings for the standards-compliant JavaScript runtime as well.
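
The kind of portable code WinterCG is aiming at looks roughly like this sketch: only web-standard APIs (Request, Response, URL, fetch) appear in the handler body, and only the registration line differs per runtime. The route and URL are placeholders.

```ts
// Portable request handler: nothing runtime-specific inside the function.
async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);
  if (url.pathname === "/proxy") {
    return await fetch("https://example.com/"); // fetch exists in all these runtimes
  }
  return new Response("hello", { headers: { "content-type": "text/plain" } });
}

// Registration is the runtime-specific part, e.g. in Deno:
Deno.serve(handler);
// On Cloudflare Workers it would instead be: export default { fetch: handler };
```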

Jeremy Jung 00:15:28 And this WinterCG group, is Node a part of that as well?

Luca Casonato 00:15:33 Yes, we’ve invited Node to join. Due to the complexities of how Node’s internal decision-making system works, Node is not officially a member of WinterCG. There are some individual members of the Node technical steering committee who are participating. For example, James M. Snell is my co-chair on WinterCG; he works at Cloudflare and is also a Node TSC member. Matteo Collina, who has been instrumental to getting Fetch landed in Node, is also actively involved. So, Node is involved, but because Node is Node and Node’s decision-making process works the way it does, Node is not officially listed anywhere as a member. But yeah, they’re involved. And maybe they’ll be a member at some point. But yeah. Let’s see.

Jeremy Jung 00:16:20 Yeah, so it sounds like you’re thinking that’s more of a governance or an organizational aspect of Node than it is a technical limitation. Is that right?

Luca Casonato 00:16:32 Yeah, like, I obviously can’t speak for the Node technical steering committee, but I know that there’s a significant chunk of the Node technical steering committee that is very favorable towards standards compliance. But parts of the Node technical steering committee are not; they are either indifferent or are actively, I don’t know if they’re still actively working against this, but have actively worked against standards compliance in the past. And because the Node governance structure is so open and lets all these voices be heard, that just means that decision-making processes within Node can take very long. Like, this is also why the Fetch API took eight years to ship; this was not a technical problem. And it is also not a technical problem that Node does not have URLPattern support, or that the File global or the Web Crypto API was not on the global object until like late last year, right? These are not technical problems; these are decision-making problems. And yeah, that was also part of the reason why we started Deno as a separate thing, because you can try to innovate Node from the inside, but innovating Node from the inside is very slow, very tedious, and requires a lot of fighting. And sometimes just showing somebody from the outside, look, this is the bright future you could have, makes them more inclined to do something.

Jeremy Jung 00:17:54 Do you have a sense for, you gave the example of Fetch taking eight years to get into Node. Do you have a sense of what the typical objection is to something like that? Like, I understand there’s a lot of people involved, but why would somebody say, I don’t want this in?

Luca Casonato 00:18:09 Yeah, so for Fetch specifically, there were many different kinds of concerns. I can maybe list two of them. One of them was, for example, that the Fetch API is not a good API and as such, Node should not have it. Which is sort of missing the point: because it’s a standard API, how good or bad the API is is much less relevant, because if you can share the API, you can also share a wrapper that’s written around the API, right? And then the other concern was, Node doesn’t need Fetch because Node already has an HTTP API. So, these are both examples of concerns that people had for a long time, and it took a long time to either convince these people or to push the change through anyway. And this is also the case for other things, like, for example, web crypto: why do we need web crypto? We already have Node crypto.

Luca Casonato 00:18:59 Or why do we need yet another streams implementation? Node already has four different streams implementations. Like, why do we need web streams? I don’t know if you know this XKCD where there’s 14 competing standards, so let’s write a 15th standard to unify them all, and then at the end we just have 15 competing standards. So I think this is also the kind of thing that people were concerned about, but I think what we’ve seen here is that this is really not a concern that one needs to have, because it turns out in the end that if you implement web APIs, people will use web APIs, and will use web APIs only for their new code. It takes a while, but we’re seeing this with ESM versus require: new code written with require is much less common than it was two years ago. And new code using XHR, whatever it’s called, XMLHttpRequest, you know, the one; compared to using Fetch, nobody uses that anymore. Everybody uses Fetch.

Luca Casonato 00:19:59 And in Node, if you write a little script, you’re going to use Fetch; you’re not going to use Node’s http.get API or whatever. So yeah, we’re going to see the same thing with ReadableStream. We’re going to see the same thing with web crypto. We’re going to see the same thing with Blob. I think one of the big ones where Node is still, and I don’t think this is one that’s ever going to get solved, is the Buffer global in Node. Like, we have this Uint8Array global in all the runtimes, including browsers, and Buffer is like a superset of that, but it’s in global scope. So, it’s sort of this non-standard extension of Uint8Array that people in Node like to use. And it’s not compatible with anything else, but because it’s so easy to get at, people use it anyway. So those are also kinds of problems that we’ll have to deal with eventually. And maybe that means that at some point the Buffer global gets deprecated; I don’t know, it probably can never get removed. But these are the kinds of conversations that the Node TSC is going to have to have internally in, I don’t know, maybe five years.
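
For what that superset relationship means in practice (a small illustration, not from the episode):

```ts
// Buffer is a Node-specific subclass of the standard Uint8Array.
import { Buffer } from "node:buffer";

const buf = Buffer.from("hello");
console.log(buf instanceof Uint8Array); // true: code written against Uint8Array accepts it
console.log(buf.toString("base64"));    // Buffer-only convenience, not portable
```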

Jeremy Jung 00:20:57 Yeah. So, at a high level, what’s shipped in the browser went through the ECMAScript approval process, people got it into the browser, and once it’s in the browser, it’s probably never going away. And because of that, it’s safe to build on top of it for these server runtimes, because it’s never going away from the browser. And so everybody can kind of use it into the future and not worry about it, yeah.

Luca Casonato 00:21:24 Exactly. Yeah. And that’s excluding the benefit that also if you have code that you can write once and use in both the browser and the server-side runtime, like that’s really nice. That’s the other benefit.

Jeremy Jung 00:21:35 Yeah, I think that’s really powerful, and that right now, when someone’s looking at running something in Cloudflare Workers versus running something in the browser versus running something in Node, I think a lot of people make the assumption it’s just JavaScript, so I can use it as is. But there are, at least currently, differences in what APIs are available to you. Earlier you were talking about how Deno is more than just the runtime; it has a linter, formatter, file watcher, there’s all sorts of stuff in there. And I wonder if you could talk a little bit to the reasoning behind that, versus having them all be separate things.

Luca Casonato 00:22:17 Yeah, so the reasoning here is essentially if you look at other modern languages — like Rust is a great example; Go is a great example. Even though Go was designed around the same time as Node, it has a lot of these same tools built in. And what it really shows is that if the ecosystem converges — is essentially forced to converge — on a single set of built-in tooling, A) that built-in tooling becomes really, really excellent because everybody’s using it. And also it means that if you open any project written by any Go developer, any Rust developer, and you look at the tests, you immediately understand how the test framework works, and you immediately understand how the assertions work, and you immediately understand how the build system works, and you immediately understand how the dependency imports work, and you immediately understand like, I want to run this project and I want to restart it when my file changes.

Luca Casonato 00:23:04 Like, you immediately know how to do that because it’s the same everywhere. And this kind of feeling of having to learn one tool and then being able to use it across all of the projects, like being able to contribute to open source, when you’re moving jobs, whatever, between personal projects that you haven’t touched in two years, you know, being able to learn this once and then use it everywhere: such an incredibly powerful thing. Like, people don’t appreciate this until they’ve used a runtime or language which provides this to them. Like, you can go to any Go developer and ask them. There’s this saying in the Go ecosystem that gofmt is nobody’s favorite... oh wait, no, I don’t remember how the saying goes. But the saying essentially implies that the way that gofmt formats code maybe not everybody likes, but everybody loves gofmt anyway, because it just makes everything look the same.

Luca Casonato 00:23:54 And like you can read your friend’s code, your colleagues’ code, your new job’s code the same way that you did your code from two years ago. And that’s such an incredibly powerful feeling, especially if it’s like well-integrated into your IDE. You clone a repository, open that repository and like your testing panel on the left-hand side just populates with all the tests, and you can click on them and run them. And if an assertion fails, it’s like the standard output format that you’re already familiar with. And it’s a really great feeling. And if you don’t believe me, just go try it out and then you will believe me.

Jeremy Jung 00:24:26 Yeah, no, I’m totally with you. I think it’s interesting because with JavaScript in particular, it feels like the default in the community is the opposite, right? There’s so many different ways, or so many different build tools and testing frameworks and formatters, and it’s very different than like you were mentioning a Go or a Rust that are more recent languages where they just include that all bundled in.

Luca Casonato 00:24:59 Yeah. And I think you can see this as well in the time that an average JavaScript developer spends configuring their tooling compared to a Rust developer. Like, I write Rust all day every day, and I spend maybe 2, 3% of my time configuring Rust tooling: doing dependency imports, opening a new project, creating a formatter config file, I don’t know, deleting the build directory, stuff like that. That’s essentially what it means for me to configure my Rust tooling. Whereas if you compare this to a front-end JavaScript project, you have to deal with making sure that your React version is compatible with your Next version, is compatible with your Vite version, is compatible with your whatever version, right? This is all not automatic. And as a front-end developer, you don’t have just npm installed. No, you have npm installed, you have yarn installed, you have pnpm installed, you probably have Bun installed, and, I don’t know, to use any of these you need to have Corepack enabled in Node, and you need to have all of their global bin directories symlinked into, or included in, your path.

Luca Casonato 00:26:03 And then if you install something and you want to update it, you don’t know, did I install it with yarn, did I install it with pnpm? Like, this is significant complexity, and you tend to spend a lot of time dealing with dependencies, and dealing with package management, and dealing with tooling configuration, setting up ESLint, setting up Prettier. And I think that Prettier, for example, really showed this; it was one of the first things in the JavaScript ecosystem which was like, no, we’re not going to give you a config that you can spend six hours configuring; it’s going to be like seven options and here you go. And everybody used it, because nobody likes configuring things, it turns out.

Luca Casonato 00:26:47 And even though there’s always the people that say, oh, well, I won’t use your tool unless... like, we get this all the time... like, I’m not going to use deno fmt because I can’t, I don’t know, remove the semicolons, or use single quotes, or change my tab width to 16, right? Like, okay, wait until all of your coworkers are going to scream at you because you set the tab width to 16, and then see what they change it to. And then you’ll see that it’s actually the exact default that everybody uses. So it’ll take a couple more years, but I think we’re also going to get there. Like, Node is starting to implement a test runner, and I think over time we’re also going to converge on some standard build tools. Like, Vite, for example, is a great example of this: doing a front-end project nowadays, building new front-end tooling that’s not built on Vite? Yeah, don’t. Vite’s become the standard. And I think we’re going to see that in a lot more places.

Jeremy Jung 00:27:38 Yeah. Though I think it’s tricky, right? Because you have so many people with their existing projects, and you have people who are starting new projects and they’re just searching the internet for what they should use. So, you’re going to have people on webpack, you’re going to have people on Vite, and I guess now there’s going to be Turbopack, I think, is another one that’s coming. There’s all these different choices, right? And I think it’s hard to really settle on one, I guess, but yeah.

Luca Casonato 00:28:09 Yeah. Like, I think this is, in my personal opinion, also a failure of the Node technical steering committee, for the longest time, to not decide that, yes, we’re going to bless this as the standard formatter for Node, and this is the standard package manager for Node. And they sort of did. Like, for example, Node blessed npm as the standard package manager for Node, but it didn’t innovate on npm. Like, the Node technical steering committee did not force npm to innovate. npm is a private company, ultimately bought by GitHub, and they had full control over how the npm CLI evolved, and nobody forced npm to make sure that package install times are six times faster than they were three years ago. Like, nobody did that. So, it didn’t happen. And I think this is really a failure of the Node technical steering committee and also the wider JavaScript ecosystem, of not being persistent enough with a focus on performance, focus on user experience, and focus on simplicity. Like, things got so out of hand, and I’m happy we’re going in the right direction now, but yeah, it was terrible for some time.

Jeremy Jung 00:29:14 So, I want to talk a little bit about how we’ve been talking about Deno in the context of you just using Deno, using its own standard library, but just recently, last year, you added a compatibility shim where people are able to use Node libraries in Deno. And I wonder if you could talk to that. Earlier you had mentioned that Deno has a different permissions model, and on the website it mentions that Deno’s standard HTTP server is two times faster than Node’s in a Hello World example. And I’m wondering what kind of benefits people will still get from Deno if they choose to use packages from Node?

Luca Casonato 00:30:01 Yeah, that’s a great question. So, just to clarify what we actually implemented: what we have is support for you to import npm packages. So, you can import any npm package from npm, from the TypeScript or JavaScript ECMAScript module that you already have for your Deno code. And we will, under the hood, make sure it’s installed somewhere in some directory globally, like pnpm does; there’s no local node_modules folder you have to deal with, there’s no package.json you have to deal with, and there’s no package.json versioning you need to deal with. What you do is you write import cowsay from npm:cowsay@1, and that will import cowsay at the version tagged 1. And it’ll do the semver resolution the same way Node does, or the same way npm does, rather.
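
In code, that looks something like this (cowsay and its say API are just the example Luca mentions; any npm package works the same way):

```ts
// npm: specifier: Deno fetches and caches the package globally,
// with no package.json and no local node_modules folder.
import cowsay from "npm:cowsay@1";

console.log(cowsay.say({ text: "hello from npm" }));
```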

Luca Casonato 00:30:53 And what you get from that is essentially this backdoor, a call out to all of the existing Node code that has been written. Like, you cannot expect Deno developers to rewrite all of that. I don’t know, there was this time when Deno did not really have that many third-party modules yet; it was very early on, and if you wanted to connect to Postgres and there was no Postgres driver available, then the solution was to write your own Postgres driver. And that is obviously not great. So, the better solution here is to let users, for these packages where there’s no Deno-native or web-native or standards-native package yet that is importable with URL specifiers, import them from npm. So, it’s sort of this backdoor into the existing npm ecosystem. And we explicitly, for example, don’t allow you to create a package.json file or import bare Node specifiers, because we want to stay standards-compliant here, but to make this work effectively we need to give you this little backdoor.

Luca Casonato 00:31:56 And inside of this backdoor, all hell... or, like, everything is terrible inside there, right? Like, inside there you can do bare specifiers, inside there there’s package.json, and there’s crazy Node resolution, and __dirname, and CommonJS, and all of that stuff is supported inside of this backdoor to make all the npm packages work. But on the outside it’s exposed as these nice ESM-only npm specifiers. And the reason you would want to use this over just using Node directly is because, again, you want to use TypeScript with no config necessary, you want to have a formatter, you want to have a linter, you want to have tooling that does testing and benchmarking and compiling or whatever, and all of that’s built in. And you want to run this on the edge, close to your users, in like 35 different points of presence.

Luca Casonato 00:32:47 It’s like, okay, push it to your Git repository, go to this website, click a button two times, and it’s running in 35 data centers. This is the kind of developer experience that you do not get... I will argue that you cannot get... with Node right now. Like, even if you’re using something like ts-node, it is not possible to get the same level of developer experience that you do with Deno. The speed at which you can iterate on your projects, like create new projects, iterate on them, is incredibly fast. And you know, I can open a folder on my computer, create a single file, main.ts, put some code in there, and then call deno run main.ts, and that’s it. Like, I did not need to do npm install, and I did not need to do npm init -y and remove the license and version fields from the generated package.json and set private to true and whatever else, right? It just all works out of the box, and I think that’s what a lot of people come to Deno for and then ultimately stay for. And also, yeah, standards compliance. So, things you build in Deno now are going to work in five, ten years with no hassle.

Jeremy Jung 00:33:53 And so with this compatibility layer, or this shim, is it where the Node code is calling out to Node APIs and you’re replacing those with Deno-compatible equivalents?

Luca Casonato 00:34:07 Yeah, exactly. Like, for example, we have a shim in place that shims out the Node crypto API on top of the Web Crypto API. Some people may be familiar with this in the form of Browserify shims, if anybody still remembers those. Essentially, in your front-end tooling you were able to import from Node crypto in your front-end projects, and then behind the scenes your webpack or your Browserify or whatever would take that import of Node crypto and would replace it with this shim, which exposed the same API as Node crypto but under the hood wasn’t implemented with native calls; it was implemented on top of web crypto, or implemented in userland even. And there’s something similar here. There’s a couple edge cases of APIs where we do not expose the underlying thing that we shim to to end users outside of the Node shim.

Luca Casonato 00:34:58 So there are some APIs... I don’t know if I have a good example... like Node’s process.nextTick, for example. To properly be able to shim process.nextTick, you need to implement this within the event loop in the runtime. And you don’t need this in Deno, because in Deno you use the web-standard queueMicrotask to do this kind of thing. But to be able to shim it correctly and run Node applications correctly, we need to have this sort of backdoor into some ugly APIs which natively integrate into the runtime. But yeah.
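
A naive version of such a shim might look like this (a sketch only; Deno’s real Node compatibility layer hooks into the runtime so that the ordering matches Node exactly):

```ts
// Approximating Node's process.nextTick with the web-standard queueMicrotask.
// Good enough for many programs, but not a perfect match for Node's ordering,
// which is why the real shim needs runtime support.
function nextTick(callback: (...args: unknown[]) => void, ...args: unknown[]): void {
  queueMicrotask(() => callback(...args));
}

nextTick(() => console.log("runs after the current synchronous work"));
console.log("runs first");
```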

Jeremy Jung 00:35:27 Anytime you’re replacing a component with a shim, I think there are concerns about additional bugs or changes in behavior that can be introduced. Is that something that you’re seeing, and how are you accounting for that?

Luca Casonato 00:35:43 Yeah, it’s an excellent question. So, this is actually a concern that we have all the time, and it’s not even just introducing bugs; sometimes it’s removing bugs. Like, sometimes there are bugs in the Node standard library which are there, and people are relying on those bugs being there for their applications to function correctly. And we’ve seen this a lot: we implement something from scratch, we don’t make that same bug, and then the test fails, or the application fails. So, what we do is we actually run Node’s test suite against Deno’s shim layer. Node has a very extensive test suite for its own standard library, and we can run this suite against our shims to find things like this. And there are still edge cases, obviously, where maybe there’s a bug which Node was not even aware of existing, or maybe it’s now intended behavior because somebody relies on it, right?

Luca Casonato 00:36:32 Like, the second somebody relies on some non-standard or some buggy behavior, it becomes intended, but maybe there was no test that explicitly tests for this behavior. So, in that case we’ll add our own tests to ensure that. But overall, we can already catch a lot of these by just testing against Node’s tests. And then the other thing is we run a lot of real code. Like, we’ll try running Prisma, and we’ll try running Vite, and we’ll try running Next.js, and we’ll try running, I don’t know, a bunch of other things that people throw at us, and check that they work. And if they work and there are no bugs, then we did our job well and our shims are implemented correctly. And then there are obviously always the edge cases where somebody did something absolutely crazy that nobody thought possible, and then they’ll open an issue on the Deno repo, and we scratch our heads for three days, and then we’ll fix it, and then in the next release there’ll be a new bug that we added to make the compatibility with Node better. So yeah, running tests is the main thing: running Node’s tests.

Jeremy Jung 00:37:30 Are there performance implications if someone is running an Express app or a Next.js app in Deno? Will they get any benefits from the Deno runtime in performance?

Luca Casonato 00:37:42 Yeah, there actually are performance implications, and they’re usually the opposite of what people think. Usually when you think of performance implications, it’s always a negative thing, right? It’s like a compromise: the shim layer must be slower than the real Node, right? It’s not. Like, we can run Express faster than Node can run Express. And obviously not everything is faster in Deno than it is in Node, and not everything is faster in Node than it is in Deno. It’s dependent on the API, dependent on what each team decided to optimize. And this also extends to other runtimes. Like, you can always cherry-pick results, I don’t know, to make your runtime look faster in certain benchmarks, but overall what really matters, the first important step for good Node compatibility, is to make sure that if somebody runs their Node code in Deno or your other runtime or whatever, it performs at least the same.

Luca Casonato 00:38:33 And then anything on top of that is a great cherry on top, perfect, but make sure the baseline is at least the same. And I think, yeah, we have very few APIs where there’s a significant performance degradation in Deno compared to Node, and we’re actively working on these things. Like, Deno is not a project that’s done, right? We have, I think at this point, like 15 or 16 or 17 engineers working on Deno, spanning across all of our different projects. And we have a whole team that’s dedicated to performance and a whole team that’s dedicated to Node compatibility. So, these things get addressed, and we make patch releases every week and a minor release every four weeks. So yeah, it’s not at a standstill. It’s constantly improving.

Jeremy Jung 00:39:16 Another thing I’ve seen with Deno is it supports running WebAssembly binaries, so you can export functions and call them from TypeScript. I was curious if you’ve seen practical uses of this in production within the context of Deno?

Luca Casonato 00:39:35 Yeah, there are actually a bunch of really practical use cases. So, probably the most executed bit of WebAssembly inside of Deno right now is actually esbuild. esbuild has a WebAssembly build. Like, esbuild is something that’s written in Go; you have the choice of either running it natively in machine code, as an ELF process on Linux or on Windows or whatever, or you can use the WebAssembly build, and then it runs in WebAssembly. And the WebAssembly build is maybe 50% slower than the native build, but that is still significantly faster than Rollup or whatever else people use nowadays to do JavaScript bundling. I don’t know, I just use esbuild always. So, for example, the Deno website is running on Deno Deploy, and Deno Deploy does not allow you to run subprocesses, because it’s this edge runtime which has certain security permissions, certain things that are not granted, one of them being subprocesses.

Luca Casonato 00:40:28 So it needs to execute esbuild, and the way it executes esbuild is by running it inside of WebAssembly, because WebAssembly is secure. WebAssembly is something which is part of the JavaScript sandbox; it’s inside the JavaScript sandbox, it doesn’t poke any holes out. So it’s able to run within very strict security contexts. And then other examples are, I don’t know, you want to have an HTML sanitizer which is actually built on the real HTML parser in a browser. Like, we have an HTML sanitizer called Ammonia? I don’t remember. There’s an HTML sanitizer library on deno.land/x which is built on the HTML parser from Firefox, which essentially ensures that your HTML... like, if you do HTML sanitization, you need to make sure your HTML parser is correct, because if it’s not, your browser might parse some HTML one way and your sanitizer parses it another way, and then it doesn’t sanitize everything correctly. So there’s the Firefox HTML parser compiled to WebAssembly; you can use that to do HTML sanitization. Or the Deno documentation generation tool, for example, deno doc: there’s a WebAssembly build for it that allows you to programmatically generate documentation for your TypeScript modules. And also deno fmt is available as a WebAssembly module for programmatic access, and a bunch of other internal Deno components as well.

Jeremy Jung 00:41:48 What are some of the current limitations of WebAssembly in Deno? For example, from WebAssembly can I make HTTP requests? Can I read files? That sort of thing.

Luca Casonato 00:42:02 Yeah. So, when you spawn a piece of WebAssembly (they’re called instances, WebAssembly instances), it runs inside of the same VM, the same V8 isolate, as it’s called. But it’s like a completely fresh sandbox, sort of, in the sense that, as I told you, an engine essentially implements no IO calls, right? And a runtime does; a runtime pokes holes into the engine. WebAssembly by default works the same way: there are no holes poked into its sandbox. So, you have to explicitly poke some holes if you want to do HTTP calls, for example. When you create the WebAssembly instance, you can give it something called imports, which are essentially JavaScript function bindings which you can call from within the WebAssembly. And you can use those function bindings to do anything you can from JavaScript. You just have to pass them through explicitly.
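
Concretely, the explicit hole-poking looks something like this sketch (the module URL, the import names, and the exported run function are all hypothetical; WebAssembly.instantiateStreaming itself is the standard API):

```ts
// The only capabilities the Wasm instance gets are the functions we hand it.
const imports = {
  env: {
    log_now: () => console.log(Date.now()), // hypothetical import the module expects
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("https://example.com/module.wasm"),
  imports,
);

// Call an export, assuming the module exports a function named "run".
(instance.exports.run as () => void)();
```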

Luca Casonato 00:42:50 And yeah, depending on how you write your WebAssembly: if you write it in Rust, for example, the tooling is very nice and you can just call some JavaScript code from your Rust, and then the build system will automatically make sure that the right function bindings are passed through with the right names, and you don’t have to deal with anything. And if you’re writing Go, it’s slightly more complicated. And if you’re writing raw WebAssembly, like the WebAssembly text format, and compiling that to a binary, then you have to do everything yourself, right? It’s sort of the difference between writing C and writing JavaScript: what level of abstraction do you want? It’s definitely possible, though. And as for limitations: the same limitations as exist in browsers apply. Like, the WebAssembly support in Deno is equivalent to the WebAssembly support in Chrome.

Luca Casonato 00:43:34 I’d say you can do many things, like multi-threading and stuff like that, already. But especially around shared mutable memory, and having access to that memory from JavaScript, that’s something which is a real difficulty with WebAssembly right now. Growing WebAssembly memory is also rather difficult right now. There are a couple of inherent limitations right now with WebAssembly itself, but those will be worked out over time, and Deno is very up to date with the version of the standard it implements through V8. We’re up to date with Chrome Beta essentially all the time. So yeah, anything you see in Chrome Beta is going to be in Deno already.

Jeremy Jung 00:44:12 So you talked a little bit about this before, the Deno team, they have their own hosting platform called Deno Deploy. So, I wonder if you could explain what that is.

Luca Casonato 00:44:26 Yeah, so Deno has this really nice concept of permissions which allows you to... sorry, I’m going to start somewhere slightly unrelated. Maybe it sounds like it’s unrelated, but you’ll see in a second it’s not: Deno has this really nice permission system which allows you to sandbox Deno programs to only allow them to do certain operations. For example, in Deno, by default, if you try to open a file, it’ll error out and say you don’t have read permissions to read this file. And then what you do is you specify --allow-read, and you can either specify --allow-read on its own, and then it’ll grant you read access to the entire file system, or you can explicitly specify files or folders or any number of things. Same goes for write permissions, same goes for network permissions, same goes for running subprocesses, all these kinds of things.
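
For example (the file path here is just an illustration):

```ts
// main.ts: needs read access under Deno's permission model.
const config = await Deno.readTextFile("./config.json");
console.log(config);

// Invocations, shown as comments:
//   deno run main.ts                              -> errors: read access not granted
//   deno run --allow-read main.ts                 -> read access to the whole file system
//   deno run --allow-read=./config.json main.ts   -> read access to that one file only
```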

Luca Casonato 00:45:08 And by limiting your permissions just a little bit, like, for example, by just disabling subprocesses and the foreign function interface but allowing everything else, allowing reads and allowing network access and all that kind of stuff, we can run Deno programs in a way that is significantly more cost-effective for you as the end user, and we can cold start them much faster than you may be able to with a more conventional container-based system. So, what Deno Deploy is, is a way to run JavaScript or Deno code on our data centers all across the world with very little latency. Like, you can write some JavaScript code which serves HTTP requests, deploy that to our platform, and then we’ll make sure to spin that code up all across the world and have your users be able to access it through some URL or some custom domain or something like that.

Luca Casonato 00:45:59 And this is very similar to Cloudflare Workers, for example, and Netlify Edge Functions is built on top of Deploy; Netlify Edge Functions is implemented on top of Deploy through our subhosting product. Yeah, essentially Deploy is a cloud hosting service for JavaScript which allows you to execute arbitrary JavaScript. And there are a couple of different directions we’re going there. One is more end-user focused, where you link your GitHub repository and we’ll have a nice experience like you do with Netlify and Vercel, where your commits automatically get deployed and you get preview deployments and all that kind of thing, for your backend code, though, rather than for your front-end websites. Although you could also write front-end websites in Deno, obviously. And the other direction is more business focused. Like, you’re writing a SaaS application and you want to allow the user to customize the checkout.

Luca Casonato 00:46:48 Like, you’re writing a SaaS application that provides users with the ability to build their own online store, and you want to give them some ability to customize the checkout experience in some way. So, you give them a little text editor that they can type some JavaScript into. And then when your SaaS application needs to hit this code path, it sends a request to us with the code, and we’ll execute that code for you in a secure way, in a secure sandbox. You can tell us this code only has access to my API server and no other networks, to prevent data exfiltration, for example. And then you can have all this super customizable code inside of your SaaS application without having to deal with any of the operational complexities of scaling arbitrary code execution, or even just doing arbitrary code execution, right? This is a very difficult problem, so give it to someone else; we deal with it, and you just get the benefits. That’s Deno Deploy, and it’s built by the same team that builds the Deno CLI. So, all of your favorite Deno CLI APIs are available in there. It’s just as web-standard as Deno: you have Fetch available, you have Blob available, you have crypto available, that kind of thing.

Jeremy Jung 00:47:51 So when someone ships you their code and you run it, you mentioned that the cold start time is very low. How is the code being run? Are people getting their own process? It sounds like it’s not using containers. I wonder if you could explain a little bit about how that works.

Luca Casonato 00:48:09 Yeah, yeah. I can give a high-level overview of how it works. So, the way it works is that we essentially have a pool of Deno processes. Well, it’s not quite Deno processes; it’s not the same Deno CLI that you download. It’s a modified version of the Deno CLI, based on the same infrastructure, that we have spun up across all of our different regions across the world, across all of our different data centers. And then when we get a request, we’ll route that request; the first time we get a request for a given deployment (that’s what we call the code, right?), we’ll take one of these Deno processes, and we’ll assign that code to run in that process, and then that process can go serve the requests. And these processes are isolated; each is essentially a V8 isolate, and it’s a very, very slim, much, much slimmer version of the Deno CLI essentially, where the only thing it can do is JavaScript execution.

Luca Casonato 00:48:56 And it can’t even execute TypeScript, for example; TypeScript we pre-process up front to make the cold start faster. And then what we do is, if you don’t get a request for some amount of time, we’ll spin down that isolate and we’ll spin up a new idle one in its place. And then if you get another request an hour later for that same deployment, we’ll assign it to a new isolate. And yeah, that’s a cold start, right? If you have a deployment which receives a bunch of traffic, let’s say you receive a hundred requests per second, we can send a bunch of that traffic to the same isolate, and we’ll make sure that if that one isolate isn’t able to handle that load, we’ll spread it out over multiple isolates and we’ll sort of load balance for you, and we’ll make sure to always send to the point of presence that’s closest to the user making the request, so they get very minimal latency.

Luca Casonato 00:49:48 We have these layers of load balancing in place, and I’m glossing over a bunch of security-related things here about how these processes are actually isolated and how we monitor to ensure that you don’t break out of these processes. And for example, in Deno Deploy it looks like you have a file system, because you can read files from the file system, but in reality Deno Deploy does not have a file system. The file system is a global virtual file system which is implemented completely differently than it is in the Deno CLI. But as an end user you don’t have to care about that, because the only thing you care about is that it has the exact same API as the Deno CLI, and you can run your code locally, and if it works there, it’s also going to work in Deploy. Yeah. So that’s kind of a high-level view of Deno Deploy. If any of this sounds interesting to anyone, by the way, we’re very actively hiring on Deno Deploy. I happen to be the tech lead for the Deno Deploy product, so I’m always looking for engineers to join our ranks and build cool distributed systems: deno.com/jobs.

Jeremy Jung 00:50:47 For people who aren’t familiar with V8 isolates, are these each run in their own process, or do you have a single process that has a whole bunch of isolates inside?

Luca Casonato 00:51:00 So, in the general case, you can say that we run one isolate per process, but there are many asterisks on that, because it’s very complicated. I’ll just say it’s very complicated. In the general case, though, it’s one isolate per process. Yeah.

Jeremy Jung 00:51:20 One of the things you mentioned about Deno Deploy is it’s centered around deploying your application code to a bunch of different locations. And you also mentioned the cold start times are very low. Could you kind of give the case for wanting your application code at a bunch of different sites?

Luca Casonato 00:51:38 Yeah. So, the main benefit of this is that when your user makes a request to your application, you don’t have to round-trip back to wherever your application would otherwise be centrally hosted. Like, if you are a startup, even if you’re just in the US, for example, it’s nice to have points of presence not just on one of the US coasts but on both of the US coasts, because that means that your round-trip time is not going to be a hundred milliseconds; it’s going to be 20 milliseconds. There’s obviously always the problem here that if your database lives on only one of the two coasts, you still need to do the round trip. And there are solutions to this. One is caching; that’s the obvious, sort of boring solution. And then there’s the solution of using databases which are built exactly for this. For example, CockroachDB is a database which is Postgres-compatible, but it’s really built for global distribution and built for being able to shard data across regions and have different primary regions for different shards of your tables.

Luca Casonato 00:52:40 Which means, for example, your users on the East coast, their data could live on a database in the East coast, and your users on the West coast, their data could live on a database on the West coast, and your admin panel needs to show all of them as an aggregate view over both coasts, right? This is something which something like CockroachDB can do, and it can be a really great thing here. And we acknowledge that this is not something which is very easy to do right now, and Deno tries to make everything very easy. So you can imagine this is something we’re working on: we’re working on database solutions, and actually I should more generally say persistence solutions, that allow you to persist data in a way that makes sense for an edge system like this, where the data is persisted close to the users that need it, data is cached around the world, and you still have semantics which are consistent with the semantics that you have when you’re locally developing your application.
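A rough sketch of what that can look like from Deno, assuming the third-party deno-postgres driver and a CockroachDB cluster that has already been set up for multi-region tables; the connection string, table, and columns are hypothetical, and the REGIONAL BY ROW locality shown in the comment is a CockroachDB feature you would configure separately.

```typescript
// Hypothetical sketch: querying a Postgres-compatible, multi-region CockroachDB
// cluster from Deno with the third-party deno-postgres driver.
import { Client } from "https://deno.land/x/postgres/mod.ts";

// The connection string is assumed to be supplied via an environment variable.
const client = new Client(Deno.env.get("DATABASE_URL"));
await client.connect();

// In CockroachDB, a table can be made REGIONAL BY ROW so each row is homed in a
// region close to the user it belongs to. This DDL is illustrative only:
// await client.queryArray("ALTER TABLE users SET LOCALITY REGIONAL BY ROW");

// An admin-panel style query can still aggregate across every region at once.
const result = await client.queryObject(
  "SELECT crdb_region::STRING AS region, count(*) AS users FROM users GROUP BY crdb_region",
);
console.log(result.rows);

await client.end();
```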

Luca Casonato 00:53:37 Like, you don’t want your local application development to have strong consistency, but then in production you have eventual consistency, where suddenly, I don’t know, all of your code breaks because your US West region didn’t pick up the changes from US East, because it’s eventually consistent, right? I mean, this is a problem that we see with a lot of the existing solutions here, specifically Cloudflare KV for example. Cloudflare KV is a system with single primary write regions where there’s just a bunch of caching going on, and this leads to eventual consistency, which can be very confusing for end-user developers, especially because if you’re using this locally, the local emulator does not emulate the eventual consistency, right? So, this can become very confusing very quickly. And so, anything that we build in this persistence field, we very seriously weigh these trade-offs and make sure that if there’s something that’s eventually consistent, it’s very clear and it works the same way, the same eventually consistent way, in the CLI.

Jeremy Jung 00:54:38 So for someone, let’s say they haven’t made that jump yet to CockroachDB, they just have their database instance in AWS East or whatever. Does having the code at the edge, when it all ends up needing to go to East anyway, actually work out better than having the code located next to the database?

Luca Casonato 00:55:03 Yeah, yeah, it totally does. There are trade-offs here, obviously. If you have an admin panel, for example, or a user dashboard which is very, very reliant on data from your database, and for every single request needs to fetch fresh data from the database, then maybe the trade-off isn’t worth it. But most applications are not like that. Most applications are, for example, you have a landing page, and that landing page needs to do A/B tests, and those A/B tests are based on some heuristic that you can fetch from the database every five seconds. That’s fine; it doesn’t need to be perfect, right? So, you have caching in place, and by doing this caching locally to the user you’re still able to programmatically control it based on, I don’t know, the user’s user agent, or the IP address of the user, or the region of the user, or the past browsing history of that user as measured by their cookies, or whatever else, right? Being able to do these highly user-customized actions very close to the user means much lower latency, and that’s a much better user experience than if you have to do the roundtrip, especially if you’re a startup or a service which is globally distributed and serves not just users in the US or the EU, but users all across the world.
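A minimal sketch of that pattern, assuming a hypothetical fetchExperimentConfig() stands in for the database query; the five-second refresh window and the cookie-based bucketing mirror the description above, but every name here is made up.

```typescript
// Sketch: cache an A/B-test heuristic at the edge and refresh it at most every
// five seconds, instead of hitting the database on every request.
type ExperimentConfig = { variantBWeight: number };

async function fetchExperimentConfig(): Promise<ExperimentConfig> {
  // Hypothetical stand-in for a real database query; hard-coded to stay runnable.
  return { variantBWeight: 0.5 };
}

let config: ExperimentConfig | undefined;
let lastFetched = 0;
const TTL_MS = 5_000;

async function getConfig(): Promise<ExperimentConfig> {
  if (!config || Date.now() - lastFetched > TTL_MS) {
    config = await fetchExperimentConfig();
    lastFetched = Date.now();
  }
  return config;
}

Deno.serve(async (req) => {
  const { variantBWeight } = await getConfig();
  // Pick a variant from data local to the request (a cookie, the user agent),
  // so no per-request round trip to the database is needed.
  const cookie = req.headers.get("cookie") ?? "";
  const variant = cookie.includes("bucket=b")
    ? "b"
    : Math.random() < variantBWeight
    ? "b"
    : "a";
  return new Response(`landing page variant ${variant}`);
});
```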

Jeremy Jung 00:56:16 And when you talk about caching in the context of Deno Deploy, is there a cache native to the system, or are you expecting someone to have a Redis or a memcache, that sort of thing?

Luca Casonato 00:56:29 Yeah, so Deno Deploy actually has a web cache API, which is also the web cache API that’s used by service workers and others, and Cloudflare also implements this cache API. This is something that’s implemented in Deno CLI, and it’s going to be coming to Deno Deploy this quarter; that’s the native way to do caching. And otherwise you can also use Redis, you can use services like Upstash, or even a primitive in-memory cache where it’s just an LRU that’s in memory, like a JavaScript data structure, right? Or even just a JavaScript map or JavaScript object with a time on it, and every time you read from it, if the time is above some certain threshold, you delete the cache entry and go fetch it again, right? There are many things that you could consider a cache that are not like Redis or the web cache API. So, there’s ways to do that. And there’s also a bunch of modules, not in the standard library, sorry, in the third-party module registry, and also on NPM, that you can use to implement different cache behaviors.
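A small sketch of the web Cache API usage described here, with a made-up cache name and upstream URL, and assuming GET requests; the Cache API itself (caches.open, match, put) is the standard one that service workers use.

```typescript
// Sketch: HTTP response caching with the standard web Cache API.
Deno.serve(async (req) => {
  const cache = await caches.open("my-cache-v1");

  // Serve a previously stored response for this request, if we have one.
  const cached = await cache.match(req);
  if (cached) return cached;

  // Otherwise fetch the upstream resource and keep a copy for next time.
  const upstream = await fetch("https://example.com/expensive-data");
  await cache.put(req, upstream.clone());
  return upstream;
});
```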

Jeremy Jung 00:57:30 And when you give the example of in-memory cache, when you’re running in Deno Deploy you’re running in these isolates, which presumably can be shut down at any time. So, what kind of guarantees do users have that whatever they put into memory will still be there?

Luca Casonato 00:57:49 None. Like, it’s a cache, right? The cache can be evicted at any time. Your isolate can be restarted at any time, it can be shut down, you can be moved to a different region, the data center could go down for maintenance. Your application has to be built in a way that it is tolerant to restarts, essentially. But because it’s a cache, that’s fine: if the cache expires or the cache is cleared through some external means, the worst thing that happens is that you have a cold request again, right? And if you’re serving like a hundred requests a second, I can essentially guarantee to you that not every single request will invoke a cold start. I can guarantee to you that probably less than 0.1% of requests will cause a cold start. This is not an SLA or anything, because it’s totally up to however the system decides to scale you. But yeah, it would be very wasteful for us, for example, to spin up a new isolate for every request. So, we reuse isolates wherever possible. It’s in our best interest to not cold start you, because it’s expensive for us to do all the CPU work to cold start an isolate, right?

Jeremy Jung 00:58:52 And if I understand correctly, Deno Deploy is centered around applications that take HTTP requests. So, it could be a website, it could be an API, that sort of thing. And sometimes when people build applications, they have other things surrounding them. They’ll need scheduled jobs, they may need some form of message queue, things like that — things that don’t necessarily fit into what Deno Deploy currently hosts. And so, I wonder, for things like that, what do you recommend people do while working with Deno Deploy?

Luca Casonato 00:59:30 Great question. Unfortunately, I can’t tell you too much about that without spoiling everything. But what I’m going to say is, you should keep your eyes peeled on our blog over the next two to three months here. I consider message queues, especially message queues, to be a persistence feature, and we are currently working on persistence features. So yeah, that’s all I’m going to say. But you can expect Deno Deploy to do things other than just HTTP requests in the not-so-distant future, and cron jobs and stuff like that also at some point. Yeah.

Jeremy Jung 01:00:04 All right. We’ll look out for that. I guess as we wrap up, maybe you could give some examples of who’s using Deno, and what types of projects you think are ideal for Deno?

Luca Casonato 01:00:18 Yeah, like Deno — as in all of Deno — or Deno Deploy?

Jeremy Jung 01:00:21 I mean, I guess either one, or both, but yeah.

Luca Casonato 01:00:25 Okay. Yeah, yeah. Let’s do it. So one really cool use case for Deno, for example, is Slack. Slack has this app platform that they’re building, which allows you to execute arbitrary JavaScript from inside of Slack in response to slash commands and actions. I don’t know if you’ve ever seen those little buttons you can have in messages; if you press one of those buttons, that can execute some Deno code. Slack has built this entire platform around that, and it makes use of Deno’s security features and built-in tooling and all that kind of thing, and that’s really cool. And Netlify has built Edge Functions, which is a really, really awesome primitive they have for being able to customize outgoing requests, or even come up with completely new requests on the spot, as part of their CDN layer. Also built on top of Deno.

Luca Casonato 01:01:08 And GitHub has built this platform called Flat, which allows you to, sort of on cron schedules, pull data into Git repositories, process and postprocess that, and do things with it. It’s integrated with GitHub Actions and all kinds of things; it’s kind of cool. Supabase also has an edge functions product that’s built on top of Deno. A bunch of cool things like that. We have a really active Discord channel, and there’s always people showcasing what kind of stuff they’ve built in there; we have a showcase channel. If you’re really interested in what cool things people are building with Deno, that’s a great place to look. I think we also may have a showcase page, Deno.com/showcase, which is a page of projects built with Deno, or products using Deno, or other things like that.

Jeremy Jung 01:01:57 Cool. If people want to learn more about Deno or see what you’re up to, where should they head?

Luca Casonato 01:02:03 Yeah, if you want to learn more about Deno CLI, head to Deno.land. If you want to learn more about Deno Deploy, head to Deno.com/deploy. If you want to chat with me, you can hit me up on my website, lcas.dev. If you want to chat about Deno, you can go to discord.gg/deno. And if you’re interested in any of this and think that maybe you have something to contribute here, you can either become an open source contributor on our open source project, or if this is really something you want to work on and you like distributed systems or systems engineering or fast performance, head to deno.com/jobs and send in your resume. We’re very actively hiring and would be super excited to work with you.

Jeremy Jung 01:02:40 All right, Luca, well thank you so much for coming on Software Engineering Radio.

Luca Casonato 01:02:43 Thank you so much for having me.

Jeremy Jung 01:02:45 Cool. This has been Jeremy Jung for Software Engineering Radio. Thanks for listening.

[End of Audio]


SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)
