- 230 – Shubhra Kar on NodeJS
- 372 – Aaron Patterson on the Ruby Runtime
- 413 – Spencer Kimball on CockroachDB
- Luca Casonato
- Deno Deploy
- Deno Showcase
- Deno Subhosting
- Fresh web framework
- Cache Web API
- WinterCG – Web-interoperable Runtimes Community Group
- The Anatomy of an Isolate Cloud
- Supabase Edge Functions
- Netlify Edge Functions
- Slack releases platform open beta powered by Deno
- GitHub Flat Data
- Shopify Oxygen
- Cloudflare Workers (Competing product to Deno Deploy)
- How Cloudflare KV works
- XKCD Standards comic
Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Jeremy Jung 00:00:16 Today I’m talking to Luca Casonato. He’s a member of the Deno Core team and a TC39 delegate. Luca, welcome to Software Engineering Radio.
Luca Casonato 00:00:25 Hey, thanks for having me.
Luca Casonato 00:04:13 Pretty much, yeah.
Jeremy Jung 00:04:15 You can do anything that doesn’t interact with IO. So you think about browsers, you were mentioning, you need to interact with the DOM or if you’re writing a server side application, you probably need to receive or make HTTP requests, that sort of thing. All of that is not handled by V8. That has to be handled by an external runtime.
Luca Casonato 00:06:36 And if it does, if that timer exists, it’ll go call out to V8 and say you can now execute that promise. But V8 is still the one that’s keeping track of which promises exist and the code that is meant to be invoked when they resolve, all that kind of thing. But the underlying infrastructure that actually decides which promises get resolved at what point in time — the asynchronous IO, as this is called — this is driven by the event loop, which is implemented by the runtime. So Deno, for example, uses Tokio for its event loop. This is an event loop written in Rust. It’s very popular in the Rust ecosystem. Node uses libuv, a relatively popular event loop implementation written in C. And libuv was written for Node; Tokio was not written for Deno. But yeah… Chrome has its own event loop implementation. Bun has its own event loop implementation.
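The split between the two queues can be observed directly: timers go through the runtime’s event loop (Tokio, libuv), while promise callbacks and `queueMicrotask` are drained from V8’s microtask queue before the next timer fires. A minimal sketch, runnable in Deno or modern Node:

```typescript
// Timers are macrotasks scheduled on the runtime's event loop;
// promise callbacks and queueMicrotask run on V8's microtask queue,
// which is always drained before the next macrotask.
const order: string[] = [];

setTimeout(() => order.push("timer"), 0);       // runtime event loop
queueMicrotask(() => order.push("microtask"));  // V8 microtask queue
Promise.resolve().then(() => order.push("promise"));
order.push("sync");

// Give the zero-delay timer one turn to fire.
await new Promise((resolve) => setTimeout(resolve, 10));
// order: ["sync", "microtask", "promise", "timer"]
```

The synchronous code finishes first, then V8 drains its microtasks, and only then does the runtime’s event loop deliver the timer.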
Jeremy Jung 00:07:27 We might go a little bit more into that later, but I think what we should probably go into now is: why make Deno? Because you have Node, which is currently very popular. The co-creator of Deno, to my understanding, actually created Node. So maybe you could explain to our audience what was missing, or what was wrong with Node, where they decided: I need to create a new runtime?
Luca Casonato 00:07:55 Yeah, so the primary point of concern here was that Node was slowly diverging from browser standards with no real path to reconverging. Like, there was nothing that was pushing Node in the direction of standards-compliance, and there was nothing that was sort of forcing Node to innovate. And we really saw this, because in the time between, I don’t know, 2015 and 2018, Node was slowly working on ESM while browsers had already shipped ESM for like three years. Node did not have Fetch; Node only got Fetch last year, right? Six, seven years after browsers got Fetch. Node’s stream implementation is still very divergent from standard web streams. Node was very reliant on callbacks. It still is — like, promises in many places of the Node API are an afterthought, which makes sense, because Node was created in a time before promises existed, but there was really nothing that was pushing Node forward, right?
Luca Casonato 00:10:04 So there really needed to be a place where you could explore this direction and see if it worked. And Deno was that. Deno still is that. And I think Deno has outgrown that now into something which is much more usable as like a production-ready runtime. And many people do use it in production, and now Deno is on the path of slowly converging back with Node from both directions. Like, Node is slowly becoming more standards-compliant, and depending on who you ask, this was done because of Deno. And some people said it had already been going on and Deno just accelerated it. But that’s not really relevant because the point is that, like, Node is becoming more standards-compliant, and the other direction is Deno is becoming more Node-compliant. Like, Deno is implementing Node compatibility layers that allow you to run code that was originally written for the Node ecosystem in the standards-compliant runtime. So through those two directions, the runtimes are sort of going back towards each other. I don’t think they’ll ever merge, but we’re getting to a point here pretty soon I think where it doesn’t really matter what runtime you write for because you’ll be able to write code written for one runtime in the other runtime relatively easily.
Jeremy Jung 00:11:14 So if you’re saying the two are becoming closer to one another, becoming closer to the web standard that runs in the browser, if you’re talking to someone who’s currently developing in Node, what’s the incentive for them to switch to Deno versus continue using Node and then hope that eventually they’ll kind of meet in the middle?
Jeremy Jung 00:15:28 And this WinterCG group, is Node a part of that as well?
Luca Casonato 00:15:33 Yes, we’ve invited Node to join. Due to the complexities of how Node’s internal decision-making system works, Node is not officially a member of WinterCG. There are some individual members of the Node technical steering committee who are participating. For example, James M. Snell is my co-chair on WinterCG; he works at Cloudflare and is also a Node TSC member. Matteo Collina, who has been instrumental in getting Fetch landed in Node, is also actively involved. So, Node is involved, but because Node is Node, and Node’s decision-making process works the way it does, Node is not officially listed anywhere as a member. But yeah, they’re involved. And maybe they’ll be a member at some point. But yeah. Let’s see.
Jeremy Jung 00:16:20 Yeah, so it sounds like you’re thinking that’s more of a governance or an organizational aspect of Node than it is a technical limitation. Is that right?
Luca Casonato 00:16:32 Yeah — like, I obviously can’t speak for the Node technical steering committee, but I know that there’s a significant chunk of the Node technical steering committee that is very favorable towards standards-compliance. But parts of the Node technical steering committee are not; they are either indifferent or are actively — I don’t know if they’re still actively working against this, but have actively worked against standards-compliance in the past. And because the Node governance structure is so open and lets all these voices be heard, that just means that decision-making processes within Node can take so long. Like, this is also why the Fetch API took eight years to ship — this was not a technical problem. And it is also not a technical problem that Node does not have URLPattern support, or the File global, or that the Web Crypto API was not on the global object until like late last year, right? These are not technical problems; these are decision-making problems. And yeah, that was also part of the reason why we started Deno as a separate thing, because you can try to innovate Node from the inside, but innovating Node from the inside is very slow, very tedious, and requires a lot of fighting. And sometimes just showing somebody from the outside — look, this is the bright future you could have — makes them more inclined to do something.
Jeremy Jung 00:17:54 Do you have a sense for, you gave the example of Fetch taking eight years to get into Node. Do you have a sense of what the typical objection is to something like that? Like, I understand there’s a lot of people involved, but why would somebody say, I don’t want this in?
Luca Casonato 00:18:09 Yeah, so for Fetch specifically, there were many different kinds of concerns. I can maybe list two of them. One of them was, for example, that the Fetch API is not a good API, and as such, Node should not have it. Which is sort of missing the point: because it’s a standard API, how good or bad the API is is much less relevant, because if you can share the API, you can also share a wrapper that’s written around the API, right? And then the other concern was: Node doesn’t need Fetch, because Node already has an HTTP API. So, these are both examples of concerns that people had for a long time, and it took a long time to either convince these people or to push the change through anyway. And this is also the case for other things — for example, web crypto: why do we need web crypto? We already have Node crypto.
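The “share a wrapper” argument is concrete: because the Fetch surface is standard, a helper written against it runs unchanged on any compliant runtime. A minimal sketch — `getJSON` is a hypothetical name, not any runtime’s API; the second half exercises the same standard `Response` object without touching the network:

```typescript
// A portable wrapper over the standard fetch()/Response surface. Because these
// are the same objects in browsers, Deno, and modern Node, the wrapper itself
// is shareable, however one feels about the underlying API's ergonomics.
async function getJSON<T>(input: string | URL): Promise<T> {
  const res = await fetch(input);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${input}`);
  return await res.json() as T;
}

// The standard Response object can also be constructed locally:
const fake = new Response(JSON.stringify({ ok: true }), {
  headers: { "content-type": "application/json" },
});
const body = await fake.json(); // { ok: true }
```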
Luca Casonato 00:18:59 Or why do we need yet another streams implementation? Node already has four different streams implementations. Like, why do we need web streams? I don’t know if you know the XKCD of: there are 14 competing standards, so let’s write a 15th standard to unify them all, and at the end we just have 15 competing standards. So I think this is also the kind of concern that people had, but I think what we’ve seen here is that this is really not a concern one needs to have, because it turns out in the end that if you implement web APIs, people will use web APIs, and will use web APIs only for their new code. It takes a while, but we’re seeing this with ESM versus require — like, new code written with require is much less common than it was two years ago. And new code using XHR — XMLHttpRequest — compared to using Fetch? Nobody uses that anymore. Everybody uses Fetch.
Luca Casonato 00:19:59 And like in Node, if you write a little script, you’re going to use Fetch; you’re not going to use Node’s http.get API or whatever. So yeah, we’re going to see the same thing with ReadableStream. We’re going to see the same thing with web crypto. We’re going to see the same thing with Blob. I think one of the big ones where Node is still — I don’t think this is one that’s ever going to get solved — is the Buffer global in Node. Like, we have the Uint8Array global in all the runtimes, including browsers, and Buffer is a superset of that, but it’s in global scope. So it’s sort of this non-standard extension of Uint8Array that people in Node like to use. And it’s not compatible with anything else, but because it’s so easy to get at, people use it anyway. So those are also the kinds of problems that we’ll have to deal with eventually. And maybe that means that at some point the Buffer global gets deprecated — I don’t know, it probably can never get removed — but these are the kinds of conversations that the Node TSC is going to have to have internally in, I don’t know, maybe five years.
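For many common Buffer idioms the standard building blocks already suffice, which is the portable path being described. A small sketch using only web-standard APIs (works in browsers, Deno, and modern Node alike):

```typescript
// Standard alternatives to common Buffer idioms:
const bytes = new TextEncoder().encode("hello");  // ~ Buffer.from("hello")
const text = new TextDecoder().decode(bytes);     // ~ buf.toString("utf8")
const hex = Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
// ~ buf.toString("hex")
```

`bytes` here is a plain `Uint8Array`, so it interoperates with any other runtime or library, which is exactly what the Node-only Buffer extension does not give you.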
Jeremy Jung 00:20:57 Yeah. So, at a high level, what’s shipped in the browser went through a standards approval process. People got it into the browser. Once it’s in the browser, it’s probably never going away. And because of that, it’s safe to build on top of it for these server runtimes, because it’s never going away from the browser. And so, everybody can kind of use it into the future and not worry about it.
Luca Casonato 00:21:24 Exactly. Yeah. And that’s excluding the benefit that also if you have code that you can write once and use in both the browser and the server-side runtime, like that’s really nice. That’s the other benefit.
Luca Casonato 00:22:17 Yeah, so the reasoning here is essentially if you look at other modern languages — like Rust is a great example; Go is a great example. Even though Go was designed around the same time as Node, it has a lot of these same tools built in. And what it really shows is that if the ecosystem converges — is essentially forced to converge — on a single set of built-in tooling, A) that built-in tooling becomes really, really excellent because everybody’s using it. And also it means that if you open any project written by any Go developer, any Rust developer, and you look at the tests, you immediately understand how the test framework works, and you immediately understand how the assertions work, and you immediately understand how the build system works, and you immediately understand how the dependency imports work, and you immediately understand like, I want to run this project and I want to restart it when my file changes.
Luca Casonato 00:23:04 Like you immediately know how to do that because it’s the same everywhere. And this kind of feeling of having to learn one tool and then being able to use all of the projects — being able to contribute to open source, when you’re moving jobs, whatever; like, between personal projects that you haven’t touched in two years, you know — being able to learn this once and then use it everywhere is such an incredibly powerful thing. Like, people don’t appreciate this until they’ve used a runtime or language which provides it to them. There’s a saying in the Go ecosystem that gofmt’s style is nobody’s favorite, yet gofmt is everybody’s favorite. The saying essentially implies that the way gofmt formats code, maybe not everybody likes, but everybody loves gofmt anyway, because it just makes everything look the same.
Luca Casonato 00:23:54 And like you can read your friend’s code, your colleagues’ code, your new job’s code the same way that you did your code from two years ago. And that’s such an incredibly powerful feeling, especially if it’s like well-integrated into your IDE. You clone a repository, open that repository and like your testing panel on the left-hand side just populates with all the tests, and you can click on them and run them. And if an assertion fails, it’s like the standard output format that you’re already familiar with. And it’s a really great feeling. And if you don’t believe me, just go try it out and then you will believe me.
Luca Casonato 00:26:47 And even though there’s always the people that say, oh well, I won’t use your tool unless — like, we get this all the time — I’m not going to use deno fmt because I can’t, I don’t know, remove the semicolons or use single quotes or change my tab width to 16, right? Like, okay, wait until all of your coworkers are going to scream at you because you set the tab width to 16, and then see what they change it to. And then you’ll see that it’s actually the exact default that everybody uses. So it’ll take a couple more years, but I think we’re also going to get there. Like, Node is starting to implement a test runner, and I think over time we’re also going to converge on some standard build tools. I think Vite, for example, is a great example of this. Like, doing a front-end project nowadays — building new front-end tooling that’s not built on Vite? Yeah, don’t. Vite’s become the standard, and I think we’re going to see that in a lot more places.
Jeremy Jung 00:27:38 Yeah. Though I think it’s tricky, right? Because you have so many people with their existing projects, and you have people who are starting new projects and they’re just searching the internet for what they should use. So, you’re going to have people on webpack, you’re going to have people on Vite, and I guess now there’s going to be Turbopack — I think that’s another one that’s coming. There’s all these different choices, right? And I think it’s hard to really settle on one, I guess, but yeah.
Jeremy Jung 00:29:14 So, I want to talk a little bit about how we’ve been talking about Deno in the context of you just using Deno, using its own standard library, but just recently last year you added a compatibility shim where people are able to use Node libraries in Deno. And I wonder if you could talk to that. Like, earlier you had mentioned that Deno has a different permissions model; on the website it mentions that Deno’s standard HTTP server is two times faster than Node in a Hello World example. And I’m wondering what kind of benefits people will still get from Deno if they choose to use packages from Node?
Luca Casonato 00:30:53 And what you get from that is that essentially it gives you this backdoor to call out to all of the existing Node code that has been written. Like, you cannot expect that Deno developers write everything themselves. I don’t know — there was this time when Deno did not really have that many third-party modules yet; it was very early on, and if you wanted to connect to Postgres and there was no Postgres driver available, then the solution was to write your own Postgres driver. And that is obviously not great. So, the better solution here is: for those packages where there’s no Deno-native or web-native or standards-native package yet that is importable with URL specifiers, you can import them from NPM. So it’s sort of this backdoor into the existing NPM ecosystem. And we explicitly, for example, don’t allow you to create a package.json file or import bare Node specifiers, because we want to stay standards-compliant here, but to make this work effectively we need to give you this little backdoor.
Luca Casonato 00:31:56 And inside of this backdoor — well, everything is terrible inside there, right? Inside there you can do bare specifiers, there’s package.json, and there’s crazy Node resolution, and __dirname, and CommonJS — all of that stuff is supported inside of this backdoor to make all the NPM packages work. But on the outside it’s exposed as these nice ESM-only npm: specifiers. And the reason you would want to use this over just using Node directly is because, again, you want to use TypeScript with no config necessary, you want to have a formatter, you want to have a linter, you want to have tooling that does testing and benchmarking and compiling or whatever — all of that’s built in. You want to run this on the edge, close to your users, in like 35 different points of presence.
Luca Casonato 00:32:47 It’s like: okay, push it to your Git repository, go to this website, click a button two times, and it’s running in 35 data centers. This is the kind of developer experience that you do not get — I will argue that you cannot get — with Node right now. Even if you’re using something like ts-node, it is not possible to get the same level of developer experience that you do with Deno. The speed at which you can iterate on your projects — like, create new projects, iterate on them — is incredibly fast. You know, I can open a folder on my computer, create a single file, main.ts, put some code in there, and then call deno run main.ts, and that’s it. I did not need to do npm install, and I did not need to do npm init -y and remove the license and version fields from the generated package.json and set private to true and whatever else, right? It just all works out of the box, and I think that’s what a lot of people come to Deno for and then ultimately stay for. And also, yeah, standards compliance. So, things you build in Deno now are going to work in five, ten years with no hassle.
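As a shell session, the zero-config workflow described here looks roughly like this (file names are illustrative; the commented import shows the npm: backdoor specifier syntax):

```shell
# No npm install, no package.json, no tsconfig: TypeScript runs directly.
echo 'console.log("hello from deno");' > main.ts
deno run main.ts

# Built-in tooling, same invocation in every project:
deno fmt main.ts
deno lint main.ts

# Inside main.ts, NPM packages come in through the npm: specifier, e.g.:
#   import express from "npm:express@4";
```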
Jeremy Jung 00:33:53 And so with this compatibility layer, or this shim, is it where the Node code is calling out to Node APIs and you’re replacing those with Deno-compatible equivalents?
Luca Casonato 00:34:07 Yeah, exactly. Like, for example, we have a shim in place that shims out the Node crypto API on top of the Web Crypto API. Some people may be familiar with this in the form of Browserify shims, if anybody still remembers those. Essentially, in your front-end tooling you were able to import from Node crypto in your front-end projects, and then behind the scenes your webpack or your Browserify or whatever would take that import of Node crypto and replace it with this shim that essentially exposed the same API as Node crypto, but under the hood wasn’t implemented with native calls — it was implemented on top of Web Crypto, or implemented in userland even. And we do something similar: there are a couple of edge cases of APIs where we do not expose the underlying thing that we shim to to end users outside of the Node shim.
Luca Casonato 00:34:58 So there are some APIs — I don’t know if I have a good example… Node’s process.nextTick, for example. To properly be able to shim process.nextTick, you need to implement this within the event loop in the runtime. And you don’t need this in Deno, because in Deno you use the web-standard queueMicrotask to do this kind of thing. But to be able to shim it correctly and run Node applications correctly, we need to have this sort of backdoor into some ugly APIs which natively integrate into the runtime. But yeah.
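For many callers the approximation is simple. A sketch of the idea — glossing over the fact that real process.nextTick callbacks run slightly earlier in Node’s loop than microtasks do, which is exactly why a fully correct shim needs runtime integration:

```typescript
// Approximating process.nextTick with the web-standard queueMicrotask.
const seq: string[] = [];
const nextTickShim = (cb: () => void) => queueMicrotask(cb);

nextTickShim(() => seq.push("tick"));
seq.push("sync");

await Promise.resolve(); // let pending microtasks drain
// seq: ["sync", "tick"]
```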
Jeremy Jung 00:35:27 Anytime you’re replacing a component with a shim, I think there’s concerns about additional bugs or changes in behavior that can be introduced. Is that something that you’re seeing, and how are you accounting for that?
Luca Casonato 00:35:43 Yeah, it’s an excellent question. So, this is actually a great concern that we have all the time, and it’s not even just introducing bugs; sometimes it’s removing bugs. Like, sometimes there are bugs in the Node standard library which are there, and people are relying on those bugs for their applications to function correctly. And we’ve seen this a lot: we implement something from scratch, we don’t make that same bug, and then the test fails, or the application fails. So, what we do is we actually run Node’s test suites against Deno’s shim layer. Node has a very extensive test suite for its own standard library, and we can run this suite against our shims to find things like this. And there are still edge cases, obviously — maybe there’s a bug which Node was not even aware of existing, or maybe it’s now intended behavior because somebody relies on it, right?
Luca Casonato 00:36:32 Like, the second somebody relies on some non-standard or some buggy behavior, it becomes intended, but maybe there was no test that explicitly tests for this behavior. So, in that case we’ll add our own tests to ensure that. But overall we can already catch a lot of these by just testing against Node’s tests. And then the other thing is we run a lot of real code: we’ll try running Prisma, and we’ll try running Vite, and we’ll try running Next.js, and we’ll try running, I don’t know, a bunch of other things that people throw at us, and check that they work. And if they work and there are no bugs, then we did our job well and our shims are implemented correctly. And then there are obviously always the edge cases where somebody did something absolutely crazy that nobody thought possible, and then they’ll open an issue on the Deno repo, and we scratch our heads for three days, and then we’ll fix it, and then in the next release there’ll be a new bug that we added to make the compatibility with Node better. So yeah, running tests is the main thing — running Node’s tests.
Jeremy Jung 00:37:30 Are there performance implications if someone is running an Express app or a Next.js app in Deno? Will they get any benefits from the Deno runtime and performance?
Luca Casonato 00:37:42 Yeah, there actually are performance implications, and they’re usually the opposite of what people think. Like, usually when you think of performance implications, it’s always a negative thing, right? It’s like a compromise: the shim layer must be slower than the real Node, right? It’s not. Like, we can run Express faster than Node can run Express. And obviously not everything is faster in Deno than it is in Node, and not everything is faster in Node than it is in Deno. It’s dependent on the API, dependent on what each team decided to optimize. And this also extends to other runtimes. You can always cherry-pick results to make your runtime look faster in certain benchmarks, but overall what really matters is — the first important step for good Node compatibility is to make sure that if somebody runs their Node code in Deno, or your other runtime or whatever, it performs at least the same.
Luca Casonato 00:38:33 And then anything on top of that is a great cherry on top — perfect — but make sure the baseline is at least the same. And I think, yeah, we have very few APIs where there’s a significant performance degradation in Deno compared to Node, and we’re actively working on these things. Deno is not a project that’s done, right? We have, I think at this point, like 15 or 16 or 17 engineers working on Deno, spanning across all of our different projects. And we have a whole team that’s dedicated to performance and a whole team that’s dedicated to Node compatibility. So these things get addressed, and we make patch releases every week and a minor release every four weeks. So yeah, it’s not a standstill. It’s constantly improving.
Jeremy Jung 00:39:16 Another thing I’ve seen with Deno is it supports running web assembly binaries, so you can export functions and call them from TypeScript. I was curious if you’ve seen practical uses of this in production within the context of Deno?
Jeremy Jung 00:41:48 What are some of the current limitations of web assembly and Deno? For example, from web assembly, can I make HTTP requests? Can I read files? That sort of thing.
Jeremy Jung 00:44:12 So you talked a little bit about this before, the Deno team, they have their own hosting platform called Deno Deploy. So, I wonder if you could explain what that is.
Luca Casonato 00:44:26 Yeah, so Deno has this really nice concept of permissions — sorry, I’m going to start somewhere slightly unrelated. Maybe it sounds like it’s unrelated, but you’ll see in a second it’s not: Deno has this really nice permission system which allows you to sandbox Deno programs to only allow them to do certain operations. For example, in Deno, by default, if you try to open a file, it’ll error out and say you don’t have read permissions to read this file. And then what you do is you specify --allow-read. You can either specify --allow-read by itself, and it’ll grant you read access to the entire file system, or you can explicitly specify files or folders or any number of things. Same goes for write permissions, same goes for network permissions, same goes for running subprocesses — all these kinds of things.
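On the command line, those permission grants look like this (the flag syntax is the real Deno CLI; file, folder, and host names are illustrative):

```shell
deno run app.ts                          # file reads fail: no permission granted
deno run --allow-read app.ts             # read access to the whole file system
deno run --allow-read=./config app.ts    # read access to one folder only
deno run --allow-net=example.com app.ts  # network access to one host only
```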
Jeremy Jung 00:47:51 So when someone ships you their code and you run it, you mentioned that the cold start time is very low. How is the code being run? Are people getting their own process? It sounds like it’s not using containers. I wonder if you could explain a little bit about how that works.
Luca Casonato 00:48:56 And it can’t even execute TypeScript, for example — TypeScript we pre-process up front to make the cold start faster. And then what we do is, if you don’t get a request for some amount of time, we’ll spin down that isolate and we’ll spin up a new idle one in its place. And then if you get another request an hour later for that same deployment, we’ll assign it to a new isolate. And yeah, that’s a cold start, right? If you have a deployment which receives a bunch of traffic — let’s say you receive a hundred requests per second — we can send a bunch of that traffic to the same isolate, and we’ll make sure that if that one isolate isn’t able to handle that load, we’ll spread it out over multiple isolates and we’ll sort of load balance for you. And we’ll make sure to always send to the point of presence that’s closest to the user making the request, so they get very minimal latency.
Luca Casonato 00:49:48 We have these layers of load balancing in place, and I’m glossing over a bunch of security-related things here about how these processes are actually isolated and how we monitor to ensure that you don’t break out of these processes. And, for example, in Deno Deploy it looks like you have a file system, because you can read files from the file system. But in reality Deno Deploy does not have a file system. The file system is a global virtual file system, which is implemented completely differently than it is in the Deno CLI. But as an end user you don’t have to care about that, because the only thing you care about is that it has the exact same API as the Deno CLI, and you can run your code locally, and if it works there, it’s also going to work in Deploy. Yeah. So that’s kind of a high level of Deno Deploy. If any of this sounds interesting to anyone, by the way, we’re very actively hiring on Deno Deploy. I happen to be the tech lead for the Deno Deploy product, so I’m always looking for engineers to join our ranks and build cool distributed systems: deno.com/jobs.
Jeremy Jung 00:50:47 For people who aren’t familiar with V8 isolates, are these each run in their own processes, or do you have a single process and that has a whole bunch of isolates inside?
Luca Casonato 00:51:00 In the general case you can say that we run one isolate per process, but there are many asterisks on that, because it’s very complicated. I’ll just say it’s very complicated. In the general case though, it’s one isolate per process. Yeah.
Jeremy Jung 00:51:20 One of the things you mentioned about Deno Deploy is it’s centered around deploying your application code to a bunch of different locations. And you also mentioned the cold start times are very low. Could you kind of give the case for wanting your application code at a bunch of different sites?
Luca Casonato 00:51:38 Yeah. So, the main benefit of this is that when your user makes a request to your application, you don’t have to roundtrip back to wherever your centrally hosted application would otherwise be. If you are a startup, even if you’re just in the US, for example, it’s nice to have points of presence not just on one of the US coasts, but on both, because that means that your roundtrip time is not going to be a hundred milliseconds, but 20 milliseconds. There’s obviously always the problem here that if your database lives on only one of the two coasts, you still need to do the roundtrip. And there are solutions to this. One is caching — that’s the obvious, sort of boring solution. And then there’s the solution of using databases which are built exactly for this. For example, CockroachDB is a database which is Postgres-compatible, but it’s really built for global distribution, for being able to shard data across regions and have different primary regions for different shards of your tables.
Luca Casonato 00:52:40 Which means, for example, your users on the East Coast — their data could live on a database on the East Coast — and your users on the West Coast — their data could live on a database on the West Coast — and your admin panel needs to show all of them as an aggregate view over both coasts, right? This is something that something like CockroachDB can do, and it can be a really great thing here. And we acknowledge that this is not something which is very easy to do right now, and Deno tries to make everything very easy. So, you can imagine that this is something we’re working on: we’re working on database solutions — and actually, I should more generally say persistence solutions — that allow you to persist data in a way that makes sense for an edge system like this, where the data is persisted close to the users that need it, and data is cached around the world, and you still have semantics which are consistent with the semantics that you have when you’re locally developing your application.
Luca Casonato 00:53:37 For example, you don’t want strong consistency during local development but then eventual consistency in production, where suddenly all of your code breaks because your US West region didn’t pick up the changes from US East, right? This is a problem we see with a lot of the existing solutions here, specifically Cloudflare KV, for example. Cloudflare KV is a system with a single primary write region where there’s just a bunch of caching going on. That leads to eventual consistency, which can be very confusing for end-user developers, especially because the local emulator does not emulate the eventual consistency, right? So this can become very confusing very quickly. So for anything we build in this persistence field, we very seriously weigh these trade-offs and make sure that if something is eventually consistent, that’s very clear and it works the same way, the same eventually consistent way, in the CLI.
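[Editor’s note] To make the mismatch Luca describes concrete, here is a toy TypeScript sketch of a single-primary store with a lagging read replica. All names here are invented for illustration; this is not Cloudflare KV’s or Deno’s actual implementation.

```typescript
// Toy model of an eventually consistent KV store: writes go to a single
// primary, and a read replica only catches up when replicate() runs.
class ToyReplicatedKV {
  private primary = new Map<string, string>();
  private replica = new Map<string, string>();
  private pending: Array<[string, string]> = [];

  // Writes always go to the single primary region.
  write(key: string, value: string): void {
    this.primary.set(key, value);
    this.pending.push([key, value]);
  }

  // Reads in another region hit the replica, which may be stale.
  readFromReplica(key: string): string | undefined {
    return this.replica.get(key);
  }

  // Replication happens asynchronously, some time after the write.
  replicate(): void {
    for (const [k, v] of this.pending) this.replica.set(k, v);
    this.pending = [];
  }
}

const kv = new ToyReplicatedKV();
kv.write("flag", "on");
console.log(kv.readFromReplica("flag")); // undefined: replica hasn't caught up
kv.replicate();
console.log(kv.readFromReplica("flag")); // "on": now consistent
```

A local emulator that reads straight from `primary` would never show the `undefined` window, which is exactly the confusing local/production mismatch described above.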
Jeremy Jung 00:54:38 So for someone, let’s say they haven’t made that jump yet to something like CockroachDB; they just have their database instance in AWS East or wherever. If the code at the edge still ends up needing to go to East, is that actually better than having the code be located next to the database?
Luca Casonato 00:55:03 Yeah, yeah, it totally is. There are trade-offs here, obviously. If you have an admin panel, for example, or a user dashboard which is very reliant on data from your database and needs to fetch fresh data for every single request, then maybe the trade-off isn’t worth it. But most applications are not like that. For example, you have a landing page, and that landing page needs to do A/B tests, and those A/B tests are based on some heuristic that you can fetch from the database every five seconds. That’s fine; it doesn’t need to be perfect, right? So you have caching in place, close to the user, and you’re still able to programmatically control it based on, I don’t know, the user’s user agent, or the IP address of the user, or the region of the user, or the past browsing history of that user as measured by their cookies, or whatever else, right? Being able to do these highly user-customized actions very close to the user means much lower latency; it’s a much better user experience than if you have to do the round trip, especially if you’re a startup or a service which is globally distributed and serves users not just in the US or the EU, but all across the world.
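[Editor’s note] A minimal sketch of the “fetch fresh data every few seconds” pattern Luca mentions: an in-memory TTL cache in front of a slow origin lookup. The function and variable names are made up for illustration; they are not a Deno Deploy API.

```typescript
// A value cached in memory together with its expiry time.
type Entry<T> = { value: T; expiresAt: number };

// Wrap a loader (e.g. a database query) so repeated calls within ttlMs
// are served from memory instead of hitting the origin again.
function cached<T>(ttlMs: number, load: () => T): () => T {
  let entry: Entry<T> | undefined;
  return () => {
    const now = Date.now();
    if (entry === undefined || now >= entry.expiresAt) {
      // Cache miss or expired: hit the origin (e.g. the central database).
      entry = { value: load(), expiresAt: now + ttlMs };
    }
    return entry.value;
  };
}

// Example: an A/B-test heuristic that is allowed to be up to 5 s stale.
let dbReads = 0;
const getAbTestVariant = cached(5_000, () => {
  dbReads++;
  return dbReads % 2 === 0 ? "B" : "A";
});

getAbTestVariant(); // first call loads from the "database"
getAbTestVariant(); // served from memory within the TTL window
console.log(dbReads); // 1
```

In a real edge function the loader would be an async database call, but the trade-off is the same: the edge answers most requests without the cross-continent round trip.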
Jeremy Jung 00:56:16 And when you talk about caching in the context of Deno Deploy, is there a cache native to the system, or are you expecting someone to have a Redis or a Memcached, that sort of thing?
Jeremy Jung 00:57:30 And when you give the example of in-memory cache, when you’re running in Deno Deploy you’re running in these isolates, which presumably can be shut down at any time. So, what kind of guarantees do users have that whatever they put into memory will still be there?
Luca Casonato 00:57:49 None. It’s a cache, right? The cache can be evicted at any time. Your isolate can be restarted at any time; it can be shut down; you can be moved to a different region; the data center could go down for maintenance. Your application has to be built in a way that is tolerant to restarts, essentially. But because it’s a cache, that’s fine: if the cache expires or the cache is cleared through some external means, the worst thing that happens is that you have a cold request again, right? And if you’re serving a hundred requests a second, I can essentially guarantee to you that not every single request will invoke a cold start; probably less than 0.1% of requests will cause one. That’s not an SLA or anything, because it’s totally up to how the system decides to scale you. But it would be very wasteful for us, for example, to spin up a new isolate for every request. So we reuse isolates wherever possible. It’s in our best interest not to cold start you, because it’s expensive for us to do all the CPU work to cold start an isolate, right?
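[Editor’s note] The restart-tolerance point above can be sketched as a lazy-initialization pattern: treat in-memory state as best-effort, and rebuild it on first use after any (re)start. The names below are illustrative, not a Deno Deploy API.

```typescript
// Wrap expensive setup so it runs at most once per isolate lifetime.
// If the isolate is recycled, the wrapper simply re-initializes on the
// next request; correctness never depends on the state surviving.
function lazy<T>(init: () => T): () => T {
  let state: T | undefined;
  return () => {
    if (state === undefined) {
      // Cold start (or the isolate was recycled): rebuild the state.
      state = init();
    }
    return state; // warm path: no extra work per request
  };
}

let coldStarts = 0;
const getConfig = lazy(() => {
  coldStarts++;
  return { greeting: "hello" };
});

getConfig();
getConfig();
console.log(coldStarts); // 1: only the first request pays the init cost
```

This is why losing the in-memory cache is harmless: the worst case is one cold request that re-runs `init`, matching the “less than 0.1% of requests” behavior described above.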
Jeremy Jung 00:58:52 And if I understand correctly, Deno Deploy is centered around applications that take HTTP requests. So, it could be a website, it could be an API, that sort of thing. And sometimes when people build applications, they have other things surrounding them. They’ll need scheduled jobs, they may need some form of message queue, things like that — things that don’t necessarily fit into what Deno Deploy currently hosts. And so, for things like that, I wonder what you recommend people do while working with Deno Deploy?
Luca Casonato 00:59:30 Great question. Unfortunately, I can’t tell you too much about that without spoiling everything. But what I will say is: keep your eyes peeled on our blog over the next two to three months. Message queues especially, I consider them a persistence feature, and we are currently working on persistence features. So yeah, that’s all I’m going to say. But you can expect Deno Deploy to do things other than just serve HTTP requests in the not-so-distant future, and cron jobs and things like that at some point too. Yeah.
Jeremy Jung 01:00:04 All right. We’ll look out for that. I guess as we wrap up, maybe you could give some examples of who’s using Deno and what types of projects you think are ideal for Deno?
Luca Casonato 01:00:18 Yeah, like Deno — as in all of Deno — or Deno Deploy?
Jeremy Jung 01:00:21 I mean, I guess either, or both. But yeah.
Luca Casonato 01:01:08 And GitHub has built this platform called Flat, which allows you to, on cron schedules, pull data into Git repositories, process and post-process it, and do things with it. It’s integrated with GitHub Actions and all kinds of things. It’s kind of cool. Supabase also has an edge functions product that’s built on top of Deno. A bunch of cool things like that. We have a really active Discord, and there’s always people showcasing what kind of stuff they’ve built; we have a showcase channel. If you’re really interested in what cool things people are building with Deno, that’s a great place to look. I think we also have a showcase page, Deno.com/showcase, with a bunch of projects built with Deno, products using Deno, and other things like that.
Jeremy Jung 01:01:57 Cool. If people want to learn more about Deno or see what you’re up to, where should they head?
Luca Casonato 01:02:03 Yeah, if you want to learn more about the Deno CLI, head to Deno.land. If you want to learn more about Deno Deploy, head to Deno.com/deploy. If you want to chat with me, you can hit me up on my website, lcas.dev. If you want to chat about Deno, you can go to discord.gg/deno. And if you’re interested in any of this and think that maybe you have something to contribute, you can become an open source contributor on our open source project. Or if this is really something you want to work on and you like distributed systems, or systems engineering, or fast performance, head to deno.com/jobs and send in your resume. We’re very actively hiring and would be super excited to work with you.
Jeremy Jung 01:02:40 All right, Luca, well thank you so much for coming on Software Engineering Radio.
Luca Casonato 01:02:43 Thank you so much for having me.
Jeremy Jung 01:02:45 Cool. This has been Jeremy Jung for Software Engineering Radio. Thanks for listening.
[End of Audio]