EPISODE 1887 [INTRODUCTION] [0:00:00] Announcer: Package management sits at the foundation of modern software development, quietly powering nearly every software project in the world. Tools like npm and yarn have long been the core of the JavaScript ecosystem, enabling developers to install, update, and share code with ease. But as projects grow larger and the ecosystem more complex, this older infrastructure is beginning to show its limits, with performance bottlenecks, dependency conflicts, and growing concerns around supply chain security. Darcy Clarke and Ruy Adorno are veterans of this ecosystem. Both spent years maintaining the npm CLI and helping guide the Node.js project, where they saw firsthand the technical debt and design trade-offs that define modern JavaScript tooling. Now they're building vlt, a new package manager and registry that rethinks performance, security, and developer experience from the ground up. In this episode, Darcy and Ruy join Josh Goldberg to discuss how vlt works, why they believe package management needs a server-side reboot, what lessons they've drawn from npm's evolution, and how features like declarative querying, self-hosted registries, and real-time security scanning could reshape how developers build and share JavaScript in the years ahead. This episode is hosted by Josh Goldberg, an independent full-time open-source developer. Josh works on projects in the TypeScript ecosystem, most notably TypeScript ESLint, a powerful static analysis toolset for JavaScript and TypeScript. He is also the author of the O'Reilly Learning TypeScript book, a Microsoft MVP for developer technologies, and a co-founder of SquiggleConf, a conference for excellent web developer tooling. Find Josh on Bluesky, Fosstodon, and .com as JoshuaKGoldberg. [INTERVIEW] [0:02:04] JG: Darcy Clarke and Ruy Adorno from the vlt company, welcome to Software Engineering Daily. How's it going? [0:02:09] DC: Good. Thanks for having us, Josh. 
[0:02:11] RA: Yeah, thank you. [0:02:12] JG: Oh, I'm very excited. Just to start off, let's go in order, alphabetically, by your first name. Darcy, who are you? And how did you come to work with vlt and package managers? [0:02:22] DC: Who am I? I'm a software engineer who has been developing, I would say, in JavaScript for at least about 20 years now. And I got into package management by jumping head first into npm, Inc., the npm company, in 2019, when I was hired. And then shortly after, we brought on Ruy, and I worked with him very closely. We were actually a part of the acquisition of npm by GitHub in 2020. And that was a pretty exciting time for us. We got to see what it's like to work at a fast-moving rocket ship of a venture-backed startup. And then we also got to see what it was like to go into a very large enterprise company - one that was itself being acquired by the largest enterprise company, i.e., Microsoft. And yeah, we got to support the world's largest package registry. And I really enjoyed the space. And I really care deeply about community and open source. Yeah, I just fell in love many, many years ago with JavaScript, and fell in love with also building software and making my own software. And I got the opportunity late into sort of the npm company's life span to actually do it full-time. And we're back at it again here with our new company, vlt. [0:03:42] JG: And you, Ruy? [0:03:43] RA: Hello. My name is Ruy Adorno, and I'm also a software developer. And very similar to Darcy, about 20 years of experience in this field. And my story with the package management ecosystem is very [inaudible 0:03:57] with Darcy. I joined his team back at npm in 2019. And we lived together through the GitHub acquisition. And we fostered the npm CLI and participated in the Node ecosystem for over three years together at both these companies. And yeah, these days, I'm a member of the technical steering committee in the Node.js project. 
And I'm currently serving as the vice chair there, which means I'm kind of helping steer and foster the project. Most of that is kind of helping run the weekly meetings, but also trying to just be helpful to the other collaborators in the project. Yeah, I do like to mention a little bit of my background. I've been in Canada for half my life now. But I'm originally from Brazil, where I grew up. Yeah, that's kind of a little bit about me and my background. [0:04:54] JG: Great. Before we dive into all the interesting stuff that vlt is doing, I'd like to walk our listeners through what it means to be a package manager. What is the delineation between the core language or framework base, such as Node, and the package manager on top of it, something like npm? How does that all work together? [0:05:12] DC: That's a great question. These things obviously have to have some mutual understanding about where things live in order for the one to be helpful to the other. And I think there's, in many ecosystems now, actually a dedicated package manager for the language or runtime associated. And Node really, I think, has - we sort of trailblazed the path in the early aughts in terms of the relationship between npm and Node, having distributed essentially a package manager with the runtime for so long. In terms of the delineation of where the packages actually live, the runtime needs to be aware of where to be looking for these things. The package manager needs to be aware of where it should be placing these things to be helpful, right? And sometimes folks think that these things should be one and the same - that's a more common approach that Bun and Deno are starting to take, in terms of being both a package manager and a runtime. And especially with the approach Deno has taken, leaning into sort of the ESM spec and browser standards, we are seeing a blurring of the lines between what it means to be a package manager and a runtime. 
But yeah, traditionally, the handoff has been: where is the location of my modules, or where does the runtime look for modules, and how can the package manager place those modules in a consistent place? And the value add really for the package manager is consistency, security, a great developer experience around ensuring you get access to updates. You understand that there's a net new version of a piece of software, and that's huge in terms of the value add traditionally. If you were developing 20 years ago, you remember consuming software directly from CDNs. And you would have no clue when there was a new release of a piece of software you were hotlinking. And there are a lot of problems with that strategy. And/or you were bundling the software yourself, and you would be shipping legacy code all the time. And so package managers really had a lot of value in trying to keep your code evergreen and ensure that you are safer that way. And yeah, I think there's a lot of value in the space that is package management or package tooling. But they definitely work closely with runtimes. And there has to be some sort of common understanding about where these things should live for them to actually be operationalized by the runtime. [0:07:34] JG: That makes sense. I've seen online quite a lot of discourse about things like Corepack - about how much effort the core runtime should even put into allowing users to define which package manager they're using, or those API hooks between the two of them. I guess, Ruy, what's the current state of things? And what do you see as the role of something like Node integrating with hooks for something like a package manager? [0:07:57] RA: Yeah, it's a good question. The Node runtime ships a package manager, right? There is that canonical story of npm being the package manager for the runtime. And yeah, it does take energy from the project just managing that relationship. And that's just one. 
And I think there are further complications in trying to manage directly the relationship with all the possible package managers you could possibly use with the Node runtime. There is that. But the interesting part of the story is that the whole Corepack discussion did linger for a while within the Node project, right? And it was not too long ago - I think it was early this year - that the Node TSC finally sat down and had a vote to decide. Because it was a very polarizing subject within the project. On one side, you had a group of people who really loved the user experience and the capabilities that Corepack was enabling. And on the other side, there was a group of people that was very concerned about some of the details of the implementation. And eventually, we had a vote. I think it was early in the summer. We decided that the project itself wants to move past the Corepack story. The idea here is that we're doing it in a form in which it's still being shipped for a while. If I'm not mistaken, it's still included in Node 25, which we just shipped - it's the current version at the moment we're speaking. And the idea is to start announcing to the user base that, "Okay, this is going away. This is being deprecated." But really give users some time before flipping off that switch, so that they can adapt - so the ecosystem as a whole can prepare and adapt to new strategies to serve the other package managers, right? And I'm pretty sure Darcy can also build up a little bit more on the Corepack side of things. [0:10:00] JG: Before we dive too deep, can you just define what is Corepack? [0:10:04] DC: Corepack, when it was first introduced, was actually under the name PMM, which was "package manager manager," which is funny because that basically tells you what Corepack is. It's a package manager manager. And so, it was very much tailored to managing a subset of the ecosystem that we consider to be packages. 
Funny enough, when you think about a package manager, what actually differentiates a package manager from any other package, right? Is a package manager a package itself? Is a runtime a package? You get into these sort of circular thoughts and definitions. And if you do think about a package manager as a piece of software similar to any other package, then you start to see that, "Hey, it needs all the same kinds of assurances we have with all of our traditional dependencies or traditional packages that we want our package managers to take care of." And that includes ensuring that there's integrity and consistency for people's projects. And what was happening in the ecosystem at the time, and why I think Maël from Yarn, who pushed forward PMM, which became Corepack, was pushing it forward, was because he was seeing a lot of issues with consistency in terms of folks installing their node modules. And so he was seeing that certain package managers - Yarn, or pnpm, or npm - were innovating in different ways, and they would conflict when someone would use one package manager or another. And so the idea was to try to, at the project level, isolate this sort of use case - the type of software that you're using within your project - by creating a net new definition for this type of software that is a dependency of your project, which in and of itself is the package manager. And so by creating a net new top-level key in the package.json, the hope was that there could be this blessed understanding about what tool you're actually using to install your dependencies. He reached out to us at the time. Back when we were on the npm team, he definitely reached out to us, and we talked a bit about the future of PMM and Corepack before it landed in Node. There's a lot of feedback that actually was given to the Node project at the time before it even landed. And today, you can still use Corepack if you'd like. 
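For reference, the net new top-level key described here is the packageManager field that Corepack reads from package.json. A minimal example (the package name and version are illustrative; Corepack also supports pinning an integrity hash after the version):

```json
{
  "name": "my-app",
  "packageManager": "pnpm@9.0.0"
}
```

With Corepack enabled, running pnpm in this project transparently fetches and uses exactly that pinned version.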
If you still think it's a good solution for making sure that your teams are not potentially running into errors or issues with using different package managers in your projects. I think, generally, the hope with Corepack was that we would see tooling carveouts across projects, which I think is good and bad in some ways. I feel like it's a bit of a regulatory capture move to say, "Hey, let's sort of lock in your tooling ecosystem at this level. And anything that comes down the pipeline needs to be blessed into this new subset registry or subset ledger that we own and manage." And so that, at least from my perspective, was always a bit of the problem I had with it - creating a net new definition of tooling that needed to be managed, when I traditionally have looked at a package manager as just one of my packages. And we already have had mechanisms in place at the package manager level to actually enforce good practices around making sure those things were not conflated with each other. [0:13:32] JG: That's a good transition point then, because from working on a package manager, or a package manager manager, you've gone to working on both the package manager and the registry itself. We have on the traditional side of things npm, now also Yarn and pnpm. Then we have these kind of all-in-one bundlers or toolkits like Deno and Bun. And then separately, you have vlt.sh, or vlt. What are you doing differently? Or can you introduce to us what is the point of vlt in this multifaceted world? [0:14:03] DC: Totally. There's a massive opportunity, I think, that exists here for investment in infrastructure. What you would have seen in terms of innovation in package management for JavaScript in the last decade and a half was only focused on the client-side behaviors. And so when Yarn came out and pnpm came out around the same time - pnpm was actually previously ied. I think you actually had Zoltan on to talk about that and the history of pnpm. 
Around 2016, these were just net new clients that still interacted with the same APIs that the first-party npm client did - the APIs of the public npm registry, right? And those APIs and those endpoints really haven't changed in 15-plus years. The clients that are coming out even now, Deno and Bun, are still looking at those old APIs and still having to try to squeeze out performance gains and try to do things that are interesting purely at the client level. And so we think that there's a huge opportunity here to unlock the server-side aspect of this, to create net new endpoints, to innovate and invest in the registry side of things. And so we've actually started a project called VSR, which is complementary to the vlt client. VSR is an acronym for the vlt serverless registry. It works kind of like a lightweight proxy to upstreams like npm, but it also introduces a private package registry for you to self-host or publish into. We think this is a huge opportunity for us to innovate on those endpoints and create more modern ways of interacting with your packages. Part of the big problem that we see and have historically seen is a lot of wasted and redundant compute happening on all of our machines. If you, Josh, and me, and Ruy all install the same package, we all have to resolve the dependency graph at that edge, which seems really wasteful in terms of opportunities to cache and have a centralized understanding of the dependency graph. And so there are two products that we have out today: the vlt client, which is both backwards compatible with those legacy understandings of what a registry is and communicates with VSR. What we're complementing that with is an understanding of, and indexing and crawling of, the dependency graph, which is what we're doing today behind the scenes. 
And so I think there's a huge opportunity for us to innovate on the tooling we can offer the ecosystem by sort of looking at there being an understanding of the resolved dependency graph out there that we can all share - essentially having a global cache - versus having to do all this redundant compute. And I think in the modern AI agentic wave that's coming, we're only going to see more and more of these machines doing essentially this resolution at the edge. And it feels like a huge opportunity to obviously save folks some time and money and cycles, and also to secure those folks as well. We think that's a different way of looking at the problem. And I think that that's sort of the innovative part of what we're doing with vlt. [0:17:17] JG: Let's say that I'm an arbitrary team working on an arbitrary project, and I have experienced pain - my installs are slow, let's say. I've got a lot of node modules in my dependency graph. Are you saying that one of the benefits, say, of using vlt with VSR as the backing registry is that I'll have much faster installs? [0:17:35] DC: Correct. We actually give you the registry proxy that you can run locally. This is kind of beautiful conceptually - that you can self-host a registry. npm never did that. Well, it started as open source, and it went closed source in 2013 or '14. Unfortunately, all the innovation from then on, or any kind of improvements that were made to the registry code, was privatized. We think that there's a great opportunity here for us to give you a registry that can be co-located with your client and let more of the power go into that infrastructure and those server-side API endpoints. And because you're running essentially a local instance of a registry, you get round-tripping benefits. You get a lot of other benefits from having this lightweight proxy instance sitting alongside your client. 
Traditionally, the only other option folks would have had would be a Verdaccio instance, if you know that project. Verdaccio is like the only other project out there that folks really have available to them today to mimic an npm registry. And so we are providing an alternative to that. [0:18:43] JG: Ruy, anything you want to add? [0:18:45] RA: Nothing really. I think Darcy is very thorough in his answers, and I appreciate that. [0:18:50] DC: What I should say is Ruy has spent a lot of time making an efficient graph resolution algorithm. And we've spent a lot of time on our lock file format, which we imagine is going to be the exchange format that we'll use between the server and the client. And so if you look at the benchmarks that we've just recently published on our website, we actually are the fastest package manager that isn't named Bun. And we think that we can get there, to be quite honest, since we aren't even compiled. And we're dealing with the cold start that all the other JavaScript-based package managers face - the cold start problem that we all face by having to rely on a different runtime. [0:19:33] JG: Yeah, let's talk about performance and the lock file. I guess a two-parter question for either or both of you. One, what is a lock file, for those who haven't had the joyous pleasure? And two, how does that impact performance? Or how does the dependency graph impact performance of installs? [0:19:48] DC: I think this one's for you, Ruy. I think you can explain this quite well. [0:19:52] RA: Yeah, it is a fun topic to talk about, because the lock file serves multiple purposes, right? It is there to help you lock a given install - it's right there in the name. You want to make sure you're not being surprised - maybe some new package just got a new version, and it's shipping malware. One of the ways you try to lock your current install is by using a lock file. 
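For readers who haven't opened one, here is a heavily trimmed sketch in the shape of npm's package-lock.json (lockfileVersion 3). vlt's own lock file format differs, and the integrity value is a placeholder here:

```json
{
  "name": "my-app",
  "lockfileVersion": 3,
  "packages": {
    "": {
      "dependencies": { "lodash": "^4.17.0" }
    },
    "node_modules/lodash": {
      "version": "4.17.21",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
      "integrity": "sha512-<hash of the tarball>"
    }
  }
}
```

The key idea is that the loose range in package.json ("^4.17.0") is pinned to one exact, integrity-checked artifact, so every machine reproduces the same tree.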
The rest of the team, who just want to reproduce a given install, can just install from that source. But it also serves other purposes, like speeding up installs, because you already have a fully realized graph of how all these dependencies should look at the end, on the user's machine, in order for this project to work. There is also that component that plays a big part in why lock files exist. It really serves multiple purposes like that. And at the end of the day, it is also serving the humans that are managing these projects, because they can keep track of how the dependencies are evolving, right? And it's something you can audit after or during each install in order to make sure you're really getting the artifacts you're supposed to be getting at the end, right? It has multiple purposes. And it is a fine balancing act trying to serve all these different personas - these different users of a lock file - in a way that works great, if not for everyone, then at each step of the way, right? [0:21:36] JG: Yeah. I'm going to give you what sounds like a softball question, but I suspect it's not. Let's say that I have package A that requires B at 1.1 or higher, and then package C that requires B at 1.1.1 or higher. Why is it so difficult just to figure out what versions of packages A, B, and C I want? Or why is dependency resolution with versioning so difficult in the package space? [0:22:02] DC: In general, I believe you just use some grammar, right? Some non-standard SemVer grammar. Is that right? [0:22:09] JG: Yeah, I gave you imprecise SemVer resolution specifiers for these packages. [0:22:14] DC: This is something that a lot of folks take for granted in our ecosystem. And in fact, they take it quite for granted in most packaging ecosystems. There is no actual specification. There's a quasi-specification or standard for ranges - essentially, groups of software versions to be associated together. 
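A toy sketch of why that range grammar matters. This simplified matcher handles only caret ranges like "^1.2.3" (allow compatible updates within the same non-zero major version); real implementations such as the semver npm package also handle tildes, ||, wildcards, pre-releases, and the major-zero special cases:

```javascript
// Simplified caret-range matcher. Illustrative only: real range grammars
// are far richer, and their exact semantics are defined per package manager,
// not by the SemVer spec itself.
function parse(v) {
  return v.split(".").map(Number); // [major, minor, patch]
}

function satisfiesCaret(version, range) {
  const [maj, min, pat] = parse(version);
  const [rMaj, rMin, rPat] = parse(range.slice(1)); // drop the "^"
  if (maj !== rMaj) return false; // caret pins the major version
  // Within the same major, the version must be at or above the floor.
  if (min !== rMin) return min > rMin;
  return pat >= rPat;
}

console.log(satisfiesCaret("1.4.0", "^1.2.3")); // true: compatible update
console.log(satisfiesCaret("2.0.0", "^1.2.3")); // false: breaking major
```

Two dependents declaring "^1.1.0" and "^1.1.1" can therefore be deduplicated onto one copy of B, because some versions satisfy both floors.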
Every time we say SemVer, we're kind of stealing a term that actually doesn't have any basis for ranges. Ranges have not been codified in the SemVer spec. Just so everybody knows, if you want to go check out the SemVer spec, you're going to find that it only codifies what a version is. It doesn't codify ranges. It doesn't codify how to group things together. How you do that is all based around how the package manager interprets that spec. But definitely, in terms of how we group, let's say, A and B, or two versions of C - you get into sort of what we call the diamond dependency problem, where you have two different disparate dependencies that both have what we would call a transitive dependency in common, and you have it at two different explicit versions. In some ecosystems - let's take Java - you can only ever have one version of a library. You have to get rid of that conflict, what we would say is a conflict between those packages or these dependencies that have different transitive dependencies. And you have to choose one, right? You have to basically say, "Oh, I'm going to choose one. I'll take maybe the greater one." The beautiful thing about the JavaScript ecosystem is, many years ago, Isaac Z. Schlueter, who created npm, and who was the BDFL for a while taking care of Node, was able to figure out that, "Hey, we can actually get around this problem by allowing multiple versions of the same thing to exist in a project by nesting these dependencies." And that's a beautiful way to resolve that problem. And I would say I like to call that the first layer of dependency hell, where you have this issue of: I have these two direct dependencies that then have a shared transitive dependency that we associate as being the same thing, but that needs to resolve to only one thing. And the way that you can do that, if they are truly in conflict, is to just have two of them - have two versions of those things. 
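On disk, the nesting described here looks something like the following (package names and versions hypothetical): the copy of B that most dependents can share is hoisted to the top, and the conflicting version is nested under the dependent that needs it:

```
node_modules/
├── A/                  # depends on B@^1.0.0
├── B/                  # B@1.9.2, hoisted and shared where compatible
└── C/                  # depends on B@^2.0.0
    └── node_modules/
        └── B/          # B@2.3.0, nested so C gets its own copy
```

Because module resolution checks the nearest node_modules first, C finds its nested B@2.x while A falls through to the hoisted B@1.x.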
And we would say that's the nested strategy, where a package manager opts for the easiest thing to do: we just don't ever try to share these two versions. You don't ever try to do what we call deduplication of your dependency graph. But when you do want to do deduplication - because there are many reasons you want to do that - you have to then come up with a grammar, which is that non-standard grammar that we have for greedily having some slipperiness between your dependencies. It allows you to opt into versions that may not exist today but could exist in the future, which lets us optimize for getting the best, most evergreen version we can without breaking software along the way. That small little example, I think - and I'm not sure if I'm doing a great job at summarizing this - really unlocks many of the details and nuances of what it means to do package resolution, and also the problem of dependency hell in general that we were facing back in the 2010s. [0:25:47] JG: Really sounds like it puts the hell in dependency hell. Yeah. Resolving multiple versions of packages with different strategies per package manager. [0:25:55] RA: It's one of those things that we all have to understand, that there's this language the package managers have in their brains. This sort of parsing that happens of these specs that we put in our package.json files - these dependency specs a lot of folks take for granted. They sort of are like, "That's just a SemVer value." Well, actually, no, it's not. That's going to do something different potentially. And we spend a lot of time talking about grammar and definitions and specs and schemas in the package manager world. And I would say one of the things that I constantly ask other people is: what do you think a package is? What is a package? And I get many different answers to that. Or what is the most minimal thing that could represent a package? What does that string look like? 
And I get many different answers to that. [0:26:50] JG: All right. We've talked about the basics of what a package manager is and what a registry is. I want to focus a little more on vlt and what's coming up next for y'all for the next bit of the interview. You've reimplemented the core parts of the manager and your registry, and you've added in some niceties, like working on the edge and deduplicating for better performance. What are the other selling features or parts of your product that you're excited about? [0:27:11] DC: We are starting to roll out net new features to the client and the registry. One is obviously being safe by default, which has become standard. And what I mean by that is not running arbitrary scripts or install scripts has now become cool, which I'm very happy about. Our package manager obviously takes care of this by default. If you run vlt install, we're not going to run any install scripts. And we're going to print a nice message, which we just shipped recently. Ruy was actually the one to implement this feature. And we utilize our amazing query language, which is the other sort of core feature of the client - and we're putting this into the registry as well - to allow you to do a nuanced allow list if you do want to opt into running those install scripts and you do want to create some kind of configured, codified list of dependencies that should be running, let's say, install scripts. Because there are legitimate reasons for that: building native binaries for your runtimes, or just generally building a package that you think is secure, or you trust it because you've seen it before. And so that's a huge unlock. We've created a net new command in the vlt client called build, which takes one of our query selectors, which we think is super powerful compared to what the other package managers have done, which is just use package names and package versions. 
The query language itself is a huge unlock in terms of being able to expressively write, with both relationships and metadata, something that selects very nuanced sets of nodes within your dependency graph, essentially. This language is very much inspired by CSS, which I have a soft spot in my heart for. Not everybody does, especially the Tailwind folks out there. But it's, honestly, I think the coolest thing that we have. And we are seeing the benefits of using it in our core graph logic by unlocking something like this, which was super easy for us to implement. And we're super happy that we can say as well, along with pnpm now and Bun, that we're safe by default. We're not running install scripts. And so that's a recent unlock. Another recent ship that we had was the ability to actually select, in that query selector syntax, projects across your whole system. Ruy also shipped this. It's called the host selector. You can literally write a selector that will query for any project that's being configured or installed with vlt. And we can essentially do cross-project queries. If, traditionally, you're using monorepos, you'll know that it's totally easy to select projects or list dependencies in your monorepo projects. But actually sharing dependencies or configuration, or doing querying across, let's say, many monorepos or many repos, hasn't been possible before. This selector runs essentially in the global context of your system and can essentially go and query anything that you can query for across all the projects we can find on your system that are configured for the vlt client. And Ruy, I'm not sure if you want to dive into more on that, but yeah, that's the high level of what the host selector does. [0:30:38] RA: There's so many directions we could go from here. We could dive a little bit more maybe into the query language itself - what it is, how it works. Or maybe - I think another pillar of that is having a browser-based UI. 
And I think that's also fundamental for a lot of folks out there to actually be able to visualize, when you're applying a query, what packages you are actually selecting. Because then you have a real-time experience that allows you to traverse and navigate your install using the query selector in a more user-friendly way. [0:31:12] JG: What's an example use case of this that hasn't been brought up yet? Let's say I'm managing a monorepo. What are some things I might use this query selector syntax for? [0:31:21] DC: A lot of the time, it's useful for auditing or observability. But also, it's great for transposing configuration, or setting and getting values from different package.jsons, let's say. We actually introduced the pkg command into npm when we were there. We've obviously carried it through to the vlt client, which we think is great. Bun itself just introduced it in a recent minor version. They now have pkg as well. It's kind of like a fun jq built right into your package manager. And so the great thing here is you can get, and set, and remove or unset values in your package.json file. And utilizing the query selector syntax, you can imagine you can start to share configs, you can share dependency definitions, you can share all types of things, and actually get and set those across your projects very easily. You can sort of transpose and automate the config definitions that live within package.json across your monorepo. There are a lot of automations that this sort of unlocks. There are a lot of things that we think even we haven't conceptualized yet. But the idea here is you can quickly go from having sort of a template for your workspaces, that may have a whole bunch of test scripts that you define, and you can then action a set - a pkg set - utilizing the query selector syntax, to go and pick up that definition and then apply it across all the workspaces. We look at it as being very composable, right? 
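To make that concrete, here is a hypothetical sketch of the pkg-plus-selector workflow being described. The command shapes and the ":workspace" selector are assumptions inferred from the conversation, not taken from vlt's documentation:

```sh
# Hypothetical invocations -- check the vlt docs for the shipped syntax.

# Read the "test" script from every workspace's package.json:
vlt pkg get scripts.test --scope=":workspace"

# Set a shared field across all matching workspaces in one shot:
vlt pkg set license=MIT --scope=":workspace"
```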
pkg is a great example of what we think is a great composable subcommand within the client. And with the powerful query selector - I think all of our commands now support essentially a flag called scope. And scope is essentially just a flag for defining the query selector that you'd like to apply that operation to. You can even do pack and publish. We have those traditional subcommands for creating and distributing packages. We also have install, obviously. We have version. All those commands have this special flag called scope, which you can define as a query selector. And it means that that operation will be run against those dependencies that match that selector. It's kind of like pnpm - if you love pnpm and their filtering syntax, it's kind of like that filtering syntax on steroids. I know I really loved it when I saw Zoltan bring out that filter syntax. It's just a little bit bespoke, or baroque maybe - I don't know if that's the term. But it's a little bit foreign for, I think, a lot of developers to know what specific little dots or slashes you have to put. I really don't want to have to remember XPath-type selectors. CSS to me was just the perfect in-between: something that's super expressive but also super powerful. And it only gets better with the more metadata that you put into your packages and your dependencies. And we can select by that, and we can select by other kinds of states for your packages. Yeah, it's a huge unlock for, I think, workspace management, this selector syntax. And we're trying to create more and more of those primitive kinds of operations - pkg, or run, or even exec all support this. [0:34:48] RA: Yeah, expanding a little on your example, just to give a concrete example here that I find pretty fun. Let's say you're [inaudible 0:34:56], right? And you have over a thousand packages out there, right? 
Let's say you configure all of them from the command line, right? So now you could go ahead. Let's say you've just migrated. You moved from Twitter and you're now on Bluesky, and you want to update all your package.json files to update that metadata everywhere. But let's say you have some other conditions you want to apply. Maybe you just want to apply it to the packages where you're actually mentioning Open Collective, for some reason, right? This language just gives you a very powerful way to express all that. So you could use a command like vlt pkg and go ahead and set your social media handle to now point to Bluesky instead. And you could be saying, "Okay, let me use the host local selector so that I'm selecting across all the projects configured on my machine. And let's apply some other conditions - like, okay, let's just select the ones where I'm pointing the funding to Open Collective," right? These are the kinds of very expressive and advanced scenarios you can work with, and the query language enables you to actually have that workflow available now. [0:36:08] JG: It's interesting that we've seen a similar pattern in many areas of tooling, where at first the core, base, one-package-at-a-time problem gets solved. How to dedupe packages? How to talk to a registry? And then, of course, projects grow bigger, needs are larger and more complex. And now we're solving problems for workspaces, monorepos, microservices, and micro-frontends. And one area that continuously comes up is security, which I'd like to talk a little bit about with you two. Just looking at your blog post on the host context that you've been talking about, one of the example queries is the :malware query, which sounds like a horrifying thing to have results at all in your project. Can you talk about the malware query selector? What does it mean to have malware detection at the registry and/or CLI level? [0:36:54] DC: Totally. It does seem super scary to run that query and maybe get results.
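Ruy's workspace example above amounts to a filter plus a write: select only the manifests whose funding mentions Open Collective, then update a social handle across all of them. A rough standalone sketch of that concept follows, with a plain predicate standing in for the real query selector; none of the names or shapes here are vlt's actual API:

```javascript
// Illustrative sketch: bulk-update a field across only those package
// manifests that match a condition. A real query selector (e.g. one
// matching on funding metadata) would replace the predicate below.
const manifests = [
  { name: "pkg-a", funding: "https://opencollective.com/pkg-a", social: { twitter: "@me" } },
  { name: "pkg-b", funding: "https://github.com/sponsors/me", social: { twitter: "@me" } },
];

// The "select" half: does this manifest's funding point at Open Collective?
const matchesFunding = (m) =>
  typeof m.funding === "string" && m.funding.includes("opencollective");

// The "set" half: add a Bluesky handle only where the selector matched.
for (const m of manifests) {
  if (matchesFunding(m)) {
    m.social.bluesky = "@me.bsky.social";
  }
}

console.log(manifests.filter((m) => m.social.bluesky).map((m) => m.name)); // prints: [ 'pkg-a' ]
```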
Unfortunately, it's very possible that you could have installed some piece of malware along the way without realizing it at the time. That selector is actually a mutable selector, which is updating all the time. The way that works is we have partnerships with security insights providers - like Socket, in our case, which is one of the key vendors that we're working with. And we look at those partners as metadata providers that we're hoping will enrich your dependency graph with more and more metadata. That makes the selector language even more powerful for you or your teams to codify heuristics into - or, in this case, to find potentially vulnerable or insecure packages. Security, we take super seriously. We have default queries with which we're trying to prevent malware from getting in. But sometimes these security companies can be slow. npm has historically been slow to realize that there is malware being distributed. And so it means that there's both active and passive scanning happening. And you can actually opt in to a selector we have called scanned. So you can choose to refuse to even install stuff before it's been scanned, right? This is a beautiful thing, where you can say, "Hey, I'm not going to trust any package. This heuristic to me is important: I actually want to only install packages that have been scanned." And we're going to be providing - this hasn't come out yet - vendor-specific scanning selectors. Say I want to make sure that Socket was actually the one to scan it, and I'm going to wait until they have before I actually consume it. Today, we sort of mask the underlying vendor by giving you these nice, again, primitive selectors, like malware or CVE, and then we give you types - especially for CVEs, the risk type. In the case of - sorry, CWEs, specifically, are the weakness types behind CVEs.
You can actually put in there something like ReDoS - regular expression denial of service. That type is - I think the number is CWE-1333. Maybe you don't care about regular expression denial of service, because you're a developer tool and a package has been flagged for an inefficient regular expression, and you're like, "Okay, thank you, Snyk." Not to throw shade at the purple company - the old-school purple company that is obviously trying to protect the ecosystem, but at the same time has kind of spammed our ecosystem with maybe not-so-efficient advisories. You can contextually apply, or contextually filter out, that noise with these selectors. And so we look at it as: us providing more and more metadata is going to create, I think, a more secure ecosystem. We don't know how everybody will apply the heuristics to their own projects - whether or not they even consider some vendors to be suspect, or whether they want to opt in to some or not. We want to give everybody the option to choose. But there are certain things that we consider to be defaults. We don't want you to install malware, but it might get onto your machine prior to it having been flagged. And then it's like, how could that have happened? You still want to be able to find it, right? Let me easily find it if it did happen. And so we think it is still super powerful for you to be able to do a scan across all your configured projects, like we show in the blog post, for potential malware that might have been previously installed and is now being flagged by any one of our vendors saying, "Hey, this is malware." That's kind of how that selector works. And there are more nuances there, for sure. Socket provides us with some great insights: whether or not a package has file system access, whether it wants network access. And we created some nice quick selectors there, like fs and http, and also for environment secrets or environment access.
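The insight selectors described here amount to filtering the dependency graph by vendor-supplied metadata flags. A minimal sketch of that idea follows; the insight shape and package names are invented for illustration and are not vlt's or Socket's actual data model:

```javascript
// Illustrative sketch: gate dependencies by security-insight metadata.
// Each dependency carries vendor-provided flags (file system access,
// network access, malware), and a policy filters out what you don't trust.
const deps = [
  { name: "left-pad", insights: { fs: false, network: false, malware: false } },
  { name: "evil-pkg", insights: { fs: true, network: true, malware: true } },
  { name: "rimraf", insights: { fs: true, network: false, malware: false } },
];

// Roughly the spirit of a query like :not(:malware):not(:network) -
// allow anything that is neither flagged as malware nor network-capable.
const policy = (d) => !d.insights.malware && !d.insights.network;

const allowed = deps.filter(policy).map((d) => d.name);
console.log(allowed); // prints: [ 'left-pad', 'rimraf' ]
```

The interesting design point is that the policy is data-driven: richer vendor metadata makes finer-grained gating possible without changing the mechanism.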
You can quickly say, if I do or don't trust a package to have file system access, I can essentially gate that by using CSS's good old :not selector, right? I don't want anything that has file system access. Find me those packages. And we have an override mechanism that can essentially gate that - or modifiers, as well, which are our version of overrides, and which can gate the installation of those packages. [0:41:38] JG: I wonder if you could adopt that system for other concerns beyond just malware. For example, there are community initiatives such as [inaudible 0:41:45] around swapping packages that are older and/or slower for, say, modern, faster packages, or just native functionality. Could you add, like, a :slow or a :just-use-node-for-this-now-you-silly-goose selector like that? [0:41:59] DC: Yes. I think there are a couple of things here. We've also flagged the type of package - if it's CJS or ESM, we have that insight. There are definitely options for us to help with creating variants or distributions that are easily swappable from one to another. We would love to work with the existing maintainers, though, so that they can be the ones to control that flow. Many years ago, we actually proposed an RFC for what we call distributions, which would allow you to have conditional variants of a package. You could have the full source, with all your test files and the stuff that a lot of people complain about - but maybe you think it's still important to have all the readme docs and everything else put right into your package. But then you could also have an optimized production version that's compiled and has very minimal files inside of the package, living alongside the full one under the same scope - or have some sort of definition that's respected by the package manager to essentially make these two things well understood by the package manager.
And so I think distributions and variants are the way forward in regards to that kind of bit flipping, or switching between one condition and another - whether it's your environment or your configuration that tells the package manager, "Hey, I can opt into a polyfill because I'm on an older version of Node and I need that polyfill." Or, I can completely drop that and use the version that basically just stubs out the native API. Those are options as well. And we're very close with some of the folks that maintain those legacy libraries. We're very mindful that they also want the good things to happen. I won't name names, but certain maintainers that we know of do want folks to modernize, and do want folks to get off of maybe having two, or three, or 400 dependencies for stuff that is now built into the runtimes. They want to help with that transition as well. The ecosystem will, I think, evolve over time. But our hope is that we can provide, again, those primitives that would actually help us do innovative things - not just for getting rid of polyfills and those types of things, but also to provide innovative opportunities for native-specific, conditional, or natively built variants to coexist in a blessed way, versus the ecosystem we have today. [0:44:30] RA: Yeah. And from the end user's point of view, a way to integrate the query language while also doing these replacements is using the graph modifier feature that we shipped a while ago in the client. It gives you a way to actually point at a package, or a group of packages, and replace that definition with something else. It actually happens during the graph build phase of the package manager. We do have one already out, which is equivalent to the more classic overrides. And we have more already planned to add to that graph modifier support, in terms of supporting things like package extensions.
I think that's what some of the other clients call it, which basically allows you to define maybe new dependencies at a given package right within the graph, and all sorts of crazy stuff like that - also granting some extra power to the end user in this entire supply chain. [0:45:35] JG: That all sounds lovely. What are some other interesting aspects of vlt that you think users would be pleasantly surprised to discover? [0:45:44] DC: Oh my goodness. Well, we've talked a little bit about the UI, which Ruy mentioned before. And I think that our npm interop story is quite good. Something that you would be very happy to know, if you're a dev tools author or someone who likes this space: we have the best npm registry docs that exist out there. I will put my foot down and say we have the only npm registry docs that exist out there. If you go looking, you will not find very many npm registry-specific docs. The npm CLI was well documented from our time there; I would say the npm registry was never well documented. And so if you go to docs.vlt.sh, you'll find pretty comprehensive documentation, which obviously gives you the endpoints that exist in our registry implementation, but also provides fairly comprehensive documentation about what the API surface looks like for the npm registry itself. The beautiful thing here is we actually give you an interactive docs experience as well, created by - or facilitated by - our friends at Scalar, if you know those folks. We basically distribute an interactive docs experience with the registry. When you spin up the VSR instance, you get a /-/docs endpoint. And there, it's a Scalar experience that has mechanisms for you to go and interact and actually play with the APIs in real time, which I think is great. That, I think, will help folks understand what's available to them. And over time, it's only going to improve and get better.
And so the hope there is that other dev tools authors, or folks that traditionally would have wanted to build something for the npm registry, can now be like, "Hey, I know exactly what the request and response should look like. I know what endpoints should exist. And here, vlt's helping me out." And we're going to maintain, as best we can, backwards compatibility with the npm registry, now and going forward. That's another key highlight that should be exciting for folks, generally, about what we're doing and what we're trying to maintain - especially because we care very deeply about compatibility. We care deeply about bringing the ecosystem forward. I'm not sure if Ruy has any key highlights. Go ahead. [0:48:14] RA: Yeah. Of course, mine are going to be tailored a little bit more to the client side of things. But it really is the small things that I really love when I'm using it. I mean, of course, the speed is very good. But for us, that's more of a baseline. We need to be fast. We know people are not going to use the tool if it's not fast enough. But the little details. For example, the query language, when you're using it from the terminal with the default query command: by default, it prints the same tree structure that users are used to from npm ls. Kind of the classic reference, right? For me, personally, that's so much more usable than having a JSON blob be the default output. But also, something we added to vlt ls and vlt query is different outputs. Not only the classic JSON output, which is improved - it has all this security insights metadata added to it - but we also have Mermaid output, which I find super useful, super helpful. And it plugs in well with HackMD. I'm a huge fan of HackMD. I'm using it all the time. I can just copy and paste the output and visualize my graph in the rendered view. But it could also be Notion; it can be a GitHub issue.
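Rendering a dependency graph as Mermaid, as Ruy describes, is essentially a walk over the graph's edges emitting flowchart lines. Here is a minimal sketch of the concept; the real vlt output format will certainly differ:

```javascript
// Illustrative sketch: turn a dependency graph into Mermaid flowchart
// text that can be pasted into HackMD, Notion, or a GitHub issue.
const graph = {
  "my-app": ["react", "lodash"],
  react: ["loose-envify"],
  lodash: [],
  "loose-envify": [],
};

// Mermaid node ids shouldn't contain characters like "-", so sanitize
// the id and keep the original package name as the displayed label.
const id = (name) => name.replace(/[^A-Za-z0-9]/g, "_");

function toMermaid(graph) {
  const lines = ["flowchart TD"];
  for (const [pkg, deps] of Object.entries(graph)) {
    for (const dep of deps) {
      lines.push(`  ${id(pkg)}["${pkg}"] --> ${id(dep)}["${dep}"]`);
    }
  }
  return lines.join("\n");
}

console.log(toMermaid(graph));
```

Pasting the resulting text into any Mermaid-aware renderer draws the dependency edges as a top-down flowchart.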
You can just port this Mermaid output to whatever platform you want to take it to. Yeah, I think the other thing I really love is spawning the browser-based UI, right? When I'm trying to understand what's going on, I want to find some information about a specific package - for me, I'm a big terminal user. But I do love - sometimes I just want the product to take my hand, guide me to that experience, just take me to the place, let me find the information I want. I don't want to be typing and finding 20 different commands to get there. For me, those are top of mind, the things I love the most about the client. [0:50:20] JG: That's really exciting stuff. Is there anything else on the technical side or about vlt that we should cover before we wrap up for the day? [0:50:27] DC: I would say we are actively working on the larger vision of crawling your fancy graph and realizing it before you do, which I think walks that tightrope between the opportunity to both get performance benefits that have never been seen before, as well as security guarantees that are theoretical in a lot of other tools. I won't mention SBOMs - well, I just mentioned SBOMs - which are highly theoretical. And we could have a whole episode on that. So, I think there's a huge opportunity here, and we're super excited to be at the forefront of it. We have a lot we're going to be shipping and sharing very soon, both in terms of better performance and better tooling, but also more of a destination for folks to hit going forward. Look out for that, I guess, very soon. And yeah, we're excited about that. [0:51:26] JG: Well, in that case, I just have a couple more questions for each of you. Ruy, what learnings could I, as a developer, attain from studying and understanding Brazilian jiu-jitsu? [0:51:38] RA: Good question. I do practice it. It's kind of a recent thing. I'm definitely not a big, strong Brazilian jiu-jitsu guy.
I'm just there to learn and be humbled every time. But it is a great workout. It is a great way for us developers to burn off some calories. It takes a lot of muscular strength and flexibility. It's a very healthy workout for us to try, if you're able to get it in a couple of times during the week, right? Just getting us out of the desk or chair we're sitting in all day long, right? Getting in shape. [0:52:23] JG: I've been told that Brazilian jiu-jitsu, or BJJ, involves a lot of strategy and kind of centering yourself. And as three people who actively work on open-source projects around the internet on GitHub, I imagine it's probably quite useful for us to get practice at centering ourselves and regulating our emotions, even with quick response times. Darcy, the last question is for you. Can you tell us about Kurt Cobain's journal? [0:52:47] DC: Oh, no. Yeah. I mean, as a teenager, having been a little bit of a punk myself, I think I got as a present a mass-produced copy of Kurt Cobain's journal, having loved Nirvana and been in a bunch of punk bands. It's probably the coolest thing, but it also felt very inappropriate to own, and seemed, in retrospect, completely unlike something he would have wanted to have exist. I think we should probably all burn our copies, if you have one, and let it be. Those are my thoughts on that. Having been a 14- or 15-year-old getting that - I think it was maybe a Christmas gift - it was both cool and insightful, and probably shaped a lot of my early garage band days. But looking back, in retrospect, I should have never had access to that thing. I don't think anybody should have had access to that thing. But it was nice to see just how creative he was - but also, unfortunately, how pained. Around the same time, I'm sure I watched Donnie Darko for the first time, which became my favorite film.
I have that on VHS. That also feels like a precious memento of my time as a teenager up in cold, small-town Sudbury, Ontario. Yeah - Kurt Cobain's journal, let it rest. Let's all burn our copies. Hold on to your VHSs of Donnie Darko, though. That's a classic. [0:54:30] JG: That's a great film. Could you just quickly explain the full plot and background context of that film? [0:54:37] DC: I feel like, unfortunately, the director's cut ruined it for a lot of folks. So if you see the original, I hope you do - the original cut, without all these added things. The plot is: there's a bunny in it. There you go. There's a bunny in it. That's all you need to know. It's beautiful. And especially around Halloween, it's a great thriller to watch. It's one of my favorite movies, for sure. And I will not ruin any of it. You can come to your own conclusions about what did or didn't happen throughout the movie, and that's what I think is so beautiful about it. Yeah. [0:55:12] JG: With all that being said, destruction is a form of creation. It's been lovely to talk to you two about vlt, and package registries, and clients. I'll start with you, Darcy. If folks want to learn more about you or vlt on the internet, where would they go? [0:55:25] DC: You can go check out our website, vlt.sh, which is the home of all the information about our company, the client, and our products. And we'll put announcements there as well. We push some blog posts out there. You can also follow us on Bluesky. It's @vlt.sh - just our domain. You can also check us out on x.com - used to be Twitter, still is Twitter - at @vltpkg. And you can also find our GitHub, with all our projects along with a couple of extra projects, at github.com/vltpkg as well. For myself, you can follow me, again, on x.com, or Twitter, @darcy. Just my first name, which I think is pretty cool. I'm old. I'm very old. That's how I got that. And as well on Bluesky for myself.
It's my full name, @darcyclarke.me - my personal website. That's how you can find us. [0:56:26] JG: Great. And you, Ruy? How about you and the work you're doing on Node? [0:56:30] RA: I guess the place where I'm most active these days is Bluesky. If you want, you should definitely follow me there. It's Ruy Adorno. Yeah. And I think if that's too hard to find, just like Darcy said, go to vlt.sh. Go to the company page. It has our names in it, and there are links to our social media. And then you'll be able to find me there. [0:56:56] JG: Excellent. Well, for Software Engineering Daily, this has been Darcy Clarke, Ruy Adorno, and Josh Goldberg. Thanks for listening, everyone. Have a great day. Cheers. [END]