[0:00:03] Jeff: David Melamed, welcome to Software Engineering Daily. [0:00:05] David: Thank you very much, Jeff, for having me. [0:00:07] Jeff: Why don't we start by you introducing yourself? Your academic background is quite interesting. And then we'll get to the topic that we want to talk about. [0:00:19] David: Sure. I'm David Melamed. Born in France and currently living in Israel. Married with four kids. I have a PhD in bioinformatics. I have a very broad engineering background. I was a backend engineer a few years ago at MyHeritage. I was a CTO for a startup for a few months. And then I jumped into the cybersecurity world, where I joined Cloudlock as a full-stack engineer and very quickly moved into the CTO office. And then, after Cloudlock was acquired by Cisco, I was in the CTO office of the cloud security business unit for four years. And that was before I decided that I had it in me to be a co-founder, and decided with four other people to co-found JIT, the cybersecurity startup that I'm currently the CTO of. [0:01:22] Jeff: You say you went into security, and before that you had been a full-stack engineer. And that's kind of the topic for today. Security is not always something that a full-stack engineer believes they're responsible for from the beginning. Did you think security was absolutely your responsibility? Did that depend on the company culture? Was there or wasn't there a dedicated security team? How did that look in your case? [0:01:53] David: I think it's very interesting. Because if you're looking into developer courses and programming courses, most of the time they teach you how to program. How to write software without bugs that performs well. But very rarely do they really talk about security. I believe one of the reasons for that is that, usually, the goal of the software developer is to write code and to ship it to production without bugs and with good performance. And rarely are you actually measured on security, on whether your code is secure or not. I believe it's a mistake. But currently, this is where the industry is. I think that, recently, there is more and more awareness around security. And you can see that security has now become a really central topic for a lot of companies, with breaches happening all the time. And I think that the whole world of shift left, and also security training for developers, is getting traction. I believe that things are changing slowly but surely. [0:03:11] Jeff: Yeah, I can see a definite parallel here between the DevOps movement and the same thing happening with security. DevOps obviously – well, it's obvious to me because I've worked in the field. But for those who don't know, the origin of the DevOps movement was basically two teams working against each other effectively. Like you said, developers being rewarded for churning out new features and making stuff work. And then you had operations, which was rewarded if the service was up, if you had good uptime, if you had stable deployments and so on. And so, their incentives kind of worked against each other. And we're stepping in a direction, with the DevOps movement and with more and more stuff being offered as, let's say, IaaS or PaaS, where developers see themselves gradually becoming responsible for the infrastructure as well. And then, still lagging a bit behind DevOps, I believe, you have DevSecOps, where we're finally realizing that security also needs to be in that equation, right?
[0:04:32] David: Yeah, I would say that with engineers in general, there is this sentiment that they really don't like security. And I think that's probably because there is kind of a lack of knowledge. They're not really specialists. As I said, they're not really learning about it. The tools, most of the time, are not really adequate. And so, it actually adds more work for them and kind of distracts them from their primary goal, which is basically to ship code to production. And a lot of tools are not friendly enough for people to really use them while they're doing their work. I would say that if security issues were treated like software bugs, then developers would be able to deal with them much more easily. And besides that, you talked about the DevOps movement and the fact that developers are responsible for more and more stuff. I think there is a really interesting trend here. There is a saying that software is eating the world. And there's an ongoing trend that is mainly driven by velocity, where there is more and more focus on how to ship code faster. For example, the fact that you have CI/CD in place, a lot of automation, infrastructure as code. A lot of things are actually done in order to improve the velocity of the team. And so, everything that is slowing down velocity needs to change in order to overcome that. If there's friction between engineering and another part of the organization, in the end, it needs to change in order to facilitate deploying code faster. And you can see that the trends over the last couple of years – the last decade, basically – are really interesting. In modern companies, QA was previously a separate team outside of engineering, and usually you had a lot of friction between QA and engineers, because sometimes developers were just writing code and expecting QA to test it. If you want to be more efficient, QA gets turned into an internal function of the team. And developers are now also responsible for testing their code, and doing that in their own way, meaning in a codified way, using unit tests, integration tests and end-to-end tests. The same thing happened afterwards with IT and the ops world, where you previously had some friction. You had a dependency between engineers and the IT department when you needed a machine to run your code on. That changed because we have automation. And so, the whole world of DevOps was born, where basically everything is also codified and automated. And now, with infrastructure as code and all the cloud providers, you can spin up your instances on demand. And engineers and DevOps, who are now working together in the same team, are really more efficient. Now, if you're looking at the way high-performing teams are working today, shipping code to production, there is still an anomaly. Because they're responsible for building the service. They're responsible for testing the service, for deploying the service, for supporting the service after it's in production. But basically, they're not owning the security part of it. Because security is still mostly run by an external team that is not part of engineering. And so, this team is deciding on the policies. They're deciding on which tools the developers need to use. And they also need to fix stuff. And that's slowing down the velocity. Because, basically, the security team needs to catch up all the time with what developers and DevOps are doing.
And so, that's the main concern right now. In really progressive organizations, they're thinking that this needs to change. And the same way QA was moved into the engineering team and ops turned into DevOps, they also believe that security needs to be part of it. And so, DevSecOps, which is kind of the combination of the two, is the beginning of a movement where engineering also owns security. [0:09:37] Jeff: If I were to be defensive and kind of argue why the status quo is the way it is, I would say that it's more trivial for developers to incorporate testing, because you can positively test for the outcome you want, than it is for them to incorporate security. Because that's kind of a non-exhaustive attack surface, I suppose. If we think about something like test-driven development, which is one of the ways developers very much shift left on QA and integrate the QA function into the developer team – as a developer, I would start writing tests saying the logic that I'm about to write needs to do this in this case, and that in that case. So, they're all quite affirmative assertions. But with security, you kind of have to think about all the possible edge cases and all the things that don't even fit into the model. And it's much more difficult to incorporate that, or to, I suppose, teach a developer, especially an average developer, about all of that. What would be the best way to approach this, I suppose? Is it a lack of knowledge, or – [0:11:12] David: Well, first of all, what you're saying is true. But I could counter-argue that things are not as easy as they seem. Because, yes, it's true that writing tests is very deterministic. But, unfortunately, it doesn't mean that developers don't miss use cases. And that's why you have bugs in production. And also, they're writing their own tests. On the other hand, if you're looking at security, we're not asking developers to write the tools. They're using existing tools. And those tools are using known patterns for code that is vulnerable – vulnerable to attacks, vulnerable to exploits. For example, if you're looking at the OWASP Top 10 list, which is a great list that is updated every year or so, there is a very well-known list of things that you need to ensure when you're writing code. There are also courses on how to write secure code. It may not be deterministic, because there can still be new exploits – you're adding code all the time, and there are new attack vectors. That's true. But I think that it shouldn't be as hard as it seems. Because you have experienced security people who are writing the tools. I think the issue is not so much with the technology, but it's more about the experience. First of all, the tools need to be friendly. Right now, one of the main issues with tools like SAST – mostly SAST – is that they're too noisy. And the reason for that is that they're doing their job, but they don't have the context. They're looking at every piece of code the same way, which, in theory, is right. But in practice, it's definitely not. And that's also the difference between security done by security people versus security done by engineers. If you're looking at security people, they would say, "Okay, we need to find all the issues and fix them." If you're looking at engineers, their goal should be to deploy their code to production and only fix what's necessary to reduce the risk to a minimum.
And so, everything else is not interesting. For example, a lot of products in the cybersecurity world are showing you millions of vulnerabilities – high-severity, medium-severity, low-severity. For engineers, that's not interesting, because they will never get to the low-severity ones. There are so many things to fix. What they need to know is basically what really matters. What do you need to fix, if you had only two hours, in order to reduce the risk to a minimum? And maybe SAST is not the best way to do it, because maybe most of the code is actually not exploitable. And so, I think that, in the end, it's more a matter of how to make security more efficient than a technology problem. There's also a lack of knowledge, for sure. But I would say that the tools are there. They just need to be used properly, with a good experience, so that they don't demand too much attention and overhead from the developers using them. [0:14:36] Jeff: What you're saying is that we can probably apply the Pareto Principle and tackle 80% of issues with just 20% of the effort. [0:14:48] David: Exactly. And I think, more than that, there is this notion – like, if you now need to introduce security in a company, what should you do? You have two choices. If you're looking at the way security teams are handling security, they want maximum security. They want to do everything. But I believe that if you're looking at the engineering world, they would look at it very differently. They would look at it using Agile methods, because those worked very well for them. For that, I actually like very much the idea and the approach of minimum viable security. Meaning, what is the minimum that you need to do in order to release your software? Start with that. And so, don't try to boil the ocean. That doesn't mean you should stay there and end there. But I think that's a good start. Start with the basic stuff. Like, if you had to rank the different risks, where should you start? For example, things that are externally accessible are probably the most easily exploitable, or can be used very easily by malicious actors. And if you compare the different tools in the industry, I don't think that everything is equal. When you have a high-severity vulnerability in a SAST tool, and, let's say, a high severity in a cloud misconfiguration tool, that's not the same. One is very easily exploitable. The other may be much harder to use. And so, one approach would be minimum viable security. There's another approach, which is also very interesting. It's called maximum viable security. What's maximum viable security? Well, it's basically making sure that you add security up to the point where you're not actually hurting developer velocity. Because, in the end, you have this balance between adding security to ensure that you reduce the risk and, on the other hand, wanting to ship your code out. If you're spending most of your time fixing vulnerabilities and you're not shipping your code, in the end your startup will be dead, and then you don't care about security at all. [0:17:09] Jeff: This is a very interesting discussion, certainly. The first question that is screaming around in my head is: when do you know you're done? I think the vantage point that you're coming from is that we already have a lot of tools that can inform us about existing vulnerabilities. And they will rank them by severity. And we can also have a look and assess them by exploitability and by the data that they possibly expose and so on.
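To make the "known patterns" point from the discussion above concrete, here is a minimal sketch of the kind of injection flaw a SAST rule matches on, and the parameterized fix, in Python. The function, table and column names are purely illustrative, not taken from any particular tool's rule set:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Known vulnerable pattern: untrusted input concatenated into SQL.
    # A SAST rule flags this shape whether or not it is actually reachable,
    # which is where the noise David describes tends to come from.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix: a parameterized query, so the input is never parsed as SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

A finding like the unsafe version is cheap to fix when it shows up as a single comment on the change that introduced it, and expensive to triage when it is one of thousands in a backlog report.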
Do you think that there is also a place within the developer team to do their own sort of exploratory penetration testing? Or is that too far, and is that actually something that should be relegated to an external team? [0:18:06] David: I think that, ideally, I would like to use something like pen testing as a service, so that it would be part of the arsenal of tools that you have at your disposal to test your application. And I would say, even more than that, the way pen testing is currently done is basically, once in a while, you're doing pen tests against production and you're ensuring that you're not exploitable. The way you can do it better is not to do point-in-time pen tests. It's to do continuous pen testing. And the way you do that is basically based on code changes. Every time you change your code, you would like to know if there's a new exploitable path into production. And so, what I would do is basically run some [inaudible 0:18:57] tool whenever I'm deploying a new version to staging, in order to see if there's something that I did that is now exploitable, and be able to trace it back to some code changes. I think that's the way engineers are actually thinking. They're thinking CI/CD. They're thinking continuous all the time. This is their world. And I think we should adapt security to this new world and talk about CI/CD and CS, continuous security. That's a new term which everyone should use. [0:19:38] Jeff: How would you define – well, you've already defined minimum viable security. But how, just in two sentences, do I know that my product has achieved minimum viable security and I can release it? [0:19:54] David: I think it's an interesting question, because there's no real standard that defines that. There are a lot of frameworks today for security, whether they're compliance-driven or they're standards like NIST, that try to define some kind of standard. I don't think there's a standard for minimum viable security. There is an initiative by Google and Salesforce called MVSP, the Minimum Viable Secure Product, which is interesting. They also try to define that. But it's a little bit subjective. Everyone can have their own minimum viable set of things they would like to see. I think there's a really bare minimum that you would like to see. And if you're looking at the different components of security, you have app sec, cloud sec, infra sec, pipeline sec. There's probably a bare minimum in each of these areas you want to cover. [0:20:55] Jeff: Great. And would it be correct to say that minimum viable security applies to my product, but maximum viable security doesn't apply to the product as much as to the developer experience? Because minimum viable security is about getting a product to the customer that is secure. But maximum viable security is about not impeding my internal development process. [0:21:22] David: Yeah. I would say there is a continuum between minimum viable security, which is measured in the number of tools, the number of things that you want to attach and embed in your process, and maximum viable security, which basically says that when you're iterating to add more security, at some point it can hurt your developers and your developer experience. And so, you need to stop. Because, otherwise, you will actually slow down the whole machine. And the machine is very well-oiled in order to deploy continuously every day. And if you add too many things – for example, in terms of tools,
if you're using too many tools, or a tool that's too slow, you're slowing the whole machine down. And so, at some point, even if you think you're doing good because you're adding more and more security, it's really hurting you and your business, which is the ultimate goal of any company, right? [0:22:19] Jeff: Yeah. Absolutely. I think now would be a good time to take a minute and talk about JIT, to give our listeners some context. What is it? What is the problem that JIT tries to solve specifically? And how does it try to solve it? [0:22:37] David: Awesome. Thanks. That's a great question, and actually a great segue. At JIT, we believe that the inevitable way to build a secure cloud application is by transferring the ownership of security to the engineering organization. And to achieve this goal, we have built a DevSecOps orchestration platform which packages the best open source tools, cloud-native tools and commercial tools across your whole tech stack, whether it's AppSec, the CI/CD pipeline, cloud infrastructure or runtime. And we built a platform that is self-service. We're really targeting the engineering organization – so, developers and DevOps. And that's why we actually built a great developer experience. That's where we're focusing. We really don't want to hurt velocity. That's our main mission: add security, but not at the expense of velocity. And onboarding onto the platform takes only a few minutes. You're all welcome to try it out. It's JIT.io. Really, really simple. And we'd be happy to get some feedback. [0:23:51] Jeff: A question I always like to ask is, if I now get started with JIT, what does it look like to me? Is it an IDE plugin I have to download? Is it just a bunch of processes I have to follow? From a developer's point of view, what does it actually look like? [0:24:08] David: Awesome. First of all, we're true believers in shift left, and very shift left, and even more shift left. And so, that's why we also built an IDE plugin. But we also have a platform. The platform provides two types of experience. There's an experience for the DevSecOps person, the main persona managing the whole program, who can pick a security plan. A security plan is basically a list of security requirements that has some defined business outcome, whether it's compliance or improving your security posture. We have a lot of built-in plans. So, you're picking a plan, and behind the scenes we're mapping a lot of tools that are integrated into your CI/CD environment, and into your cloud and your runtime. From the developer's point of view, basically, you're just working as usual in your own environment, whether it's Slack or GitHub. You're opening a PR, and suddenly you see JIT add a few security checks, behaving like your security buddy, your peer doing a code review. We're adding comments on the PR with security vulnerabilities. And so, you're getting all the information just-in-time – hence the name JIT – in order for you to fix it, and really treat all those vulnerabilities like a bug, a software bug. And we're also providing some remediation, so you have code suggestions in order to fix it on the spot. And we're also splitting between issues that are related to the new code that you're writing versus existing code. Because that's also one of the main things and the main challenges with most security products. When you install them, usually you don't install them on a new product.
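Most teams adopt a security product on top of an existing codebase, so the very first scan reports the entire backlog at once. The "new code versus existing code" split described here, together with the earlier "run a scan on every change" idea, can be made mechanical with a baseline comparison: rescan on each change, but only surface, and only gate on, findings that were not already known. A minimal Python sketch follows; the file names and JSON fields are assumptions for illustration, not JIT's or any particular scanner's actual format:

```python
import json
from pathlib import Path

def load_findings(path: Path) -> set:
    """Load scanner findings as (rule_id, file, fingerprint) tuples.

    The JSON layout is an assumption; real scanners each have their own
    report formats, so this loader would be adapted to the tool in use.
    """
    findings = json.loads(path.read_text())
    return {(f["rule_id"], f["file"], f["fingerprint"]) for f in findings}

def new_findings(current: Path, baseline: Path) -> set:
    """Return only findings that are not already in the stored baseline."""
    known = load_findings(baseline) if baseline.exists() else set()
    return load_findings(current) - known

if __name__ == "__main__":
    fresh = new_findings(Path("scan-results.json"), Path("baseline.json"))
    for rule_id, file_path, _ in sorted(fresh):
        print(f"NEW: {rule_id} in {file_path}")
    # Fail the CI job only when this change introduced something new;
    # the historical backlog stays visible elsewhere, not in the PR.
    raise SystemExit(1 if fresh else 0)
```

Run after each scan in CI, a script like this fails the job only when the change itself introduced a new finding, and leaves the pre-existing backlog to be worked through separately.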
And so, you have all the existing code, or legacy code, with millions of issues. And we don't want to overwhelm the developer. So, we're splitting between new code and existing code. You can still see the whole backlog in the main platform, but we're not showing that to developers. [0:26:21] Jeff: That's the other view of the platform. You said one of the views was DevSecOps, which then focuses more on – [0:26:28] David: Yes. You can review everything. The vulnerability management piece. The DevSecOps KPIs, to see if you're really improving, if your teams are improving. And you have the developer view, which is really dev-first and native to their environment. [0:26:47] Jeff: It sounds like it's really aiming to help add business value and not be an impediment to the development process. [0:26:56] David: Exactly, which is basically what security should be. [0:26:58] Jeff: Optimally, yes. You've talked about shift left and shifting even more left. And so, at some point, we come to this phrasing of born left. Where does this come from? And what does it mean? [0:27:16] David: Yeah. Shift left is a trend that I've been following for some time now. The problem with shift left is that it's not really solving the problem. Shift left basically means that people got to the understanding that if they're fixing issues before production, it takes less time and it's less expensive. Born left means something else. Born left means that, basically, security is not external to the SDLC or to the engineering team. It's basically part of the DNA of how you build a product. It's embedded in your SDLC. And so, when you're born left, you're not really dependent on an external team to do your job, because you're owning it. It doesn't mean that you don't work with security people. But the security people are actually in your organization and help you do your job. If you have any question, you can still rely on them. But you're not depending on them, because everything is built into the process. That's what born left means. [0:28:23] Jeff: What if we have a lot of the responsibility, or almost all of the responsibility, for security also being owned by the engineers? What, in that view, is the actual responsibility of the dedicated security team? [0:28:43] David: I think, in the end, the security team needs to be able to define the policies in the company. What is important? What are the areas that they should focus on? What is the definition of the minimum viable security, and where do you go next afterwards? They should also be able to review what developers are doing. Because a developer can, for example, ignore some vulnerabilities because they believe they're not relevant. And, definitely, security people, experts, can afterwards look at what was ignored and decide whether or not it should be reviewed or fixed. There is still a very central position for the security person in the organization. It's just that they're there to support the engineers, and not really slow them down or try to catch up with what they're doing. They need to be able to define some goals for the organization in terms of what type of maturity they want to get to, and what the business goals are in terms of security. For example, there are compliance processes they need to support. And in the end, they need to support the whole process and make sure that everything works fine. Because engineers, in the end, as I said, are not really the ones that are focused on security.
They need to treat security as just something else they need to do as part of their code. But that's not their main concern. [0:30:19] Jeff: And when it comes to things such as security as code or policy as code, all of that domain – firstly, do you have any general thoughts on that? Because it's quite a new domain that's just starting to emerge in the last couple of years. Any good implementations you've come across? Any implementations with issues that you've come across? [0:30:43] David: First of all, I didn't touch on that yet. But as I said previously, everything that, in the end, landed in the engineering world has been codified. Of course, security can only work if it's as code, because that's how engineers are actually working and behaving. It's also very efficient because it can be automated. And everything that cannot be automated usually does not work well with developers. Now, in terms of good implementations, I think there are a lot of initiatives. For policy as code, for example, we now have the great tool OPA, Open Policy Agent, in the open source world and the Kubernetes world. It is getting a lot of attention. A lot of people think that it may become the main, central way to evaluate policies. It's based on Rego, which is an interesting language. I think that, ultimately, this is how security should be done. Everything should be as code. This is also how we built JIT, basically. All our security plans are built as code. All the policies, the evaluation, and how we map that to the different tools is also built as code. Because, on one hand, we believe that it's the only way, the only viable way to do it. And on the other hand, we also understand something maybe very unique to security, which is that when you're building a platform for security, you're building it for the 80% of risk that is common across all the different products. But in the end, every product has its own custom risks. And so, how do you deal with that? The only way to deal with that is basically to enable extending the framework or the platform, in order for people to be able to define their own custom risks and implement them using their own custom tools. And so, this is also what we're currently doing at JIT: providing that to all the companies, so that they will be able to cover the whole stack plus their custom risks. [0:32:56] Jeff: The plans that I can select in JIT, they can be customized. And I expect that there will be a few presets also from JIT. Is one of those presets effectively the minimum viable security? If I don't have a great security education already, but I want to use JIT hypothetically from day one for my new startup idea, can I just select that I want to do my MVP with MVSP and just go, and have a good degree of certainty that I'm not exposing my potential customers to huge, unnecessary risks? [0:33:37] David: Yeah. Are you sure you didn't try JIT yet? Because this is exactly what we're doing. This is how we built it. We have the same mindset. We built it also – [0:33:47] Jeff: That's just how intuitive it is. [0:33:50] David: Exactly. Exactly. Yes. Definitely. If you don't know what to do – and this is what we're trying to help with. If you don't have the expertise or the knowledge, basically, just click on minimum viable security. And, voilà, you have all these tools running in your environment within minutes. [0:34:06] Jeff: That sounds very, very intuitive and just the way it should be.
There's no point putting out a very sophisticated, very performant solution that nobody will use because it takes months just to onboard. I'd like to ask you for a few definitions. Because I always think, if I haven't heard of something, there's a good chance that our listeners might not have heard of it either. What is OPA, the Open Policy Agent? What does it do? [0:34:36] David: Open Policy Agent is an engine that evaluates policies that are written in Rego, in order to basically allow or deny access. And it was built for the Kubernetes world. If you're thinking about a microservice that, for example, needs to deal with, I don't know, salaries, and you want to see if some user can access it – basically, you can use OPA in order to evaluate, based on, let's say, some external data like the role of the user that is trying to get to the service, whether or not the request should be granted or denied. That's the premise of how OPA works. And they extended that to a lot of different use cases. But that's the premise. That's the gist of it. [0:35:26] Jeff: And they extended it to work beyond Kubernetes now as well. [0:35:29] David: Yeah, you can use it regardless of Kubernetes, because it basically works with JSON inputs, based on data that is also JSON, and returns just true or false. It's very simple. The language they use, Rego, is not always super friendly or human-readable. It's based on Datalog. But once you understand how it works, it's really powerful. There are also alternatives to this language, by the way. There is a recent initiative by AWS, which just released the Cedar language. It's a little bit more focused on permissions in AWS, but it's way more readable, and it's a really interesting language because they also managed to prove that it works 100% of the time. It's formally proven, basically. It's really interesting. [0:36:31] Jeff: Challenging the Pareto Principle there and trying to be more thorough. How do you spell the AWS solution? [0:36:37] David: Cedar. [0:36:42] Jeff: Like the tree? Like the tree. [0:36:44] David: Yeah, exactly. That's the tree. [0:36:47] Jeff: And then you taught me a new saying today, which, after Googling it, I'm very surprised I hadn't heard yet. But the same logic applies here. Software is eating the world. [0:36:57] David: Yes. It was actually published originally in The Wall Street Journal by Andreessen, I think. It says something about software eating music and publishing. It's eating away at Madison Avenue. And basically, it says that developers are getting more and more ownership. And software is now everywhere. Everything is codified, basically. If you're looking at it, everything tries to be automated. And so, basically, they're the kings of the world. [0:37:31] Jeff: Yeah. And I think the responsibility that software developers take on goes more and more into the physical world. One simple, silly example is that you hear stories about locks that can be controlled via Bluetooth or via Wi-Fi. And, obviously, by now, humanity has an understanding of the physical reliability of a lock and making sure that it's proof – I suppose hammer-proof and whatnot. But from a software point of view, often, and I think you'll agree, in less mature organizations, the only consideration will be that it works. And there will be almost no security consideration. It'll possibly be enough to intercept some HTTP traffic and you have the lock open, which is relatively innocuous.
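To tie the Open Policy Agent description above to something runnable: OPA takes a JSON input, evaluates Rego rules against it (plus whatever external data it has been given), and returns a JSON decision over a small HTTP API. Here is a minimal Python sketch of the salary-service example, assuming an OPA server running locally on its default port 8181 with a trivial role-based policy loaded; the package name, roles and field names are all illustrative:

```python
import json
import urllib.request

# A Rego policy along these lines would be loaded into OPA beforehand
# (names are illustrative):
#
#   package example
#   default allow := false
#   allow if input.user.role == "hr-manager"

OPA_URL = "http://localhost:8181/v1/data/example/allow"  # default OPA port

def is_allowed(role: str, resource: str) -> bool:
    payload = json.dumps({"input": {"user": {"role": role},
                                    "resource": resource}}).encode()
    request = urllib.request.Request(
        OPA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        # An undefined decision comes back without a "result" key,
        # which this sketch treats as "deny".
        return json.load(response).get("result", False)

if __name__ == "__main__":
    print(is_allowed("hr-manager", "salaries"))  # expected: True
    print(is_allowed("intern", "salaries"))      # expected: False
```

That JSON-in, boolean-out shape is also what makes OPA usable outside Kubernetes: anything that can produce JSON and act on a decision can delegate its policy evaluation to it.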
But what scares me personally much more is the digitalization of things like cars. Because those are literally mass-accessible weapons, and just thinking about the consequences of bad people exploiting vulnerabilities in those is, yeah, something that could keep you up at night. [0:38:47] David: Yeah, I can totally relate to that. I think that, in general, the IoT world is kind of concerning, because there are so many IoT devices that are built very cheaply and mass-produced. The problem is that, usually, those manufacturers don't really care about security. And so, it's really easy to exploit them. And because they are so massively distributed, you can imagine the damage it can do. In terms of cars, there are already a lot of exploits that prove that we still need to be concerned about it. I believe that we need to improve in that area, and security should be one of the top concerns, if not the top concern, for car manufacturers in that sense. [0:39:48] Jeff: Let's try and wind up this episode with a discussion that's veering a bit more into the philosophical. How do we tackle this problem? I see two possible approaches. One is more education for software developers, making security considerations almost a mandatory part of it. You can enforce that at universities, sure. But if you have anyone who is autodidactic, you can't enforce that they learn about security considerations. And then the other vector by which you could change this is shifting the responsibility onto the companies that launch products and making them responsible for the consequences of various security breaches or vulnerabilities. Wouldn't that negatively affect the ability of, let's say, new startups to launch, because they'd have to dedicate too much time to security engineering? [0:41:01] David: I don't think so. And I'll explain why. I think that, in the end, if I had to summarize what we need to do, it's basically to start by creating some security culture in any company, whether it's a startup that was born just yesterday or it's a big company. Usually, in the big companies, you already have that. Because at some point, they need to do it. And if they don't do it, they won't be able to sell, because customers will ask for it. But there is this false sense that we don't need security when we're a startup. Because what can happen to us? We don't have a reputation. We're about to die if we're not shipping our software. The problem usually is that, the day after, you realize, "Well, now you need to do something." And usually, you have a lot of technical debt by the time you do something. I don't think it should take a lot of time, especially if you have products that are built to help you with that. And that's why I really believe in this minimum viable security. Because the goal of it is basically to get to some baseline without which you would actually be ashamed to deploy your product to production. And so, the idea here is only to take care of the minimum and not try to put everything in from day one. Because I agree that that would kill your business. But there's still some minimum that it would otherwise be negligent to skip. Yeah. It would be really a shame to deploy without it. [0:42:46] Jeff: Optimally, security would be a day zero consideration, which it can be, relatively frictionlessly, with JIT.
But even if a company focuses on just getting the functionality out and makes security a day two consideration, JIT will make it relatively non-overwhelming to make sure that, going forward, you're secure, and then you can tackle the backlog at your own pace. If I understood that right. [0:43:15] David: Yeah. Exactly. Think of it as having a hole in your boat. The way you want to deal with it is not by trying to bail the water out of the boat. You want to plug the hole before you do anything else. And it's the same thing with security. First, try to stop the bleeding and ensure that you can deal with all the new code, so you don't add to your technical debt. And then you can deal with your backlog. [0:43:45] Jeff: David, thank you so much for coming on the show. It was really interesting talking to you. Remind us again – where can people find out more about JIT? [0:43:54] David: JIT.io. Very simple. JIT.io. Thank you very much, Jeff, for having me. [0:43:59] Jeff: Thank you. [0:44:00] David: Have a great day. Bye-bye. [END]