EPISODE 1823 [INTRO] [0:00:00] ANNOUNCER: APIs are a fundamental part of modern software systems and enable communication between services, applications, and third-party integrations. However, their openness and accessibility also make them a prime target for security threats, and this makes APIs a growing focus for software teams. StackHawk is a company that scans and monitors source code to obtain the full scope of an organization's APIs and applications, and runs tests to identify vulnerabilities and address them pre-production. Scott Gerlach is the Co-founder and Chief Security Officer at StackHawk and previously worked at SendGrid and GoDaddy. He has an extensive background running security operations and engineering, and in this episode, he joins the show to talk about the challenges around API security and leading-edge strategies to address them. Gregor Vand is a security-focused technologist and is the founder and CTO of MailPass. Previously, Gregor was a CTO across cybersecurity, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk. [EPISODE] [0:01:23] GV: Hi Scott, welcome to Software Engineering Daily. [0:01:25] SG: Hey Gregor, thanks for having me. It's super awesome to be here on Software Engineering Daily. [0:01:29] GV: Yes, great to have you here. You're here on behalf of StackHawk as a co-founder, and we're going to be hearing all about StackHawk and what the platform does. I mean, without any spoilers here, it's all about security. It's about API security. So, this is a topic I love to dive into, and I think API security especially is something where I've always wondered how best to do this. So, we're going to be going into that today.
But in true SE Daily fashion, it would be great just to hear a little bit about yourself. Just a brief, kind of, what was your path to co-founding? And you're also the CSO of StackHawk. So, what was life up until that point? [0:02:11] SG: Yes, absolutely. Wow. So many questions. First of all, just a little bit about me, I'm Scott Gerlach. My background is in running security teams, security operations, security engineering. I spent almost 10 years running different security teams and functions at GoDaddy, which is essentially like 50 years of security, because you give lots of random people access to your service and see what goes wrong. That was super fun. I learned a ton. After that, I was the CISO at SendGrid, the email company. Hopefully you know SendGrid. Lots of devs know SendGrid because it's so easy to get it installed and up and running and send an email. After SendGrid and Twilio got combined is when I left. I took a little break looking for my next gig, next role, and I met Joni, our CEO and co-founder, and she was out digging around in the application security space. We had a pretty good conversation where she was asking questions about application security and why it's tough, and I was kind of unloading a little bit because I was in that freewheeling, I-don't-have-a-job space. So, I kind of unloaded on AppSec a little, about how terrible it was and how underserved devs were in this process. Literally, I think one of the things that we talked about was literally the people that can fix this problem are the last ones to know about it. In almost every scenario, we go, "Hey, devs, publish code and we'll get back to you in like a month or so. Talk to you about all the security problems in there." So, after we were done talking about this AppSec thing, I was feeling pretty full of myself, but I was also like, "Well, that was pretty opinionated. She probably is not going to talk to me anymore." Like two weeks later, we started StackHawk.
[0:04:07] GV: Awesome. That's a great co-founder story. I love that. I mean, exactly, application security is something I've spoken a bit about on other episodes, just how I think there's a misconception that developers are taught this, if you do like CS or something, that this is like part of the syllabus, which is partly true, but it's kind of like an afterthought. And when it comes to just general day-to-day work, security for a developer, again, unfortunately, the business end isn't often thinking about it, and they don't see it until it's gone wrong really. So, it's been this slow burn to get businesses and developers more understanding of the problems. But I think the problems are pretty well understood. I think just to set the scene, I thought it might be helpful, just a couple of acronyms here, DAST and SAST. What are those, and which one is StackHawk, and why? [0:05:00] SG: Sure. DAST and SAST, the acronym soup of information security. Dynamic Application Security Testing, that's DAST. So, testing a running application, a running HTTP, classic like web 1.0 web server, or an API, an HTTP API, so a REST API or GraphQL API, gRPC or, God forbid, SOAP. That's what DAST specializes in testing. Testing that running version of the application is really beneficial because of a couple of different things, mostly discoverability and exploitability. So, when you are testing with a DAST tool, it does a really good job of going, "Hey, this is super important because it is discoverable. Someone can find it and can exploit it." And insofar as those two things are true, what's great about it is it will tell you, it will help you prioritize your time and keep you away from kind of what has historically been the noise generated by SAST or Static - [0:06:11] GV: Application Security Testing. [0:06:12] SG: Static Application Security Testing. This is like, so AST means two different things in two different contexts.
Anyway, Static Application Security Testing, which is really great at pointing you at exactly the line of code that you want to go fix. But it's also really great at, "and also, here's 9,000 of them," and has no context of what is discoverable, what is exploitable. And where StackHawk fits in is kind of, we're trying to make our way to the middle. So, doing DAST, but doing it in more of a gray box, white box fashion, so we are also aware of code, so that we can point you to, "Hey, this is discoverable and exploitable, but also, here's the code that you have to go fix." With the advent of LLMs, maybe the code fixes itself, that kind of stuff. So, really helping people find security vulnerabilities in HTTP APIs and fix them as quickly as possible. As engineering teams and as security teams, we can get back to the business of helping the business run, which is the ultimate goal of what everyone's doing. [0:07:22] GV: Yes, absolutely. So, that's a really nice sort of framing. StackHawk kind of sits in that sort of semi-middle ground. Taking the topic here, which is that StackHawk is proactive, it's a proactive approach to API security. I guess question number one is sort of like, what does that even mean? Like, to be proactive? And then, looking at this in context, I think a lot of developers listening today are probably pretty aware, you work for a company and you've got API endpoints, but you've probably got a ton of API endpoints and some are actually being used, and then you've probably got a bunch of endpoints that are still there, but no one's actually taking them out, and this kind of thing. So, talk about that, the proactive approach of StackHawk, as well as sort of this in the context of API sprawl, which is kind of a nice way to term it. [0:08:15] SG: Yes, for sure. So, the proactive side is one of the things that I've kind of lived my security life by. I worked at a hosting company. That is a super reactive environment, no matter how well you do at it.
Being reactive, especially with vulnerabilities that you can find, sucks. Like, it's just not fun. You know what I mean? I was trying to think about this earlier as an analogy, like building your own airbag and putting it in your car and never ever testing it. And the only time you get to test it is when you run it into another car. Hopefully, that airbag is really good, or some kind of protection mechanism is really good. Putting untested software, security-untested software, out on the internet is sort of a recipe for a fire drill later down the road. Whether it's now or later, you're more than likely going to run into, "Holy crap, someone's attacking us. Let's see if we can contain this before it turns into an incident." And if it does turn into an incident, now it's a whole big thing where you've got engineering teams and security teams all working, stopping their regular work and working together to do incident containment and remediation, which is going to happen, but hopefully, you want it to happen as little as possible. Reducing the amount of risk that they're putting out onto the internet or in production is one of the things that security teams are supposed to be doing. The reactive nature of API security, where you publish the APIs, watch for attacks and then react to them, that seems scary. It's super valuable in an "I think I've put out the best thing I can put out. Now, let's see if and when we get attacked, we can respond and block attackers and recover" sense. But doing it without tested software, holy moly. It seems really scary to me. I don't know how other people think about it in security teams or how engineering teams think about it. I think engineers generally want to put out the highest quality, most secure code that they can. The barrier to "is this secure?" was really, really high. [0:10:41] GV: And I think the way I sort of have often thought about it from, like, why is this scary?
It's scary because APIs are literally just roads to your database. That's pretty much like the simplest way to put it, right? So, they just happen to be roads with lots of rules, but if those rules can be figured out or so on and so forth, then you've just got a straight road to your database. I mean, that's why it's scary, right? [0:11:03] SG: Yes. And API sprawl, not only sprawl, but the explosion of APIs, is just compounding the problem so much. You've got the introduction of LLMs, which is making it easier for software engineers to write software. It's reducing their comprehensive cognition of what's actually happening in the code. So, like, generally we understand what's going on, but we have less of that manual, one-key-at-a-time authoring of some of this code. We kind of naturally lose some of the what's-happening-in-the-code. And then with the ability to CI/CD, the agile process, get everything out the door so fast, APIs are just insanely growing. So, I don't know if you know this, but API traffic is approximately 80% of web traffic today. Like, everything is API. That's crazy. You're absolutely right. That is the riskiest place because it's basically a direct connect to the database, where all the risk is. That's where all the data is stored. It's where all the good stuff that threat actors want is, right there in the database, and the API is the gateway to get there. [0:12:16] GV: Exactly. I mean, could you maybe just speak a bit to how does StackHawk approach the discovery and the management of this complexity? Then sort of going on from that, how does StackHawk actually simulate real-world attacks, basically? I mean, that's always the bit I've been fascinated by. I mean, I thought about this space a few years ago and I just assumed someone was doing it. So, it's probably why I didn't go down this road. But the simulating of the attacks is kind of, I think, super interesting.
But let's start with how do you even go about the discovery bit? And then, how do you simulate the attacks? [0:12:53] SG: Yes, totally. So, the discovery bit, we think of source code as the source of truth for any company that's writing software for any value at all. You might have APIs that you didn't author running around in your organization, but you probably also can't fix them. So, that's just an upgrade or a patch. But we think about API discovery insofar as I wrote code that makes this API work and that code is stored in my source code repository. So, let's go look there, let's start there. One of the very first things, one of the newest things we built, one of the first things you experience as a StackHawk user is connecting StackHawk to your source code repository. What we will end up doing is running some StackHawk magic on that source code and going, "Hey, we think this repository builds a REST API, this repository builds a web 1.0 web service. We think this one builds a SOAP service," those kinds of things, to be able to go, "Here's all the source code that builds APIs." Now, they might get stitched together later or they might all end up in a gateway. But that's sort of how we think about it, and how we think about testing is test that smallest bit of code that you can, because it makes it so much easier to correlate the problems with where to fix it, makes the problem set a lot smaller and way more distributed. So, getting that information to the teams that work on those things is super important. That's how we deal with kind of the discovery of APIs and being able to say, "Here's the attack surface that you have based on what code you've written." [0:14:31] GV: Nice. And then moving to the sort of, then what happens in terms of, okay, you've now discovered, and then what happens? [0:14:39] SG: Yes.
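As a toy illustration of the repository-scanning idea described above (this is not StackHawk's actual detection logic, just a sketch of the concept), you could imagine walking a repo and matching framework signatures in the source:

```python
import os

# Illustrative marker strings suggesting what kind of API a repo builds.
# Real detection would be far more sophisticated; these are stand-ins.
SIGNATURES = {
    "@app.route": "REST (Flask)",
    "@RestController": "REST (Spring)",
    "graphql": "GraphQL",
    "soap:Envelope": "SOAP",
}

def guess_api_types(repo_dir):
    """Walk a repository and report which API styles its source hints at."""
    found = set()
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file, skip it
            for marker, api_type in SIGNATURES.items():
                if marker in text:
                    found.add(api_type)
    return sorted(found)
```

The point of the exercise is the one Scott makes: the repository is the smallest testable unit, so attack surface maps cleanly back to the team that owns the code.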
So, the really great thing about APIs, the great and terrible thing about APIs, is there's not like a classic web browser to go looking around at. Even if there is that web browser, I like to use the Instagram analogy: think about when you're looking at Instagram and you're scrolling and scrolling and scrolling, and that never ends. It used to, but they fixed that problem. All that is, is just API call after API call on the backend, pulling more content and more feed. You can't ever complete the task of looking at the front end to be able to get to the backend. So, thinking about testing the backend directly, whatever flavor that API is, is the most efficient route to testing that kind of data flow. Being able to go, "Hey, there's a REST API in here, and there's an OpenAPI spec that's either generated by the code or it's hand-rolled or StackHawk helped you build it." Then ingest that and go, "Okay, I understand how this API works. I understand what kinds of data need to go into these URL paths or the post parameters or whatever that is to drive the URL correctly, and also try to attack the API itself with valid and invalid data." Those are the keys to being able to really get into an API, test it pretty thoroughly and find out whether there are or are not problems. The next part of that is really kind of business logic testing. That's a tricky problem. One of the things that we did for that was just be able to write custom tests. So, if you've got some custom business logic in your application that says, "Gregor shouldn't be able to get to Scott's information about his address and his family information," you could write a test for that, right? I'm Gregor, see if I can get Scott's information. If that works, now I can throw an alert.
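The custom business-logic test Scott describes, Gregor trying to read Scott's data, is essentially a broken-object-level-authorization check. Here is a minimal sketch of that idea, with a hypothetical endpoint, token, and client rather than StackHawk's actual test syntax:

```python
# Sketch of a cross-user authorization check: authenticate as one user,
# request another user's resource, and flag a finding if the API serves it.
# The path, token, and fetch callable are illustrative stand-ins.

def check_cross_user_access(fetch, victim_id):
    """fetch(path, token) -> (status_code, body). Returns a finding or None."""
    status, body = fetch(f"/users/{victim_id}/profile", token="gregor-token")
    if status == 200 and body:
        return {
            "severity": "high",
            "finding": f"authenticated as Gregor, but read user {victim_id}'s profile",
        }
    return None  # a 403/404 means the authorization rule held
```

The scanner version of this would run on every build, so the check fires the moment a code change breaks the rule rather than a month later.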
Because that kind of thing is so hard to generalize, or those kinds of workflows are so hard to generalize, being able to help write custom tests for that, a lot of our customers found that super helpful. [0:16:55] GV: What are the differences, complexity-wise, between REST and GraphQL? Because, I mean, I think GraphQL is where, at least just from where I sit, that's where I see even more problems have come up recently from an API standpoint, where those exposing that GraphQL API aren't fully aware of all the traversals that can happen and what can come back. So, how does that look? Is it a different process or is it just the same thing? [0:17:26] SG: Yes. The complexity and differences between REST and GraphQL are wide and vast. The very first difference in the two is how they document. So, GraphQL is really good about self-documentation and REST is not. And that's the very first thing, which is a bonus and a drawback for GraphQL. You don't have to mentally think about documenting how the GraphQL API works. Therefore, it makes it really easy for us to go test it. But it also makes it really easy for anyone to go find that information and kind of start traversing their way through the graph, finding information that they shouldn't be able to get to with recursion, different recursion attacks, those kinds of things. That's not really a drawback of GraphQL per se. It's just kind of how it works and how it's intended to work to power the applications that it's intended to power. So, I wouldn't say it's really a drawback. It's just one of those differences you've got to know about to make sure that you understand why it's a problem and you can test it and find it and fix it. [0:18:34] GV: Yes. I think this is, I guess, what's to me great about StackHawk, that it can cover both. So, it doesn't particularly matter which one. You go for the API type that suits your business case without then having to overthink the security side of that. [0:18:50] SG: Yes.
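One concrete consequence of GraphQL's self-documenting, traversable graph is the recursion attack Scott mentions, which servers commonly counter with a query depth limit. Below is a naive brace-counting sketch of that defense, purely to illustrate the idea; it is not a real GraphQL parser:

```python
# Toy depth limiter: count nesting of selection sets in a GraphQL query.
# A real implementation would parse the query properly (fragments, strings
# containing braces, etc.); this sketch only shows the shape of the check.

def query_depth(query: str) -> int:
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def allow(query: str, limit: int = 6) -> bool:
    """Reject queries nested deeper than the limit before executing them."""
    return query_depth(query) <= limit

# A recursive friends-of-friends query of the kind used in traversal attacks:
recursive = "{ user { friends { friends { friends { friends { friends { name } } } } } } }"
```

Run against the `recursive` query above, the limiter rejects it, which is exactly the kind of behavior a DAST scan can probe for from the outside.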
Or the language, right? So, one of the really great parts about DAST, and why we kind of picked DAST, was it is language agnostic. Whatever language you decide to write in, like next week you decide that Rust is where it's at and you're going to transform all your stuff to Rust and you've got terrible language support from static providers. When you're building an HTTP application, DAST has the ability to test it no matter what language it's written in. So, that's always a really great thing. For me, as a practitioner, being able to let the engineering team innovate and develop and try new and different things, and be able to secure those same things as you're building them, is super important to me. [0:19:36] GV: Yes. I think that's a great call out. If we just sort of move on to, we have to these days always touch on Gen AI and AI generally, it is pretty pertinent here, right? Because I'm an absolute convert to Cursor. I love using Cursor to write code now. Yes, it's turning out a ton of code. As long as it looks kind of okay to me and it works, I'm like, yes, great, we're moving on here. I'm not taking too much time over certain parts of the application now where I'm kind of confident that it's done its job. It doesn't look awful. It can be optimized a bit later on. Just to be clear, I'm using it a little bit more on the front-end side right now than the back-end side, because it does help to be more aware of your logic. But this is the point, that developers are very much using Gen AI now to generate quite a lot of code. On the other side, we've obviously got the possibilities that people can be using LLMs to generate code or just instructions on sort of how one might go about getting into this app through the API, et cetera. How is StackHawk thinking about this? How have the last two years kind of unfolded for evolving the platform to kind of meet this? [0:20:45] SG: Yes.
So, the Gen AI problem, the LLM being able to write code, doing it faster. We covered that a little bit. It's contributing to the explosion of APIs in exactly the fashion that you said. Like, generally this looks right, it's going in my code, I'm not sure exactly what it does, and that has been shown to date to introduce vulnerabilities. Sometimes it does, and hopefully, and I think it will in the future, get better at not doing that. I think what you're going to see happening is, as you have Copilot or whatever in your IDE and you're writing code, I think you're going to start getting the Clippy experience of - hopefully not the Clippy experience, but the spell-check version of Clippy in the IDE that goes, "Hey, you just introduced a security vulnerability here. Would you like to fix it?" Those kinds of things. So, instead of running SAST on your code, doing it live while that's happening, I think you're going to get a ton of value out of that in the near future. Near future, I mean like a year or two, as contextually aware LLMs continue to get better and better and better, especially around code. So, that's one thing. The thing that you still have to test for is things that you don't see in code, right? So, the pattern that you don't see in code is a business logic problem or an authentication problem. Sometimes you can't see that information in code. So, you kind of have to still test running applications for how the app responds to inputs and outputs, right? There's a world there where those things, I think, get more efficient, developers get more efficient. The LLMs will kind of outrun our junior devs who aren't really aware of what's going on. When you have an LLM write some code for you and it's awful, you have to be able to see that, which is unfortunately a thing. I run a Kubernetes server at my house here just because I like to torture myself.
Sometimes I go, "Hey, ChatGPT, help me write a deployment manifest for this service and blah, blah, blah, and I want it to expose itself on the NGINX web server and have a TLS cert on it." It completely makes it up and it's totally wrong. And you're just like, "No." But the trick is, today you have to understand what it's supposed to look like so that you can go, "Nope. That's completely wrong." I think that starts to go away as well, like in the future, that whole total hallucination of what a solution looks like goes away. So, it's just going to keep getting faster and faster and faster, and we as software engineers and security people are just going to become less and less intimately associated with this code. It's just going to be a thing that we're generating and getting out to the market to provide value for customers as fast as possible, and we're going to have less of that pride of "I pumped hours and hours and hours into this code" like we kind of do today. [0:24:04] GV: I'm already there. I've been using, you know, Cursor now for about six months, I think, maybe five months. It is that, I just feel less and less sort of precious, I guess, about the code. I'm curious, I mean, I think that maybe helps in this case, because I think with tools like StackHawk, at least it's not a person telling you you've done something wrong, but it's still kind of annoying when something's like, this thing you wrote, you need to go fix it. Would you say that now, if developers are able to kind of utilize more from the LLM generation side, there may be a bit less pressure? And actually, at the end of the day, it's just kind of robot versus robot, sort of saying, "Well, that robot didn't write the right thing." And you're just kind of the pilot overseeing the whole thing going, "Okay, well, that thing just didn't write the right thing. And StackHawk said it's not correct."
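For contrast with the hallucinated manifest Scott describes, here is roughly what a sane, minimal version of that kind of setup looks like: an NGINX Deployment plus a Service, with a TLS cert mounted from a Kubernetes secret. The names, image tag, and secret are illustrative stand-ins, and it is exactly the sort of thing you still have to be able to eyeball:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 443
          volumeMounts:
            - name: tls
              mountPath: /etc/nginx/tls
              readOnly: true
      volumes:
        - name: tls
          secret:
            secretName: web-tls   # TLS secret created separately
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 443
```

The `selector`/`labels` pairing and the secret reference are the details LLM-generated manifests most often get wrong, and they fail silently if you can't spot the mismatch.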
And actually, maybe it's that sort of dynamic that's going to play out. I don't know what you think. [0:24:59] SG: It could. Human emotion is the thing, and every time a human interacts with a human and goes, "You did it wrong," no matter what words you use, that's ultimately what ends up happening. You take a little bit of offense to that. You know what I mean? It's really hard to communicate, "Man, this thing you did is super valuable. I understand what you're doing. I know that you put your heart and sweat and soul into it, and it took you forever to do. This one thing could be better. There's a problem in here." Everyone skips that first part and just goes, "This part sucked." It feels like you're terrible at your job. To your point, that could turn into a deflective, "No, I'm not terrible at my job. The robot was. Now, I'll just go fix that," maybe. I think the more important part is, the more in context of what you're doing you are aware of the problems, the more likely you are to just be like, "Yes, yeah, no problem. I'll just fix it." You don't get mad at unit tests when they fail. You go, "Oh, okay, well, I've got to fix that," and linters and all that good stuff. It's annoying, but you go, "All right, I've got to fix it." [0:26:09] GV: Yes. So, I mean, just kind of wrapping up on the Gen AI side. At the end of the day, code is being generated much faster, and sort of that inevitably leads to code that maybe hasn't had a full check or a full kind of, yes, sanity check from a human. As you've called out, Scott, I've seen code, yes, come out and, as you say, it's just completely wrong. But it's only from having years of experience that I can quickly look at it and go, "That's completely wrong." And then just can it and then be like, "Okay, either this needs a different prompt approach or just actually, you know what, I'll just start it myself and then see where we go from there."
I think clearly, something like StackHawk is able to sanity check that far faster than humans. And as you call out, there's almost like a Catch-22 now with sort of junior developers coming in. Are they supposed to be using this from day one? Are they or aren't they? How are they going to learn to sanity check? Yes. So, it's kind of interesting. I guess, could you almost look at StackHawk in this context as sort of just like a, I don't want to say like checking the homework, if you know what I mean. But like, it's able to kind of sit there and just sort of, yes, be that sort of overarching checkpoint. I mean, we haven't even touched on the classic phrase shift left, which I think probably applies to this somewhat. [0:27:30] SG: We talked to lots and lots of different people using kind of LLM assistants, code assistants, and one of the things they often say is, "Who's going to check this code? The LLM?" Well, like, when would the LLM go, "This is wrong"? Or maybe even more importantly, when would the LLM go, "This is right"? And do you trust the LLM to check the work that it produced? Because is it going to go, "No, what I wrote there is completely insecure. You should rewrite that for me"? You know what I mean? Having something else that can check it is super valuable. [0:28:05] GV: Yes, that's a great point. That positive affirmation, it doesn't, to my knowledge, give that yet. I don't think I've seen anything, especially from a security standpoint, where it says, "Here's this code, and I've definitely, definitely checked. There is nothing wrong with this code." [0:28:20] SG: Yes, especially if you're writing like a snippet, in the context, or the context of how that snippet is included in the rest of the app or the code base, like, those are all really hard problems. I think it's going to get solved over time, but right now it's like, okay, somebody, something has to check that this is being done the right way.
[0:28:42] GV: So, just moving on to DevX. You know, we've got quite a technical, heavy listener base. We do have non-technical listeners as well. So, just sort of keep that in context for when we're talking about containers and so on and so forth. But just maybe from a high level, what does that look like for a developer? Someone on your business team or security team has come to you and said, "Hey, we're implementing StackHawk, off you go. Go figure it out." What does that look like? [0:29:09] SG: Go figure it out. I think for the most part, when we started the company, the whole idea was how do we put the right information in the right hands at the right time? Because there's tons of security tools out there that do a great job at serving security teams, and they do a great job at kind of building these reports that you look at, and you have these huge pie charts of unfixed things and they never move. So, our goal was like, how do we help the people that can actually fix the problem understand there is a problem so they can actually fix it, and we're staring at shrinking pie charts or some other kind of bar graph, whatever. That's how it started. So, the way that we build the tool, and how you use it, and the interaction, and the documentation that we built, and all of that is really, really dev-friendly. Configuration as code. We use YAML, love or hate YAML, we didn't do it in XML, so be happy about that. That whole process of thinking about how automation works, and how you're going to wire this up in CI, and how you're going to continuously test, and how you can test on your laptop while you're writing. You put your little snippet of code into debug mode and you can test it and know what's going on before you spend 10 minutes in the CI pipeline and then it pukes. And information architecture: here's the information you need to know, and the rest of it is off to the side. If you want to know, you can know, but here's the information you need to know.
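To make the configuration-as-code idea concrete, a scan config in this style looks roughly like the sketch below. The field names here are illustrative stand-ins; the exact schema is StackHawk's own, so check their documentation rather than copying this verbatim:

```yaml
# Illustrative shape of a scan configuration file checked into the repo.
app:
  applicationId: <your-app-id>      # ties scan results back to the platform
  env: Development
  host: http://localhost:3000       # the running app under test
  openApiConf:
    path: /openapi.json             # the spec that drives the API scan
```

Because the file lives next to the code, it rides along in CI the same way the app does, which is what makes the "test on your laptop, then the same thing in the pipeline" workflow possible.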
That's how we really, really started with the tool and built the tool, and people super love it. We've got a ton of customers who are classic security teams. They have had that kind of centralized security AppSec force that is trying to keep up with the developer teams. I don't know if you know this ratio, but generally the ratio of developers to security people or AppSec people is 100 to 1. So, you've got like one security person trying to support a hundred developers, never going to keep up if they can't get this information out in a more timely fashion. So, almost every single time, we run into that central team who's like, "This is broken, I can't do it like this. I need to be able to do more automation, get more information distributed to more teams." They kind of take it on themselves and go, "Start small," and go, "Okay, I see how this could work." We've recently had more than 10 customers do this. Part of our onboarding is a developer training thing. Once the security team is ready to go, we'll come in and do a developer training, do it virtual or in-person, and really help people understand what we do and how we do it. It is crazy to see the developer teams just take off. It's crazy to see them come in there and go, "Oh yes, this makes a ton of sense," and then just start ripping away at configurations, and they're like, "My app is covered, or my four apps are covered," and just go. Going from we're testing one to three things, or less than 10% of our things, to now I'm testing over 60% of all of the things that we build from our source code, in like days, is a crazy improvement, and it kind of blows some people's minds. But it's all because when you put the technical tool simply in the technical people's hands, it just goes fast. [0:32:35] GV: It's all container-based. Is that right? [0:32:39] SG: Yes. Well, it's not all container-based. Obviously, StackHawk is a SAST - oh, I'm sorry. Jeez. Here we go. Acronym soup some more.
StackHawk is a SaaS platform, but the testing engine, and this is really, I think, unique to StackHawk, the testing engine is a Docker container or a Java executable that you can put wherever you need to put it. It's maybe the only Docker container that's designed to be completely ephemeral, like Docker containers were designed to be. Does its job. Does its test. Uploads its results to the StackHawk platform and then goes away. It's made to be really flexible, to fit into an environment no matter kind of how you're doing development. If you have a classic dev-test environment, we can run in there. If you have a completely ephemeral CI/CD process, we can run in there. If you want to test just on your laptop for some reason, you can do that. All of that flexibility to help people improve their application security programs, no matter how their engineering team works, is how we thought about designing the product and the platform, as well as the testing engine. [0:33:49] GV: Nice. Yes, as you called it, it has a lot going for it from the developer standpoint, and I believe you mentioned YAML. I mean, that makes sense. I mean, yes, everything in this realm is now sort of just YAML somewhere. I think StackHawk also takes advantage of YAML overlays, I believe. You can share configurations across environments? [0:34:09] SG: Yes. YAML overlays are really close to some YAML include stuff, all kinds of just DevOps-y theory and process, right? So that you can write the least amount of code, or quickly change a configuration by kind of doing that overlay process, like this one thing needs to change. Here's how to override that with an environment variable or another small configuration file, just based on where it's running, those kinds of things, and have the main configuration stay the same. That overlay system, people get really attached to it real quick because it's super powerful. [0:34:50] GV: Yes, I was literally just going to say super powerful.
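The overlay idea Scott describes can be sketched like this, again with illustrative field names rather than the exact schema: the base file stays constant, and only the environment-specific values get swapped in, here via environment-variable defaults:

```yaml
# Base configuration shared across environments. The ${VAR:default} pattern
# is a common config-as-code convention: use the environment variable if
# set, otherwise fall back to the local-development default.
app:
  env: ${APP_ENV:Development}
  host: ${APP_HOST:http://localhost:3000}
```

On a laptop this runs against localhost untouched; in CI you would export `APP_ENV` and `APP_HOST` (hypothetical names) before launching the scanner, so one file serves every environment.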
That's exactly what that sounds like. Just wrapping up on the DevX side, I believe authenticated scanning is something that's also possible. So, can we just speak a bit to what even is that and how does that look? [0:35:06] SG: For sure. In DAST land, authentication is the key. It's the hardest piece of doing DAST, and it is also the most valuable piece of doing DAST, because any data worth its salt is behind - well, should be, and probably is, and hopefully is behind some kind of authentication process. So, being able to get past that is wicked important, to be able to test what's going on behind that authentication. There's tons and tons and tons of ways that authentication works. Unsurprisingly, devs often know how it looks and are pretty good at going, "Okay, this is the configuration, and this goes here, and this goes here, and this goes here." The secret is stored in our CI system, and all the DevOps-y processes. I used to say, when we had a little less than 100 customers, we've talked to 100 customers and we've seen over 700 different kinds of authentication, and that ratio is probably still true. People do some real interesting stuff with their auth processes, but we've built a super flexible engine that has a ton of standard kinds of auth, like OAuth, form-based auth, JWT auth, all of the standard types of auth, but then the ability to customize auth to make it work for whatever weird scenario somebody cooked up because they were having too much coffee on a weekend, or other things. Being able to customize that auth so that it's repeatable and works was super important. That's a core piece of it. Some of the old ways that auth used to work in DAST tools is like, record a web session: turn on the recorder, I'll punch in my username and password, and then you just replay that.
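[Editor's note: As a concrete illustration of the configurable authentication Scott describes, a form-based login in StackHawk's stackhawk.yml might be sketched roughly like this. The structure follows StackHawk's published configuration format, but the paths, field names, and indicator patterns here are hypothetical examples, not a verified recipe.]

```yaml
# Hypothetical authenticated-scan configuration sketch
app:
  authentication:
    usernamePassword:
      type: FORM            # form-based login; token/OAuth flows are also supported
      loginPath: /login     # hypothetical login endpoint
      usernameField: email
      passwordField: password
      scanUsername: ${TEST_USER}   # credentials injected from CI secrets
      scanPassword: ${TEST_PASS}
    loggedInIndicator: "\\QLog Out\\E"           # regex proving the session is live
    loggedOutIndicator: ".*Location:.*/login.*"  # regex showing we were bounced
    testPath:
      path: /account        # authenticated page used to verify login worked
```

[The logged-in/logged-out indicators let the scanner detect mid-scan session loss and re-authenticate, which is what replaces the old record-and-replay approach.]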
And that just doesn't work with a ton of, for good reason, a ton of new web technologies and how security processes work, because you don't want that to happen, right? You literally are trying to prevent that from happening in your app, and hoping that your security tool can do it is a little crazy. But not to mention, APIs don't have front ends. Not all of them have a front end. So, you can't record that authentication. That real machine-to-machine authentication process and capability is really critical to being able to test, and test quickly and thoroughly. [0:37:36] GV: Yes, just sort of wrapping up on that bit. I mean, just to throw another name out there that I think a lot of developers are probably using, or have at least definitely heard of - Snyk, for example. Just to sort of put it in context, how would you compare yourselves to them? I noticed you put out some content recently, just about the speed that StackHawk can bring over something like that. What would you say to that? [0:38:01] SG: Yes. We still have a pretty tight integration with Snyk, where we combine SAST and DAST results. Because of what we talked about earlier, thinking about testing small, testing in relation to source code repositories, and how Snyk SAST works and most SAST works, it's like, here's the problem in this repository. If we're doing the same thing, we can really tightly correlate those findings and kind of point at that code. The reason we can do that is because we can test - like, most of the applications or APIs that we're testing, we can test in under 10 minutes. So, speed is a critical factor, especially if you're like, "Yes, I would like to integrate this into my CI/CD and software delivery process." It can't take days. It just flat cannot. Not only is that not developer-friendly. That's not people-friendly. I don't care what your job is. Waiting a day for something to finish its job sucks.
That's kind of the capability they bought, and I think they bought - I don't know why they bought it. Their goal there was to kind of round out their AppSec suite, to say, "We have SAST, and we have DAST, and we have secrets, and we have SCA, and we have the whole Gartner AppSec platform." I just don't think they stayed close to their core, dev-friendly AppSec mantra when they did that. It's very much a checkbox security tool where you put something on the Internet and you scan it and hope that there's nothing bad in there when the results come back in a couple of days. [0:39:37] GV: I agree. I mean, I used to use Snyk back in 2015, '16, this kind of thing. It's certainly still there today, but it's not a tool that I sort of find terribly inspiring. I think the DevX focus that StackHawk brings should be a clear reason to give it a look. [0:39:56] SG: If you're thinking about tools like that, even a tool like GitHub Advanced Security is reimagining and rethinking how that process works, instead of, like, hook it up to your repository and then find all the problems and start sending them out. They started at, "Hey, this dependency needs an update. Here's a PR." And then they quickly moved on, because getting this information to security teams and having security teams push tickets around becomes really inefficient. GitHub Advanced Security is working on that PR process, that iterative feedback, and things like auto-fix with their LLM, those kinds of things. They're taking that more how-do-I-serve-developers mantra and pushing it way, way forward, I think, in a much better fashion. So, that's super cool, I think. Being able to get that information to both parties is really important. Get it to the developers, and help security teams have oversight into what's going on. Is my program getting better? Is my development team getting better? Where do I need to spend resources to educate people because they keep making the same mistake? Those kinds of things.
Both of those things need to happen, not just one. [0:41:14] GV: Yes, I think that's some great analysis there. Just as we start to wrap up, looking at who StackHawk serves at the moment, I believe financial companies make up quite a lot of your customer base. Are there any sort of, I don't know, standout challenges there that are interesting when it comes to working on, like, financial APIs? [0:41:36] SG: I wouldn't say that there's anything that really stands out. I mean, especially in the kind of vertical that is financial services. No one walks around going, "I'm in the financial services vertical," but for the people that are handling investments and money and stuff that's really important to people, not only is there a ton of regulation that goes into that - talking about things like the Gramm-Leach-Bliley Act, which is super fun government regulation, and Sarbanes-Oxley and SOAR. There's all kinds of regulation that goes into that, and it can really weigh down a company if you're doing it the old way and wielding the security hammer of compliance. You can, however, make that an advantage to your company, and tons of our customers are doing that by integrating the security process and making it part of the value prop of being able to go fast and do it securely. There's nothing super different about the financial sector and the APIs that they're building and using to handle data. It can be one of the go-to-market specialties or go-to-market advantages for those companies to be able to meet all that regulatory compliance, keep that customer data safe, and do it fast. Companies like Capital One that are rapidly iterating, using the cloud, being able to go fast, meet regulation, and help customers save money, grow their money, spend their money, all the fun money things. I think it can be an advantage for a ton of those companies. [0:43:22] GV: Absolutely.
As we wrap up, where's the best place to get going with StackHawk? Where should somebody go? [0:43:30] SG: If you want to try out StackHawk, or you want to book a demo and ask one of our awesome team members to show you around, you can do that at stackhawk.com. There's a free trial. You can always start a free trial of StackHawk. One of the most annoying things as a security buyer is when you can't test a thing before you talk to somebody, so we tried to fix that. But if that doesn't work for how your company works, reach out to us. We'd be happy to give you a demo, help you out, and help you understand how StackHawk can help your business. [0:44:00] GV: There we go, stackhawk.com. Well, just one final question that I tend to ask most guests these days. Obviously, you've had quite a career so far through various companies, most of which I think the listener base will have heard of. If you could tell yourself something at the beginning of that journey, what would you tell yourself? [0:44:24] SG: Tell myself something at the beginning? [0:44:27] GV: That you now know - like some kind of infinite wisdom that you've picked up over your different roles and this kind of thing. Something that you could tell yourself, whether it's just to not worry about something, or do more of something, or less of something, I don't know. [0:44:41] SG: I think right now, it would be: get on the air fryer bandwagon earlier. No. [0:44:48] GV: I still don't have one. I live in a country where they're not really a thing, I don't think. [0:44:54] SG: I feel like I did a ton of stuff, right? And that was just dive head first into a ton of technologies, get out of your comfort zone, and stay out of your comfort zone. Meet new people, understand the way they do new processes. It took a while, but that's how learning goes. I think I might tell younger me to have a little more fun in between.
That's maybe the one thing, but I had plenty of fun too, so I'm not super worried about that. [0:45:24] GV: That's good. That's good. Somebody once said to me a while ago - they wrote me a note and said, "Have more fun." So, I did take that to heart slightly. Clearly, I was being a bit too serious about things at times. That's a good place to end it. Thank you so much, Scott, for coming on and telling us all about StackHawk. Sounds like an awesome platform, definitely something that I'll be looking out for. And I hope we get to catch up again in the future. [0:45:46] SG: Sounds great. Gregor, thanks for having me. Really enjoyed it. Let's air fry something together sometime. [0:45:51] GV: Definitely. Definitely. All right, thanks. [END]