EPISODE 1769 [INTRODUCTION] [0:00:00] ANNOUNCER: Offensive penetration testing, or offensive pen testing, involves actively probing a system, network, or application to identify and exploit vulnerabilities, mimicking the tactics of real-world attackers. The goal is to assess security weaknesses and provide actionable insights to strengthen defenses before malicious actors can exploit them. Bishop Fox is a private professional services firm focused on offensive security testing. Mark Goodwin is the Director of Operations at Bishop Fox, and he was previously an officer in the U.S. Air Force, where he did cyberspace operations. Mark joins the podcast with Gregor Vand to talk about Bishop Fox and the future of offensive pen testing. Gregor Vand is a security-focused technologist and is the founder and CTO of Mailpass. Previously, Gregor was a CTO across cyber security, cyber insurance, and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk. [INTERVIEW] [0:01:14] GV: Hi, Mark. Welcome to Software Engineering Daily. [0:01:17] MG: Hey, thanks for having me. [0:01:18] GV: Yes. So, great to have you here today, Mark. You are at the company Bishop Fox, which we're going to hear all about, and understand what is called the Cosmos platform and really just what Bishop Fox does in general. I think one of the first things would be really just interesting to hear about - obviously, some details I don't think you'll be able to go into, but your background is very interesting, and sort of how you then ended up at Bishop Fox. So, maybe just to introduce yourself a little bit in terms of, I don't know, straight out of college or high school, what did you do? [0:01:49] MG: Yes. Again, thanks for having me on. My name is Mark. I've been with Bishop Fox for four and a half years. I graduated college back in 2012, 2013. Immediately went into the Air Force to be an active-duty officer, where I got to do offensive cyberspace operations. So, a lot of late nights, a lot of long days, and learned a lot about threat posture, how do we go after hard targets in ways that keep the toolkit safe, keep the platform safe, and go after nation-state objectives. So, I did that for six years, six and a half years. At the end of my time in the Air Force, I started applying, and Bishop Fox was starting this new service called Cosmos. It was all about continuous offensive security at scale, and definitely very exciting - I consider it the future of offensive pen testing, to be able to stay up with nation-state actors, follow emerging threats, and identify attack surface that customers may not know about. Like I said, I joined Bishop Fox in March of 2020, started as a senior analyst running the analyst team, which is all about discovering attack surface. Did that for about a year and a half. Then I led the operator team, which is all about taking that intelligence that we derive, that attack surface that we see, prioritizing potential vulnerabilities, and providing true positives and true negatives. A true positive here is a finding that we know exists, that we've validated and we've proved you're vulnerable to, with a finding. Or we give them a true negative to say, "We went and identified that this host is out there. We thought it was going to be vulnerable and it wasn't. So, you know you're good. Your defensive team is doing the right things, they're applying the patches. The configuration that we thought we saw didn't actually exist."
So, we're able to provide that safety blanket to say - even if you're not getting findings, you should be getting something to say, "We're looking at your attack surface and we don't see anything." [0:03:41] GV: Yes, super interesting. So, I'd love to understand. So, you've given a great, sort of quick overview, I guess, of Cosmos. My understanding is that Bishop Fox started as more of a consultative company, and the platform aspect has kind of developed over a significant amount of time. I believe Bishop Fox started back in 2013 or around then. [0:04:03] MG: I believe it's older than that. I believe we're an 18-year-old firm. So, yes, they have been doing consulting for a long time. That has been - that is how we've made our name. We bring on some of the best offensive minds in the country, across the world, and let them work on really interesting projects. We dedicate time to training and developing tools that are going to make the offensive cyber community better. I have to shout out Joe DeMesy with his Sliver toolkit that he's built. Pretty exciting, and we use that almost daily. So, yes, Bishop Fox started as a consulting firm. It's still our primary way to get after securing customers, keeping them safe and protecting their customers' data, and trying to find vulnerabilities. We have our consulting arm, and then we have the Cosmos arm, and they're both continuing to grow. [0:04:48] GV: Amazing. What was, I guess - do you know kind of what was the moment when it was thought that actually turning sort of this knowledge into a platform made sense, and what was kind of the thought process behind that? [0:04:59] MG: Yes. So, when I joined in March of 2020, we had been doing - we had called it CAST, which is like continuous attack surface testing. We had been doing that for almost a year. I think it started from this mindset of, you know, how do we take our insanely skilled consultants or operators, and give them an Iron Man suit, right? We don't want to build Ultron, where we're just automating everything, but how do we instead enable and make our operators better? So, that was kind of the driving force. On our very old documentation, from the way-back times, we have that: we're not building Ultron, we're building Iron Man, to propel us forward. Part of that was looking at enterprise customers. Their attack surface is huge. How do you look across all of it when they just need help identifying what is the most important thing? We only have so many hours in a day, in a week, in a year. How do we find those things that advanced actors are going to go after? Hackers, by trade, are very lazy. We know that we can take the knowledge of 10 operators or 10 consultants, consolidate that data, make it easily attainable, and take off some of the scanning that every external pen test starts with. What if we instead offload that to automation? So, whether it's on a weekend or smart scanning as we identify new targets, it gets scanned and that data is prepared for us. So, I'm not wasting cycles waiting for a scan result to return. Instead, that data is there ready for me. Then, when I go to pick up the work, because we've had five other super sharp consultants or operators do a similar investigation, we get a jump start on what we're looking at. [0:06:30] GV: Yes. I think this is where it gets super interesting. So, my understanding is the platform has - it kind of has three parts to it. So, you've got the automated attack surface discovery, you've got the automated application pen testing.
And then, does this also extend to automated internal testing and pen testing? Is that correct? [0:06:46] MG: Right now, we're not at the automated internal pen testing yet. That's definitely on the roadmap, though. How do we do that smartly? How do we make sure that we drop into the right location? Again, for Cosmos, the entire point is scalability. I need to be able to do this on five targets, 500 targets or 5,000, and way beyond that. So, right now, the main focus is that web app pen testing, doing the research on customers' attack surface. We know it makes up the majority of their attack surface. There are going to be SSH and FTP services out there that may be vulnerable, and we're going after those. But most of a company's attack surface is going to be web-based. That's where we spent the first four years, really focusing on making sure that we can automate what we should. Like, "Hey, this is an exposure, you need to go fix this." Preparing the work for the team of, "Hey, we've identified that there's a version mismatch or there's a potential SQL injection here." We're not going to throw that exploit automatically. We want to keep that as a human decision. Because at the end of the day, the last thing we want to do is knock over customer infrastructure. That's one of the worst things that could happen. Using this insight, using this prioritized attacking, we're able to say, "Hey, we're going to automate these things that we don't care about. You need to fix it. Directory listing is something that you should not have. That's not a good configuration, because of the potential to put something sensitive there. But we don't necessarily have to go look at that." We can build automations that will look for directory listings, tell you about it, and then look for things that were exposed, and identify, "Was there something sensitive here that needs follow-on work?" So, continuous web app pen testing is where we started. Continuous external pen testing is kind of that next thing of, "All right. Now that we've got web app pretty much locked in, now that we know what we're looking for, how do we look for other services? SQL and other endpoints that may be a little sketchy to be out there, how do we start targeting those and take care of those?" [0:08:37] GV: When it comes to the automated pen testing, and I might just sidebar for a second. I mean, this is what could also be classed as effectively automated red teaming? Is that a good way to think about it? [0:08:48] MG: I think that's definitely nuanced. I think there are some portions that could be considered that. When I think of red teaming, again, with the background that I have coming from the military, I always think of red teaming as this very high-stakes thing. I need to move silently. I need to - I'm competing against that blue team. If I'm a red team, I'm competing against the blue team. I don't want to get caught. So, a red team will have to operate, and act, and make different OpSec decisions. For our continuous external pen testing, for our continuous web app pen testing, I hope you find me. That would be a good thing. If you have smart filtering or smart logging to say, "Hey, we see this thing, this isn't good, drop those packets." That's a win for you. So, that's kind of the nuance, I would say. But the methodology is still there: I want to be safe. I want to make sure we're not knocking over infrastructure.
I want to make sure that we can get the data that we need out without causing logging to occur, because it makes our job easier. [0:09:44] GV: Yes, I think that's a good nuance. I mean, some of our listeners, whilst very technical, security is probably not something they've actually had to touch a lot. So, yes, the sort of terms of red and blue teams. Yes, red being the offensive and blue being the defensive. I believe there's even at this point now, purple teaming, which is - [0:10:01] MG: Yes. There is purple teaming, where the red and blue teams come together, like, "Hey, we want to see if we can identify these attack patterns." The offensive landscape is huge. You're not only competing against white hat or even gray hat security researchers, you have black hat actors out there that are willing to cause harm and willing to - ransomware is on the rise and at an all-time high. So, purple teaming is this way to address that, where you take red team folks and say, "Run your playbooks with us." Like, tell us when you're going to launch an attack, so that we can look at our logging, and figure out, do we have these systems in place to catch that? So, yes. Sorry about the nuance. Red teaming to me is a very specific subset of offense. But I would say, everything that we're doing is on that red team offensive side. [0:10:45] GV: No, that's great. I think it's really good to be able to explain these, as you say, it is nuanced. So, I'm just kind of interested to understand maybe more, how did Cosmos evolve? I mean, Cosmos, I think you said it's now been in, I guess, production for four to maybe five years. Is that correct? [0:11:01] MG: That's right. [0:11:01] GV: Could you just walk us through almost, I don't know, year by year, if that's possible? Where did it start, and what was added and why? Because I'm just super curious how you take sort of this human process, and then decide what and when that's going to get automated and brought into this platform. So, yes, if you could walk us through that. [0:11:23] MG: Yes. It's an ever-changing process. I know, since being here in 2020, what we do now is not vastly different, but it is significantly different than what we did back then. When I joined the team in 2020, we were operating on just "the goods." We were only going to provide findings for things that are high or critical severities. Like, this is the, pull in your folks, they don't get to take a weekend, they have to fix this now. Everything else was a notification of, "Hey, you should probably fix this." As we continued to work with customers, we realized there is a wider range of severities. It goes from informational, all the way up to critical, for a reason. So, around 2021 or early 2022, we started reporting low to critical. We still don't do the informationals, just because there's so much noise associated with that. Again, Cosmos is all about the validated findings. If we're writing you something, it's something you actually need to go take a look at. So, after being here for around a year, we started reporting these lows to criticals. So, that was more work coming in as we were getting more customers. That's where we started looking at, all right, how do we identify those vulnerability types that we should automate? Does it really bring value? I gave this example earlier, but does it bring value for me to look at a directory listing, and say, "Yep, there's folders in there with files?"
That doesn't really always equate to something good, especially if it's something that is relatively benign, like a CSS folder. Very rarely is there ever going to be anything sensitive in there, but you do need to be aware of it, so you can fix it in case you have an intern or even somebody super senior that goes to drag and drop, and puts it in the wrong folder. There have been findings that we've reported where, "Hey, we found passwords in a docs folder. That's not good. You need to make sure you don't do that." So, those become two findings. So, it really became, you know, out of survival and scalability, what do we need to automate? So, it is those low-level, medium-level "this is an exposure that you need to fix, this is a system configuration problem" findings, and we're going to save that harder follow-on work for the operator cycles. We let our analysts make sure the target's in scope, and then our operator will do the work to make sure that, "Hey, here's the full business impact of what we found in these files." As we continued to grow, around 2022, we started automating subdomain takeovers. Again, this is one of those vulnerability types that doesn't take a lot of time. But in a customer lifecycle with Cosmos, you get onboarded into the platform, and my analyst team goes out and identifies all of your attack surface that they can find. They use open-source intelligence, they use publicly available information to figure out, "All right, this person is in your org. This person has registered this domain, through WHOIS information. So, we believe this domain follows." We do this work, and we would always get this huge spike of trashed DNS records, stale DNS everywhere, domains that were for sale that used to be registered and had clear ties to a customer. Or just, again, misconfigurations where you have takeoverable subdomains. Because we were at a point where we were getting a lot of new customers coming in, we looked at that and were like, "All right, let's prioritize subdomain takeovers." So, we spent some cycles to report it out as fast as we could, because there's this other entity out there that we're working alongside called bug bounty. There are a lot of bug bounty organizations out there that are also trying to help keep customers safe. So, we bring in mature, validated findings. We have quality assurance checks, where there's a lot of - not a lot of bureaucracy, but there are a lot of checks to make sure that everything you get still meets that Bishop Fox standard. Whereas with bug bounty, there's not always that. It can be five people just trying to get the first report in, so they're not considered a duplicate. That's how we've driven our automation approach. It's, "All right. What are the low-weight things we can knock out?" Then, through a customer lifecycle, what are the high-impact things that take up too much time that really don't need to be there? If I can write a script that can go out and validate this vulnerability, do I need to go look at that? Or can I use my creativity somewhere else? That's what we always try to lean back on: is this a creative exploit, or is this just something that, with enough time and repetition, we can train and build a good enough tool that can tell a customer about it, so that they can go fix it? [0:15:38] GV: Yes.
I think, for a lot of the listener base, the subdomain takeover I think is super interesting, and also very relevant given some news that came out recently around a company called watchTowr, who's actually based in Singapore, being able to take over the .mobi WHOIS records, I believe. Can you elaborate on why was subdomain takeover so important for Cosmos to focus on, and what does that even mean, and what can someone do if that is brought into effect, I guess? [0:16:10] MG: Yes. It's not the first-order, it's the second-order effect. If I was hosting something at clients.example.com, and then, for whatever reason, I misconfigured it, or the hosting provider that I was using lapsed, and clients.example.com instead gets hijacked or taken over by somebody else, it can now become a phishing angle or a way to harvest credentials from customers that are trying to log in. So, it's bad in that it hurts the customer - it would hurt our customers' public perception - and we're making sure that we're keeping our customers' customers safe. We try to approach it from that angle. [0:16:51] GV: But when we talk about, I guess, subdomain takeover, is there a reason why a subdomain is more vulnerable? Or say, if you have access to the DNS of a domain, then you can just create whichever subdomains you choose. So, are we talking about both effectively domain takeover and subdomain takeover, or how does that look? [0:17:11] MG: That's a great call out. So, it's both. Like Azure subdomain takeovers - you're not going to be able to take over a domain that points back to Azure. But if they misconfigure their Azure instance, then it would be possible, right? Given that the right things appear when you do a dig, you're able to investigate, "All right. This is vulnerable because of the configuration that they have," and then you can do work through the API gateway to take it over. There are other applications where it's a third party that you're hosting on. If the third party is vulnerable or you didn't complete a setup - Shopify is a great example. If you don't complete a Shopify setup, you're able to come in there and scoop it up. Now, that's your subdomain, congratulations. So, there's a lot of examples out there, and they kind of run the gamut of, yes, they are harder because you actually need the domain. That's why we also report on, "Hey, this domain is for sale." It used to belong to you. It would make sense that either you make it very clear that it no longer belongs to you, or you go out and you purchase it just defensively. There's a subset of offensive security called brand protection. That's not something my team does, but our customers ask us about it. We try to lean in and help where we can. But because it's more defensively focused, we don't do it. But we're able to say, "Hey, the common things to look for are typosquatting, off-by-one characters, those kinds of issues." [0:18:33] GV: Yes. That's a very good way to explain that. So, when we think about - you mentioned having enterprise customers, and if you go on bishopfox.com, you can see a whole bunch of names there. How do you approach that, when it comes to companies with such large attack surfaces? Whether you maybe want to bring out any examples that you can, or just sort of talk at a high level about a sort of industry. I've worked on attack surface, but I'd say probably the attack surface of fairly small companies.
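To make the dangling-record condition Mark describes above a little more concrete before the conversation moves on, here is a minimal sketch (not Bishop Fox's tooling) of the classic precondition for a takeover: a subdomain's CNAME still points at a name that nobody serves any more, on a provider such as Azure or Shopify where someone else could claim it. It assumes the dnspython package; the hostnames are placeholders.

```python
# Toy dangling-DNS check: flag subdomains whose CNAME target no longer resolves.
# Assumes dnspython (pip install dnspython). Purely illustrative, not production logic.
from typing import Optional

import dns.resolver


def dangling_cname(subdomain: str) -> Optional[str]:
    """Return the dangling CNAME target if one is found, else None."""
    try:
        answers = dns.resolver.resolve(subdomain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
        return None  # no CNAME at all - nothing to take over via this path

    target = str(answers[0].target).rstrip(".")
    try:
        dns.resolver.resolve(target, "A")  # does the CNAME target still exist?
    except dns.resolver.NXDOMAIN:
        # The record points at a name nobody answers for any more: a takeover
        # candidate if the provider lets anyone claim that name.
        return target
    except dns.resolver.NoAnswer:
        return None
    return None


if __name__ == "__main__":
    for name in ["clients.example.com", "shop.example.com"]:  # placeholder subdomains
        hit = dangling_cname(name)
        if hit:
            print(f"{name} -> {hit}: CNAME target does not resolve, investigate takeover")
```

In practice a check like this would also fingerprint the HTTP response, since many takeover-prone services return a recognizable "unclaimed" page rather than a plain NXDOMAIN.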
I mean, maybe the largest I think I worked on was Singapore Airlines. It was large-ish, but I wouldn't say, for probably the kind of cases that you work on, it would be defined as large, actually. We did find a couple of interesting things, like tons of staging sites and that kind of thing. To your point, it's not that critical, but there were various things where you said, look, if someone is looking for as much information as possible to try and, say, spoof being Singapore Airlines, they've got a ton of resources now, so you might want to just lock that one up. Talk to me about maybe some of your largest attack surfaces and kind of what they actually were - because I think Bishop Fox talks about sort of the needle versus the haystack. We find the needle, not the haystack. What were some of the needles, I guess, in enterprise? [0:19:46] MG: Yes, that's a great question. Our largest attack surfaces are six-to-seven-digit target counts. So, a target is a scheme, an IP or domain, and then a port. So, HTTP on example.com, and then HTTPS on example.com on 443 - that would be two targets. We look at it as a whole, though, because we've seen, through investigation and the reporting that we've done, that port 80 and 443 may serve up different information. So, we want to take that unique approach to each one of them. The way that we find that needle in the haystack goes all the way back to my analyst team. So, they are the ones that are tasked with identifying attack surface. We do this breadth first, and then we go depth. We start with breadth: customer A has a lot of subsidiaries, so let's go out and identify each one of those. We use things like SEC filings, if it's a U.S.-based company. If it's not, we get to research, all right, how does Germany do their filings for which parent owns different companies? So, it's a lot of fun investigative work there, to figure out, this customer that has the service actually has 10 other companies underneath it. We do this iterative approach, until we get to the point of, all right, we've identified all of the main companies that they would own that have a domain. From there, we start using reverse WHOIS information to figure out, all right, this company generally uses these email addresses, or they have this as their org name or their tech name, and we're able to search back through current information as well as historical information, to figure out, "All right. Did they have these domains, and then they privacy-protected them?" We're going to bring those in and have a conversation with the customer, to say, "We found these, are they still active? Because they're private, and we don't want to include them if they're not yours. But if they are yours, we do want to include them." We try to get that whole approach of - we call it domain-based investigations. We're going to go out and, yes, we're going to cover your CIDR blocks. We're going to research ASN filings and make sure that we're grabbing /24s or /20s, and bring those into the platform. But we really want to make sure that we're identifying your domains, because that's, again, how most of the things talk to each other on the Internet. From there, we're going to do subdomain enumeration with lots of different tools. The cybersecurity community is not shy when it comes to creating a tool and throwing it out there. Then, we get to go through and figure out, is this useful, or is there a nugget in here that's good that we can take and make better?
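As a rough illustration of how enumeration feeds target counts under the scheme-plus-host-plus-port definition Mark gives above, here is a toy, standard-library-only pass (hypothetical hostnames, not Cosmos code): take candidate subdomains from enumeration, keep the ones that resolve, probe 80 and 443, and count each reachable scheme/host/port combination as its own target.

```python
# Toy illustration of "targets": a scheme + host + port, counted separately.
# http://example.com:80 and https://example.com:443 are two targets.
import socket

CANDIDATES = ["www.example.com", "staging.example.com", "clients.example.com"]  # placeholders
PORTS = {80: "http", 443: "https"}


def resolves(host: str) -> bool:
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


targets = set()
for host in CANDIDATES:
    if not resolves(host):
        continue  # a real pipeline would retry later rather than discard forever
    for port, scheme in PORTS.items():
        if port_open(host, port):
            targets.add((scheme, host, port))

for scheme, host, port in sorted(targets):
    print(f"{scheme}://{host}:{port}")
print(f"{len(targets)} targets")
```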
In some instances, we'll go back and do a pull request to help the community. In other instances, we've built a lot of things in-house where, "Hey, this works really well, and we're going to do this for our service to give back to our customers, to make sure that we're finding as much as we can." We let the platform help with reachability checks, resolving checks, so we never cut off a target too soon. We give a target several chances to become validated and become something that we look at. So, it's a really long-winded way of saying, we do a lot of prep work to build out a haystack that is less haystack and more needles. I know if I'm looking at something, it's definitely theirs. Then, from there, I leverage my TE team - they're my threat enablement team - and they create scanning engines, they create things that we're going to go look for that are threat-informed. We use the CISA Known Exploited Vulnerabilities list. That is one of our driving forces, because customers care about that. I may not care about this thing that a security researcher found, because it's probably not going to be out there, but I know that my C-suite or my senior executives are going to be asking about these things. Again, we always want to partner with the customer, meet them where they are, get those baseline things taken care of. Then, we can start working on, oh, and by the way, here's this, and here's this, and here's this, and we continue to get better that way. So, it really goes back to good attack surface discovery, and then looking for the right things, so that when the operators pick it up, they're guaranteed to have a needle. [0:23:42] GV: That makes a lot of sense. I mean, when it comes to the, I guess, continuous aspect of this and sort of continuous - I'd almost like to analogize with being proactive versus reactive, in the sense of, you're able in theory to sort of, as it sounds, continuously understand the attack surface, and I guess, match that up with, especially, any newly declared vulnerabilities. How does that work? I mean, it must be a lot of - I'm just talking in total layman's terms here - but a lot of filtering going on where you're like, "We looked at this yesterday, so we're not going to look at it again today." How does that work here? [0:24:17] MG: Definitely. That is one of the hardest things that we've had to solve for. You know, we say that keeping attack surface under control requires constant vigilance. It is a whole-team effort. We do that with my threat enablement team - they track emerging threats. So, again, we use the CISA KEV to help guide that. But we also look at things like Microsoft Patch Tuesdays. What are the things that they're reporting on that may not be a news article yet, but will be once other people start researching? Then, you brought up watchTowr earlier - they do some great write-ups. So, it is looking across the cybersecurity landscape to figure out who else is doing really cool research. I know at Bishop Fox, we frequently do blog posts - I have to shout out the capability development team, they're awesome. They're always researching those next things. So, we really try to leverage what they're finding, as well as what other researchers are putting out there, to figure out - even if there's not a proof of concept and an exploit to go with it - what are the versions that we need to identify? What are the things that we need to tell customers about? That's where that continuous cycle comes in.
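Since the CISA Known Exploited Vulnerabilities catalog comes up here as a driving force, a small sketch of the kind of prioritization pass it enables: pull the catalog and flag any observed CVE that appears in it. The feed URL and JSON field names below reflect CISA's published catalog at the time of writing and should be verified before relying on them; the observed CVEs and hostnames are hypothetical examples, and this is a sketch, not Cosmos's implementation.

```python
# Toy prioritization pass against the CISA Known Exploited Vulnerabilities catalog.
# Requires the requests package.
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"


def load_kev_ids() -> set:
    """Download the KEV catalog and return the set of CVE IDs it lists."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}


# CVEs matched on a customer's external attack surface by scanning/fingerprinting
# (hypothetical inputs for the sake of the example).
observed = {
    "CVE-2021-44228": "vpn.example.com",
    "CVE-2019-0708": "legacy.example.com",
}

kev = load_kev_ids()
for cve, host in observed.items():
    flag = "KEV - prioritize" if cve in kev else "not in KEV - normal queue"
    print(f"{host}: {cve} ({flag})")
```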
We're continuously looking for attack surface to bring in. We're scanning customers to make sure that we have the right picture of their attack surface. Because if we're only scanning once a month, that's 30 days where something can change. So, we need to scan more frequently than that. We need to make sure that we're staying aware of the changes and the nuance of versioning, to make sure that if an emerging threat drops for version 5.2, we're not going out and reporting to customers on 5.1 or 4.8. Because again, that's not delivering a good service, and we want to keep this high signal-to-noise. If you're getting a notification about an emerging threat, it is, "Hey, this is important. Try to do this before your Friday kicks off." Which is unfortunate, because a lot of emerging threats drop on Fridays. It's a classic joke. [0:26:03] GV: Humans are humans. [0:26:04] MG: Humans are humans. They've got to work until Friday, but that's how we get after it. Yes, there's a lot of filtering. With AI and LLMs becoming a thing - computers are good at talking to other computers. So, knowing that a CVE comes in, and it has a CVSS score that says, "This is a local privilege escalation," we can use an AI to help - it could be a simple regex, but we are leveraging LLMs to help make sure that, all right, why is it a no? That way, if a customer asks and they give us a random CVE string from 2022, we're able to look at that and say, "Hey, we're not going to cover that, because that's not external unauthenticated. That's a local privilege escalation. So, you're going to care about that once they're already on your machines. That's not something that we're going to identify from where we sit, externally at scale." [0:26:57] GV: I think there's mention of a sort of live collaboration feature on the platform. I think a lot of what you've just been speaking to, the client needs to be involved to be able to - because, I think quite a few things there, you say, we know it's yours, but perhaps, can you clarify X, Y, Z? From my experience, we had a scan done by someone at a past company, and they found a Notion instance with the same name as our company, but it wasn't ours. But they absolutely thought it was ours, and that platform did sort of enable me to say, "That is not ours. I can guarantee it." Is that what the collaboration aspect is? [0:27:35] MG: Yes. So, our platform allows customers to do that with or without us. Through the platform, they're able to see their targets and say, "Actually, those things don't belong to us." We're able to not scan those things and not report on those things. But we always tell our customers, "Hey, we work best from an informed perspective. If you want to get the most out of this service, lean in, because we're here to help - we want to keep you safe." The best way to do that is open communication. We've had some customers come in with folded arms, like, "Hey, we're not going to help you, you figure it out," and we'll still deliver value, and we're still going to do a good job. But those customers that meet us where they are, and are able to say, "This is everything we think we own, what do you see?" That is always a better conversation, because we're able to say, "Hey, yes. We found all of those things, but you've missed this section or this company business unit that you may not know you own yet. But according to the news and lawyers, you do own it."
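Circling back to the triage Mark mentioned a little earlier - screening out CVEs that are only exploitable locally before anyone spends cycles on them - the "simple regex" version of that filter can be sketched against a CVSS v3 vector string. The CVE identifiers below are hypothetical, and real triage would layer far more context (and, as Mark notes, LLM assistance) on top of this coarse first pass.

```python
# Toy CVE screen: given a CVSS v3 vector string, keep only issues that are even
# plausible from an external, unauthenticated position (network attack vector,
# no privileges required) and drop local privilege escalations.
import re

VECTOR_RE = re.compile(r"AV:(?P<av>[NALP])/AC:[LH]/PR:(?P<pr>[NLH])")


def externally_relevant(cvss_vector: str) -> bool:
    m = VECTOR_RE.search(cvss_vector)
    if not m:
        return False  # unparseable - leave it for a human (or an LLM) to look at
    return m.group("av") == "N" and m.group("pr") == "N"


samples = {
    "CVE-2024-0001 (hypothetical RCE)": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    "CVE-2022-0002 (hypothetical LPE)": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
}
for cve, vector in samples.items():
    verdict = "cover externally" if externally_relevant(vector) else "skip: not externally reachable"
    print(f"{cve}: {verdict}")
```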
[0:28:28] GV: Why are some customers - I mean, I like that sort of analogy, or not analogy, but coming in with folded arms. Why do they have that attitude? [0:28:35] MG: Security's been around for a while. I think we're at a stage right now where we have to work - offensive security has to get back some goodwill. Because it is so easy to walk in, punch somebody, and be like, "Hey, go fix it." Frequently, that's what it used to be like. So, for people that have worked in the industry long enough, it is like, "I don't want to give you anything, because if you find it, I'm going to get in trouble for it." But instead, if it is, "Hey, we're going to find it and we're going to make sure that we work with you," it goes better. So, part of our investigation lifecycle - it should always end with a remediation test. We're going to work with you until it's fixed. I think that is the one thing that we try to shout out to customers when we get to talk to them. It's like, "Hey, customers will fail a retest and we'll give them specific reasons why." We'll jump on calls with them to help - maybe not troubleshoot, because we're not developers, but we are able to attack it with our mindset, and help them work through the problem. So, most customers that come in with arms folded, by the end of it, they see the value, they see my team and how much they care, and we get through. [0:29:42] GV: I think that's a really good call out. It's just, yes, in days gone by, pen testing was kind of done by a team that maybe was never seen before. I don't know what happens if someone gets this report on their desk, kind of saying, like, tons of problems. Quite frankly, they're just like, why did we have to find these problems? Could this just have happened next month, or next year? I'm about to leave. [0:30:04] MG: It can be the next guy's problem. [0:30:06] GV: Exactly. I think it's just, as you call out, it's changing the mindset of what a service such as Bishop Fox can do, which is, it's collaborative, end-to-end. Security, if it's the right security partner, should be a collaborative exercise, end-to-end. Obviously, confidentiality is just such a huge part of that, which is saying, "Look, we'll be as open as we can as a client with you, but at the same time, we have to trust that this stays within the walls of our relationship." Even working at a security company previously, we had a third-party security company come and audit. It's just the way it works, but if one company is auditing another security company, you have to really trust that. I think it's a really good call out. One thing I'm interested in is, within the industry anyway, it's no secret that it's a very stressful job, like anything to do with security, and finding vulnerabilities, or helping remediate. Would you say introducing Cosmos within Bishop Fox has sort of enabled - I'm not saying it was a stressful environment specifically at Bishop Fox, just more in the industry - has Cosmos enabled it to become a more manageable profession, I guess? [0:31:20] MG: I think the short answer is yes. The long answer is, as always, it depends. When you get attack surfaces for extremely large customers, you see the amount of work that exists. We call it our backlog. That's a pretty common name for the way to refer to things. But there's always that added pressure of - did we prioritize all the vulnerabilities that we see?
Because at the end of the day, because we're not a scanner, and we're not fully automated, and we want to keep that bar - to tell a customer to go fix something, we want to go look at it first - there's a backlog of work. So, it's the stress of the unknowns. I think we prioritized all this right, and I know that we're working on what we think is the most important thing. But what if there's this random offshoot, and there's actually a critical vulnerability that may be hidden as a low? The way that we approach that problem is, all right, let's attack it from all sides. I have my operator teams going priority first, but then we also look at things like, "All right. What's our oldest potential flag? What's the oldest thing out there? Let's go make sure that we're staying in line and not letting things get too old." Because the worst thing that would happen is something is vulnerable - there are passwords out there - and then they take it down on accident, but those passwords don't get rotated. That's still bad. So, we want to make sure that we're attacking it from both angles. That helps with the stress at night, to know that, if it's not the most important thing, we're going to get it on the backend. Then, we're going to be able to tell customers, "Hey, we identified this a while ago, but because of priority thresholds, we hadn't gotten to it. But now, we did. Now, we can work together to make sure that it is remediated." And we're able to give those timestamps to customers, like, "Hey, if you are really that concerned, you can spin up an IR, an incident response, and make sure that nobody beat us to it." That is one of the things that we look at: when we land remote code execution, we're looking to see, did anybody else beat us here? Because that is incredibly important - it helps build the picture of risk for the customer, but it also gives business impact. If I'm throwing an exploit that puts a file on disk, and I get there, and there's 20 other files, it's, "Hey, you need to fix this tomorrow." That's how we approach it. Yes, it's still stressful, but I think it is more manageable knowing that we're trying to approach it the best way we can. On a traditional consulting engagement, you scan and then it's kind of on you to figure out what looks the most important. Because we're leveraging five-plus years, and lots of operator and analyst input, we're able to say, we believe, as a service, we're looking at the most important things. It takes that stress off the individual and really puts it on the shoulders of the leaders to make sure that we're making those right choices. [0:33:54] GV: Yes. It's not as if Tony Stark is not stressed, just because he's inside the Iron Man suit. [0:33:58] MG: Exactly. I like that. [0:34:00] GV: I think that's a good call out. I mean, technology will help. At the same time, as you've called out, now getting to almost have this sort of X-ray vision can lead to maybe more thinking about what to prioritize, et cetera, and that can potentially just be a different stress point. Moving on to kind of the final area before I ask about some future things. Regulation and things like SOC 2 - SOC 2 has become so, dare I say, popular. You've got companies like Vanta who've made it, in theory - I wouldn't say easy to get SOC 2, but they have definitely streamlined the process, big time. How has that sort of impacted Bishop Fox generally?
Has that sort of been a great driver of business, or has that actually been more of a problem, having to think about how to adapt things like Cosmos to be in line with regulation drivers? Or, how's that sort of evolved? [0:34:53] MG: Yes, that's a really good question. A driver for business, I don't know if I can speak to that. That might be more of my sales team's perspective on whether this is driving them to help with customers that need to be SOC 2 compliant. I know for us, there was the work that went into it to make sure that the Cosmos platform and all of the operators, analysts, and engineers are doing the right things. I know that that was good for us. The common saying is, everybody's building the plane as they're flying it. But I would say, SOC 2 forces you to figure out, "Hey, here are the non-negotiables, and how do we make sure we're doing this right, so that we can continue to fly and do the right things?" I don't know if that completely answers your question, but that's kind of my perspective on it, just with my limited interaction with the work that went into it. [0:35:40] GV: Yes, for sure. I think the internal aspect is very interesting, because again, perhaps it's not always clear to maybe non-security-industry people that, just because you are a security company, it doesn't automatically mean that you've got SOC 2 or something. You should still go look and check. [0:35:57] MG: That is correct. It was a lot of paperwork, a lot of interviews, a lot of making sure that we followed best practices. [0:36:02] GV: Yes, exactly. I guess I was just also interested with, let's just say, current clients, as opposed to anything to do with driving new business. But with current clients, do you get questions to do with, "Well, if we don't fix this, does that make us non-compliant with SOC 2?" Do you get those questions, or does that actually go to, I guess, that would go to Vanta or something? [0:36:23] MG: I think those questions get driven internally by the customer's infrastructure and IT teams. We're able to say, this is a problem, and you have PII or PHI exposed. So, to me, that means you're already failing, and you've got to go fix that. Some of it is, they're just not aware that it's out there. None of our customers, currently or historically, have asked, "Hey, does this vulnerability or does this finding affect our SOC 2 compliance?" Instead, it's just, "Hey, how do we help take care of our customers' data?" [0:36:53] GV: Yes. Okay. Makes a lot of sense. So, just kind of, I guess, looking ahead, what can you share about sort of how - where Cosmos is going? What are you guys thinking about in terms of - when I talk about new features, I guess, the customer kind of sees features, like they have [inaudible 0:37:08] things, they have a UI that they interact with. But I mean, a lot of the development, I imagine, is kind of more under the hood, and what you actually expose to the customers doesn't maybe always look new as such. But what can you share about that? [0:37:20] MG: This is actually a really great question, because we are at this precipice of moving off of our old Cosmos platform, the one that has gotten us to where we are. It's successful, it's able to generate information that my team can pick up. We work the cases, we do that work, and we report it out to customers. Once we launch into the next phase of our Cosmos evolution, there's going to be a lot more push for self-service for customers.
So that they can go out and take the data that my team has found, and dissect it, and look at it in different ways - in ways that make sense, where they're able to determine confidence values or why this asset exists in their endpoints. Like, "Why are you showing this to me?" All of that confidence information is going to be made readily available to the customer, so that they can self-service and figure out, "Oh, this subsidiary, they're making it too easy for everyone to go out and find our assets." It's kind of two answers for reporting. So, Cosmos is kind of the portal, and the service, and everything that goes into it. Then, we have different service lines. Our CASM, which is kind of the thing that started all of this, is our Continuous Attack Surface Management. That's going to continue to grow and improve. Like I said earlier, we focused historically on web apps, and FTP, and SSH. We're going to continue to grow that scheme and that list of things that we're not only showing to customers through the portal, but that we're actually providing investigations on. You've got anonymous login for this service that you should definitely turn off - we do that for FTP and we do that for other logins - and we want to make sure that we continue to grow that, so that the Cosmos CASM service is kind of that, here's everything from a threat-informed perspective. We recently just launched our Cosmos application pen testing - we call it CAPT. That's web app pen testing that is delivered with some upfront testing, similar to a consulting engagement. But then, we continue to test those assets through the lifetime of the contract, looking for things like emerging threats specific to frameworks, and doing tail testing to make sure, "Hey, has anything changed from the last time we hit this application really hard?" So, that's our CASM, our CAPT, and then we have a Cosmos external pen test. That's meant to be a point-in-time external pen test, similar to consulting, but delivered through our Cosmos platform, where we take everything that my analyst team on Cosmos found, and we use that as the scoping. So, the future of Cosmos is continuing to be operations-led discovery, relying on the customer to work with us, and partner with us, and provide feedback, but continuing to grow and improve what we look for and what we report. Then, there's a whole new world out there with AI and LLMs of, how do you discover things faster? How do you help rip through SEC filings, so that you can generate that subsidiary list faster with better confidence? [0:40:23] GV: Yes. I think we don't have time to dive into, I guess, what could be done with LLMs and AI today on this one. But I think you've mentioned it a couple of times today, which is a good call out. Which is effectively replacing things like regex with LLM lookups. Now, obviously, from a cost perspective, that's expensive. But at the same time, in many areas, that's going to give actually much clearer data, and be able to react effectively faster than a regex. So, it must be quite an exciting time for you guys, thinking about integrating those. [0:40:55] MG: A hundred percent, yes. I'm very much looking forward to it. Then, I'd be remiss not to say, as we continue to grow, I think emerging threats are going to be the thing that continues to set us apart, and set continuous pen testing apart. How do you stay up to date with what threat actors are doing?
Not only with the old things that we know work, but also the new vulnerabilities and the new things that are dropping, through either white hat security researchers, or this APT got busted by a government and now that TTP - that tactic or technique - is now available. How do you leverage that to keep customers safe? I think that's going to be super important as we continue to grow. [0:41:33] GV: Definitely. Well, just as we wrap up, I sometimes ask one or two questions more back to you yourself, Mark. So, what do you know now that you would like to have told yourself when you were starting out in this industry? [0:41:48] MG: Man, that's tough. You know, it's tough because I started as an officer so many years ago, where we were trained to lead and think a certain way. A lot of that has done me well, but it's also good to be technical, and to be able to work alongside some of the sharpest people I've ever met in my entire life. So, something I would tell myself going back 15 years is, embrace that hard training and know that it's worth it. Because from my experience, those individual contributors will lean in and partner with you as a leader better if you're able to speak their language and understand the problem set that they're trying to solve. I've had a lot of leaders and managers in the past who, because they didn't understand the difficulty, would oversimplify the problem set. I think that's something that leaders need to be aware of. Then, for individual contributors, it's to continue to embrace the hard thing, because that's where you learn the most. [0:42:44] GV: I love that. I really like that. I can definitely side with that - having not gone through any, I guess, military background myself, but I was at a company that was started by U.S. military operatives. I think it was just having that mutual understanding that would drive things forward very well. I don't come from a military background, but I understand a lot is learned there. Meanwhile, I come from a technical background. I hope that sort of a lot was learned on that side as well, and I think the collaboration and the combination is what made that work. So, it was a great call out. [0:43:16] MG: For sure. [0:43:17] GV: Mark, thank you so much for coming on, giving up your time in an evening over in North Carolina today. I really appreciate it. [0:43:22] MG: Yes, sir. Thank you again for having me on, man. This was a blast. [0:43:25] GV: Yeah. Thanks a lot, and we'll catch up soon, I hope. [0:43:27] MG: Yes, sir. Thank you. [END]