EPISODE 1660

[00:00:00] ANNOUNCER: Ben Huber is a security engineer who has worked at companies including Crypto.com and Blackpanda. He joins the podcast to talk about his career, penetration testing (or pen testing), attack vectors, security tools and much more.

Gregor Vand is a security-focused technologist and is the Founder and CTO of Mailpass. Previously, Gregor was a CTO across cybersecurity, cyber insurance and general software engineering companies. He has been based in Asia Pacific for almost a decade and can be found via his profile at vand.hk.

[INTERVIEW]

[00:00:48] GV: Hi, Ben. Welcome to Software Engineering Daily.

[00:00:51] BH: Well, hello, Gregor. I'm very happy to be here. And thanks for having me.

[00:00:54] GV: Ben, you've been a security engineer really since entering the workforce almost 10 years ago. Could you take us on a quick journey back? How did you get into engineering? And how have you ended up in security engineering?

[00:01:12] BH: Sure. I didn't start all that early in engineering, and with software I would say I was a fairly late bloomer. For example, my high school didn't really have computer science courses. It was only in university that I was exposed to information systems.

I started out with a more generic business degree, but soon found that the most interesting part of a business, for me, was how it operates in a digital space. I gravitated towards an information systems degree, which mixes computer science with some of the more business-oriented aspects.

From there, I went more and more technical, trying out different roles within IT, such as IT auditing, and eventually landing my first job as a penetration tester. For the first few years of my career, it was more of a consultant role where we would be assigned to a specific project. Maybe pen testing a client for one to two days, sometimes two weeks, or even on a monthly recurring basis.

I soon realized that there's only so much you can learn as a consultant, looking from the outside in, versus being part of an internal team and having access to all the code, all the security settings, and all the infrastructure configurations. I made the switch from consulting to in-house around the halfway point of my career so far. That was at Crypto.com, where I moved into an internal role.

In that internal role, we were almost like an internal consulting team for the company at large. When there was a new feature or a new product coming out, the product team would reach out to the application security team and ask for a penetration test. In a similar way to the consulting experience, we would provide them with a report covering everything a penetration tester would do as a consultant, but then also more.

We would be familiar with the code base. We would perform white-box approaches, and we would see exactly where a particular vulnerability occurred. I think that's a big difference between being a consultant and being in-house: as a consultant, you will almost never have full access to the system. You won't have access to important documentation, like API documentation or product requirements.

Being in-house, we would gain a full, or at least deeper, understanding of the application, and that allowed us to have a better perspective on its security.
Because when we're trying to secure something, we first want to fully understand it. If we don't understand how the system works, or we don't know its particular nuances, it becomes harder to identify the edge cases we should look out for in security testing.

In that role, we had a very clear separation: there was the security team, which reported to one C-suite executive, the CISO. And there were the tech team and the product teams, which reported to the CTO and, beyond that, to the strategic business side.

What that separation meant was that even though we were giving out security suggestions, we weren't able to make fixes directly. For the owner of a code base, a particular tech lead say, we would only be able to provide suggestions. We could write out sample code for them. But we were not allowed, partly to keep things more secure, to make security changes directly to the code base.

We would say, for example: here's an input validation issue. You're allowing users to provide anything as input for that particular parameter, and here's our suggested regular expression to fix it. But because we wanted that clear separation between security and tech, we weren't able to just make a pull request and get them to merge it in. We'd have to jump through a few hurdles to get agreement on how the fix should be implemented, and suggest it to the tech side. They would review it and implement it in their own way, which sometimes differed from our suggestion as well.
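To make that input-validation example concrete, here is a minimal sketch of the kind of allow-list check a security team might suggest. The parameter name and the pattern are invented for illustration; they are not the actual rules used at Crypto.com.

    import re

    # Hypothetical allow-list for a "username" parameter: 3 to 30 characters,
    # letters, digits and underscore only. Illustrative, not a real policy.
    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,30}")

    def validate_username(value: str) -> str:
        """Reject anything that does not fully match the allow-list pattern."""
        if not USERNAME_RE.fullmatch(value):
            raise ValueError("invalid username")
        return value

    validate_username("alice_01")    # passes
    # validate_username("alice'--")  # raises ValueError

The reason to match what is permitted (an allow-list) rather than block known-bad strings (a deny-list) is that anything unanticipated fails closed.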
That was another change when I joined Blackpanda, my current role, where I now have write access to the repos as well. If I find a security issue in a code base, I'm not only able to suggest what I think is the ideal solution. In some cases, I can make that fix or patch myself.

So it's been a journey of multiple steps: starting out with very little computer science background, first looking at clients' infrastructures and applications from the outside perspective as a consultant, and then going deeper and deeper into the space by moving into internal roles instead of consulting ones.

[00:06:42] GV: Yeah, very interesting. If we go back to the pen testing side, how would you say that role has evolved over time? Or maybe it hasn't? You talked a bit about how at Crypto.com you had quite clear separations, and at Blackpanda there isn't as much separation. Would you say that's an evolution of the role? Or is it more about how companies choose to do pen testing in general?

[00:07:13] BH: I think there are probably multiple sides to it. In some ways, it's a good trend. In security, we always talk about moving security to the left of the whole development life cycle, rather than just appending a pen test when the project is about to be deployed to production.

As part of an internal security team, we have more and more say on security in the earlier stages of a feature's development, or the development life cycle in general. That's definitely a positive thing, because we can identify security issues earlier. Identifying them earlier also makes them cheaper and easier to fix.

For example, at Crypto.com, we often joined the agile ceremonies, things like backlog refinement, where we would be part of the design stage of a feature. Before any implementation actually happened, the security team would often be invited, especially for more sensitive features, to advise the product team and the tech team on what could potentially become security issues down the line.

It's almost like using our pen test background to predict security issues and make them part of the security requirements before any implementation happens. I think that really helps teams save a lot of time and effort. And I saw many times, in the other approach where security is only attached at the end of a feature's development, that we would end up with some fairly large security concerns, to the point where we might think a feature wasn't ready yet because the security risk was too large.

By bringing us in only at the end, everyone else has already signed off: the product team has signed off, the tech team is happy with all the testing, the QA team is happy as well. That makes us the blocker. By having penetration testing, and security in general, earlier at every stage of a system's development, it saves everyone time and avoids security becoming a blocker that slows everyone else down.

[00:09:37] GV: Yeah, certainly. "Shift left" in terms of security has come up as quite a big trend, an easy label to put on it, I guess. Are there any frameworks, or even products, that you've ended up bringing into that flow? Or is it really more what you've just been saying, that you, as a security engineer, get brought into the process sooner?

[00:10:03] BH: I think one more recent development is integrating some of the more traditional security tools, such as static code analysis or even dynamic security testing, into the CI/CD workflow. That way, we're able to find many of the more traditional security issues earlier on, maybe during the build process or during certain software testing stages. Having these automated tools is great, because when I first started out in penetration testing, we would use some of the same tools that are now integrated into many workflows, but we would still run them manually. Now that they're automated and the results come in automatically, it saves us a lot of time on the security side as well.

Many different tools can be integrated into this workflow. We can check for SSL security issues. We can check for common SQL injection or cross-site scripting vulnerabilities. And by making that part of the workflow, it frees up the security team to focus on things that are harder to automate, such as business logic flaws.

There are specific security tools for different languages that can be used as part of a specific workflow. But I would say static code analysis is probably the easiest to integrate. With tools like Bandit, for example, we're able to find security issues statically in the code base. With the dynamic tools, the results are still maybe slightly harder to fully rely on when they're part of an automated workflow. But it definitely does help.
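As a rough illustration of wiring a static analyzer into CI, here is a small gate script around Bandit. It assumes Bandit is installed and that the source lives in a `src/` directory; the fail-on-high policy is just one example of how a team might tune the signal.

    import json
    import subprocess
    import sys

    # Run Bandit recursively over the source tree, asking for JSON output.
    proc = subprocess.run(
        ["bandit", "-r", "src/", "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)

    # Fail the build only on HIGH-severity findings; everything else is
    # printed but does not block the pipeline.
    high = [r for r in report.get("results", [])
            if r.get("issue_severity") == "HIGH"]
    for f in high:
        print(f"{f['filename']}:{f['line_number']} {f['test_id']}: {f['issue_text']}")
    sys.exit(1 if high else 0)

The same pattern (run the tool, parse machine-readable output, apply a policy) is how most scanners tend to get wired into a pipeline, including the Nmap port-diff idea that comes up next.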
Basically, any kind of open-source security tool, or even commercial tool, can be integrated automatically as part of a workflow. Even something like Nmap for port scanning: I've seen that integrated as part of a workflow so that, for example, the Nmap results can confirm each time that there are no new open ports we're unaware of, maybe opened by accident by integrating a new library or feature.

[00:12:25] GV: Yeah. Very, very interesting. I think what I certainly saw when I was coming into the security field was that there are just so many tools, frameworks, and libraries that I, as a sort of regular engineer, had never heard of. And then there are all these others that, in the security engineering field, are the tool belt, the toolkit that everyone seems to know about and use. It's a whole extra world, which is always pretty interesting.

You mentioned one, Nmap, for example, that people might want to take a look at. Even if you're not in security engineering, it's a very interesting tool to play around with.

Let's maybe look at this contextually. You called out that one of your major roles was at Crypto.com. I'm sure quite a few of the listeners are familiar with that company; it was a crypto exchange. There must have been quite a few challenges there. Obviously, maybe some that you can't speak about. But are there some challenges and interesting problems that you had to solve there?

With digital finance, and especially exchanges, there's a general reputation for being targets of malicious actors. Did that heighten the role there? And what kind of additional challenges did that bring?

[00:13:47] BH: Definitely, though I think it depends on market conditions as well. What we typically saw is that when crypto, or the economy in general, was doing well, we'd actually see more cybersecurity attacks against our systems. When the market wasn't so good, attackers chose other targets, hoping for more profit.

But even then, at all times, we saw a variety of different attacks, or just malicious activity in general. They can be broadly classified by asking: is this attack trying to target the whole system, or is it targeting a specific user? The second category we would typically call account takeovers, where, as an attacker, I try to find a way to gain access to one user's account. If I know that a user holds a large amount of money, maybe they're a social media influencer in crypto who is always talking about their trades, then they become a fairly interesting target.

Our approach from the security side was to defend against both types of attacks: account takeovers, and attacks on the whole system that would target multiple users at the same time.

On the more traditional security side, there was more focus on protecting the system as a whole. Keeping everything patched, and preventing very common issues that would affect every user, like SQL injection, cross-site scripting and so on, where, in some cases, attackers would be able to gain access to a massive number of users' accounts.

But at the same time, we also wanted to make sure that each user account, and even each user session, was secure.
We spent a lot of time trying to prevent account takeovers. When I first joined, we were still working out how best to do two-factor authentication, for example, and we just kept adding more and more security controls from there.

Once we implemented two-factor authentication, we had to figure out the best way to store those secrets. How do we encrypt the two-factor authentication seeds? How do we protect them? And how do we verify requests coming from a user's mobile phone?

What we actually saw is that some attackers were somehow able to find a user's password and also obtain the correct 2FA code. That could mean they had already compromised the user's mobile phone; maybe the phone had been hacked. We had to think about those situations as well, to try our best to prevent account takeovers in any form.

Essentially, we wanted to be able to verify that each request, especially the important requests, really came from that user's phone, or really from that user. At every step we would make sure of that, even though it's a bit less friendly on the user experience, because we might have to ask them to enter their password again even though they're already logged in, or enter a 2FA code, or receive an SMS message. That added friction, and it definitely wasn't popular on the product side to have these additional security steps. But they were necessary in the earlier stages, especially while we were still figuring out the best way to secure a user's account.

In many cases, it's also a matter of thinking back to the different factors of authentication: what you know, what you have, and what you are. We integrated biometric authentication as well, and we ended up with this whole matrix of how you can really prove that you are that user.

For example, if you had a password set and 2FA set, but you also had biometrics set, fingerprint scanning and/or face scanning, then we could give you a better user experience, because we trust the biometric result more than a password alone.

It became very complex logic around which factors are really required to prove that you're the user, while at the same time reducing the likelihood of a successful takeover as much as possible. Where we eventually landed is that the best solution is probably something invisible to the user. If we can identify the mobile phone you log in with on the app, tie some kind of cryptographic key to it, and then sign requests, that's almost invisible to the user, but we're still able to verify that a request really came from that phone.

Those more invisible approaches reduce friction in the user experience while being based on cryptography and very well founded in security. I think that might be the ideal way of authenticating or authorizing requests, and it's an approach that many finance and banking applications use as well.

In a sense, we had a lot of overlap between the crypto space and traditional finance apps, not just in how they're built from the security side, but in the functionality and the user experience in general.
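A very rough sketch of that device-binding idea: a key pair is generated on the phone at enrollment, the public key is registered with the server, and each sensitive request is signed. This uses Ed25519 from the `cryptography` package; the canonicalization and field names are invented for illustration, and a real scheme would also need replay protection and key attestation.

    import json
    import time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the private key never leaves the device; only the public
    # half is registered with the server.
    device_key = Ed25519PrivateKey.generate()
    enrolled_public_key = device_key.public_key()

    def sign_request(body: dict) -> tuple[bytes, bytes]:
        """Device side: sign a canonical encoding of the request plus a timestamp."""
        payload = json.dumps({"body": body, "ts": int(time.time())},
                             sort_keys=True).encode()
        return payload, device_key.sign(payload)

    def verify_request(payload: bytes, signature: bytes) -> bool:
        """Server side: accept only requests signed by the enrolled device key.
        A real implementation would also reject stale timestamps (replay)."""
        try:
            enrolled_public_key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False

    payload, sig = sign_request({"action": "withdraw", "amount": "0.1"})
    assert verify_request(payload, sig)

To the user this is invisible, nothing extra to type, yet the server gets a cryptographic statement that the request came from the enrolled phone.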
[00:19:38] GV: Yeah. Very interesting. There are many factors here. One is that, ultimately, it's still a shared responsibility model, in the sense that you can offer all these different factors, multi-factor, plus biometric, plus something else. But unless you enforce that across all users, the user themselves has a responsibility to add as many as they would like. And if there is a problem, they have to accept that if they didn't add a third factor, say biometrics, the responsibility partly sits with them as well.

And obviously, we're seeing things like passkeys come in. There have already been a couple of episodes on SEDaily looking at passkeys from different angles. Things like passkeys are, in theory, there to try and clean up some of this factor-upon-factor-upon-factor situation. But the actual implementation is still quite difficult to unify across platforms right now, which is interesting.

[00:20:47] BH: I think what helped us as well was being more transparent with the user about why we need certain things, and what their current security status is. That can be a good consideration for applications being designed or developed: explaining the logic of how certain requests get authorized.

For example, in the crypto space, the request to withdraw funds to another wallet is typically considered the most critical or most sensitive request on the platform, so it's typically the most heavily protected. Around the time I left, we had a feature that would show the user their current security status: these are all the security factors offered in our application, these are the ones you have enabled, and here are some other security settings, such as the anti-phishing code.

It's almost a gamification approach, where some users want to get a full score on their security settings. By being transparent about what the security settings entail, and almost rewarding the user, it helped us get more and more users to set up the security of their account properly.

With the Crypto.com app, we also had something called Diamonds, which is almost like an internal currency that you can exchange for certain benefits. For example, if you read one of our blog articles, you would get a small number of Diamonds, and I think you could even use them to buy things in the Crypto.com shop, or get other benefits along the way.

By tying security to an approach like that, it helps motivate users to set up their account, and they get some kind of reward, so they feel there's an immediate, tangible benefit to having security set up.

[00:22:49] GV: I like that. Some of the challenges I'm working on at the moment basically involve how to make it easy and enjoyable for users to have the best security. And the challenge we're trying to solve is exactly what you've just been describing: most of the time it's just a chore. "Oh, please go and add this thing." Or, "Please, you haven't set up 2FA yet." It's really just nagging users.
I like that you actually took a different approach there, incentivizing with real benefits, so to speak. Or benefits that the user can feel, perhaps more than they feel the pure security side. I think that's good.

Let's switch gears a little bit. You're now with the company Blackpanda, which is based in Asia Pacific, a more purely cybersecurity and, indeed, cyber insurance company. And I believe, over there, you're working more on what's called attack surface. Quite a few of the listeners might not be familiar with what an attack surface even is, even though I think all developers should really be aware of at least what it is and how it affects them. Could you give a bit of an outline of what an attack surface is, and what it encompasses?

[00:24:07] BH: Okay, sure. In general, I would say attack surface is any asset, basically anything, that can be used by a hacker or an attacker to try to perform some kind of action. Even if it's not malicious. Even if it's just gathering information, for example. It doesn't have to be a vulnerability that allows you to access the server immediately. Even something that reveals a small detail of your system, or any chance for the attacker to perform any kind of action at all, I would consider attack surface.

The definition is really quite broad, especially when we're looking at attack surfaces from the internet's perspective. This mirrors how real threats target organizations. In the reconnaissance stage, for example, threat actors try to gain as much information as possible about the company they're trying to breach or attack.

Attack surface management, or attack surface scanning in general, tries to identify what public information is available about a company. What kind of systems are directly facing the internet and have services running? And any other information: what email addresses can be found through search engines or other public data? Any kind of data point, I would say, is part of that attack surface.

If I had to group them, there would be things like domain names, subdomains, email addresses, and IP addresses. On those IP addresses, we would also look at open ports. I would consider each open port on a server with an IP address to be one addition to the attack surface.

Once we have this information, we try to build a complete scan result from it. That almost paints a picture: this is what real threat actors are able to see from the outside, without being inside your organization yet. Without having phished anyone, and without having hacked into any system, this is what's already available to the public. Anyone with an internet connection can perform port scans, try to find your domains and subdomains, and try to learn as much as possible about your systems without ever directly interacting, I would say, with your staff or company.
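Picking up the point that every open port is one more piece of attack surface: a toy TCP connect scan needs nothing but the Python standard library. Real scanners like Nmap are far more capable; this is only a sketch, and port scanning should only ever be run against hosts you are authorized to test.

    import socket

    def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
        """Toy TCP connect scan: a port counts as open if the handshake succeeds."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    found.append(port)
        return found

    # A handful of common service ports, on a host you control.
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))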
[00:26:48] GV: Yeah. That's a good definition. I have worked in this space as well in the past. And one of the things that always got me was the fact that it's very easy to discover subdomains. Developers can often get a bit lazy with cleaning up subdomains of test servers, staging, UAT, et cetera. And the internet is basically littered with old systems that companies have no idea are still out there.

I remember we saw things from a major airline, for example: test websites they still had up that could, in theory, be used to gain information about how the current system works under the hood. Even things like design guides for this airline were out in the public, which might not seem dangerous, so to speak. But if you can figure out how to very accurately mimic an email from that airline, for example, then that's the makings of phishing.

So, one thing to be aware of for all developers out there: take a quick look over your DNS, and if you've got any old subdomains sticking around, just take 20 minutes and clean them up.

[00:28:00] BH: We often find exactly the case you talked about, where either old subdomains are exposed, or subdomains that the developers think are internal. Maybe they think their staging environment is internal. Maybe it used to be internal, but for some reason it's now externally facing.

We see that often. And there are many ways of enumerating subdomains, of finding out what the subdomains are. Given the top-level domain, we would use different methods. The traditional way would be performing almost a brute-force attack, where you have a word list of subdomains: staging, test, anything that's a likely candidate for a subdomain.

The security community now has lists that are hundreds of thousands of words long, where people keep contributing subdomains they find. There are many such lists available. But the brute-force approach is fairly slow, and it also puts quite a lot of load on DNS traffic. It's maybe not the preferred way now, unless you want to exhaustively find every subdomain that could possibly exist.

Another more useful and more efficient way, which is more popular these days, is looking at certificate transparency logs, often abbreviated as CT logs. These are public records of the certificates issued for a particular domain, and subdomains might be listed in them as well, for example when a certificate is shared between multiple websites. From there, both threat actors and the company themselves can find out what subdomains are potentially exposed.

There are a few free services and also paid services where you can query the certificate records for a particular domain, and we often find many subdomains that way. There are also search engines like Shodan, or even just Google, using Google dorks, sending specific Google queries, through which we're often able to find many subdomains as well. It all goes back to finding as many subdomains as possible and then resolving them to IP addresses, which make up the actual systems of the attack surface.

Another risk these days is subdomain takeovers. Say you're using a particular cloud service and you set up a subdomain there, so in your DNS there's a record, a CNAME for example, pointing to that cloud service. But then you're no longer using that cloud service, or maybe your subscription has expired, while that DNS record still lives in your DNS server. Attackers are then able to hijack the asset that was previously tied to that subdomain, launch a malicious website there, and use your subdomain as a phishing platform or a staging point for further attacks.
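As a sketch of the certificate transparency approach just described, the free crt.sh service exposes a JSON endpoint that can be queried for names logged for a domain, and pairing that with a resolution check gives a rough stale-subdomain report. The `requests` dependency is assumed, and a production tool would inspect CNAME targets properly rather than only checking whether a name still resolves.

    import socket
    import requests  # assumed installed: pip install requests

    def ct_subdomains(domain: str) -> set[str]:
        """Collect names seen in CT logs for a domain via crt.sh's JSON endpoint."""
        resp = requests.get("https://crt.sh/",
                            params={"q": f"%.{domain}", "output": "json"},
                            timeout=30)
        names = set()
        for entry in resp.json():
            for name in entry.get("name_value", "").splitlines():
                name = name.strip().lower()
                if name.endswith("." + domain) and "*" not in name:
                    names.add(name)
        return names

    # Names that no longer resolve are candidates for cleanup, or, if a
    # dangling CNAME points at a reclaimable cloud resource, for takeover.
    for sub in sorted(ct_subdomains("example.com")):
        try:
            print(f"{sub} -> {socket.gethostbyname(sub)}")
        except socket.gaierror:
            print(f"{sub} -> does not resolve (stale record?)")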
[00:31:11] GV: Yeah. It's very good to highlight the subdomain takeover side, especially because I've seen cases in the past where a company, for example, has its DNS actually inside, say, Wix. Now, this is not a criticism of the Wix platform. But at the end of the day, that's probably not where your DNS should be living, especially when, in those kinds of cases, access to that platform is shared amongst, say, a marketing team. If they want to be able to update a website, that's fantastic. But if your DNS is also in there, effectively your entire marketing team has the keys to that castle. And they probably don't even realize they have that access, so maybe they don't treat it quite as securely as they might if they understood what was going on. Ultimately, it's really someone else's responsibility to figure that one out and move the DNS out of something like Wix and into something much safer, for example Cloudflare or a simple dedicated DNS provider, that kind of thing.

How would you say your experience to date has helped now that you're actually working on an attack surface management platform, helping other companies understand their attack surface? How has your past experience come through to help make that platform what it is?

[00:32:34] BH: In many ways, it's interesting, because it's almost like coming full circle back to my first job as a penetration tester. That goes back to the point of how security testing has evolved since I started out in the field. It used to be fairly manual back in the day. But with things like attack surface management, some aspects of vulnerability scanning or penetration testing are now automated.

There's still a lot that can't be automated in pen testing, specifically on the business logic side within an application. But for things like scanning an external attack surface, there's a lot of overlap between the tools used in pen testing and the security scanning engines that we, and many other companies, are building right now.

In many ways, we're trying to automate as much as possible of the traditional reconnaissance and vulnerability scanning work into this attack surface scanning. As you said, some tools are very common in the security industry, like Nmap and Nikto, for example. These tools have been around for, in some cases, over two decades. They started out at the very early stages of the internet, and in some cases they're still being actively, or somewhat actively, developed.

But in many senses, the earlier security tools were more like community efforts, or even the effort of a single main contributor. And they're often still the basis, the foundation, of what a lot of security testing entails now. That being said, many of these tools were built quite a long time ago, and the technologies or approaches they use are not always as modern.

In some sense, modern attack surface scanning tools try to learn from these earlier tools: to see what works and what doesn't.
And then they try to reimplement the same behavior, but in a more modern, more scalable fashion.

Many of the earlier, and even the more recent, security tools are designed to be run on a command line by a person, a human tester, who then analyzes the results from there. There's also the issue of false positives in external attack surface scanning, and in security tools in general.

Part of the challenge is filtering the signal from the noise with some of these security tools. For example, earlier tools would send a specific GET request known to trigger a vulnerability on version X.Y.Z of a particular piece of software, and if they got just a 200 response, or a 404 response, some security tools would mark the vulnerability as present on the server.

What I learned, both from manual testing and now from building attack surface management scanning, is that there are so many different configurations, so many different types of web servers and technologies in general out there that, going back to that earlier case, if you send that particular GET request to a server that is designed to always return 404, or always return 200, it can lead to false positives. That's where some of the more recent security tools try to do more specific identification of vulnerabilities.

Nuclei, for example, is very keen on eliminating false positives. For each template being developed in Nuclei, instead of just looking for a very simple identifier, they try to analyze the response as much as possible, ideally getting an exact match, to gain very high confidence that a vulnerability is really present. I think that's one of the trends in pen testing and security scanning in general: being as specific as possible in identifying vulnerabilities, so that the signal-to-noise ratio improves, with fewer and fewer false positives.
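The status-code pitfall and the stricter, Nuclei-style alternative can be sketched side by side. The server banner and body marker below are invented for illustration; the point is that the strict check requires several independent signals to agree before flagging anything.

    import re
    import requests  # assumed installed

    def naive_check(url: str) -> bool:
        """Flags the issue on status code alone. This false-positives on any
        server configured to answer 200 (or 404) to everything."""
        return requests.get(url, timeout=10).status_code == 200

    def strict_check(url: str) -> bool:
        """In the spirit of Nuclei's matchers: the status code AND a version
        banner AND a body pattern must all match. Values are made up."""
        r = requests.get(url, timeout=10)
        return (
            r.status_code == 200
            and "ExampleServer/1.2" in r.headers.get("Server", "")
            and re.search(r"debug console enabled", r.text, re.I) is not None
        )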
[00:36:49] GV: Yeah. And would you say there are any trends in terms of what people perceive to be a problem, what gets highlighted as, say, a red flag on these kinds of platforms saying this is a big problem, where the reality is actually different? And, vice versa, anything that comes up as, "Oh, this isn't really a big problem, but it's something you might want to look at," where actually it's one of the more serious things? Have you seen anything on either of those sides?

[00:37:21] BH: Definitely yes, especially for automated security scanning. For each security finding, there can be a lot of nuance. For example, a system might be slightly out of date and have a known CVE vulnerability. CVE is a database of publicly known vulnerabilities in common software products; the largest products have CVEs published for them regularly.

But even though a software version is out of date and has known CVEs, that does not necessarily mean a given CVE is applicable in every case. For some CVEs, some vulnerabilities, there are specific configurations or specific preconditions that are needed before the vulnerability can actually be attacked, or can actually be reached by the attacker.

It helps to look in detail at whether a vulnerability is actually exploitable or attackable in your setup of the system, or whether it's just being flagged because the software you're using is out of date. This happens in dependency scanning as well. Things like Dependabot will often flag critical and high-severity issues in an outdated library.

What software engineers can do is analyze: is this really an issue that affects me, and does it require immediate attention? Or does this vulnerability exist in a function that I'm not even using in the code base right now? If there's a vulnerable function but it's never called, never used, then the system is probably not vulnerable.

Security tools generally won't go as deep as analyzing whether a particular function is actually used, or vulnerable, or reachable from user-supplied input. That's where developers still have to come in and assess the risk of that outdated software being used. Because in many cases, what I also found is that it's not always straightforward to update a software version. Things might break. Things might need to get rewritten. It's about balancing that out: do we need to update this software version immediately, or is it something that can be done later on, if that particular vulnerable component is not presenting any immediate risk right now?

That's probably the most common point of nuance, or even argument, between a security team and a tech team: should this be updated immediately or not? It helps, from both the tech and the security perspective, not to just rely on the risk rating from a tool directly, but to actually go into the code base, or try to build a proof of concept of whether something is exploitable or not.

[00:40:16] GV: Yeah, fantastic. And for any listeners out there who are based in Asia Pacific, if this is all new to you and you're not aware that your company has any attack surface scanning in place, maybe look up blackpanda.com and you can go and talk to them, because they can sort you out.

Ben, just to help anyone out there who's looking to get into the security field like you did: what advice would you give? Say I'm an engineer coming out of university, or I'm a self-taught engineer, but everything that's been said in this episode has piqued my interest. What would be your advice on how to get into the industry?

[00:41:04] BH: I would say that it's never too late to start out in security. I've seen many colleagues, ex-colleagues and friends of mine, who started out in a completely different background. I had one friend who was originally pursuing a law degree but decided it wasn't for them. Even though they had worked in that field for a while, they were able to self-study for the OSCP certificate, which is maybe the gold standard of entry-level penetration testing certifications.

It's a certificate by Offensive Security; that's the company's name, Offensive Security. I would say it's probably the most common penetration testing certificate out there. And right now, there's just so much content available online, for free, on how to get started in penetration testing.
There are things like free capture-the-flag events and capture-the-flag platforms, like Hack The Box, for example, where anyone can get started trying to hack a system, ranging from very easy challenges up to very, very difficult ones that you can eventually progress to.

There's a lot of freedom to self-study security. A lot of the ethos of the community is about making as many resources as possible open for people to get into the field. I would recommend looking at these particular paths, CTFs, platforms like Hack The Box, and even pursuing a known certificate like the OSCP, for engineers who are trying to transition, to make a career change into cybersecurity.

But for engineers who are determined to stay developers, or who know they will never fully transition into cybersecurity, I think the best first step is to reflect on a system you're currently developing and take an almost destructive perspective on it.

Most of the time, as a developer, you're trying to create something. It's a creative approach: trying to make something work, trying to add value for the user. But it can be a fun exercise to ask: what if I'm an evil user? Or, what if I really hate this organization? What could I potentially do to this system? What is the most evil thing you can think of to try to do with your system? What's the most valuable thing an attacker could try to do, or try to get? What kind of data are you storing? Is there healthcare data, personal data, or even financial information?

Basically, try to think of the worst-case scenarios, whatever would lead to you being woken up by an emergency call at 3am if there were a real hacking incident, and work back from there. Once you've identified the riskiest part of the system, or the most valuable asset, it's good to work backwards and identify potential areas of improvement, or security controls that could be implemented.

In that sense, any developer or engineer working on a system can take this security mindset and apply it to their daily work. Even when developing a new function or a new feature, it always helps to take a break and reflect: now I've made it work for the regular user, the happy flow. But if I'm an evil user, what could I potentially do with this new feature? From there, a lot of ideas will probably come to mind about how the system is, or isn't, secured.
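As one concrete round of that "evil user" exercise: take a query written for the happy path and ask what hostile input does to it. Below is a minimal, self-contained sketch using sqlite3; the table and data are made up.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
    conn.execute("INSERT INTO users VALUES ('alice', 100.0)")

    def lookup_unsafe(name: str):
        # Happy-path thinking: works fine for 'alice'. An evil user sends
        # "' OR '1'='1" and the WHERE clause becomes always-true.
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def lookup_safe(name: str):
        # Parameterized query: the driver keeps data out of the SQL grammar.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(lookup_unsafe("' OR '1'='1"))  # dumps the whole table
    print(lookup_safe("' OR '1'='1"))    # returns nothing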
It also helps that, as the engineer or developer, you're actually the person who knows the system best. Any attacker will not know the system as well as you do, so you're in the best position to secure it. In most cases, attackers will not have access to your source code, hopefully. And even if they do, it will take them a lot of time to study it and understand the decisions that went into designing the functions, classes, and features.

As a developer, you have the best firsthand knowledge of how you built the system. That puts you in a position where you're already a step ahead of the threat actors, because you know what to protect and how best to protect it.

There are also many platforms available for developers to learn more about security practices in a particular language. I know there's Secure Code Warrior, which is a LeetCode-style platform that walks you through different kinds of vulnerabilities and what they look like in the language you use in your daily work or your projects.

Something like that, where you're able to see what a SQL injection or a cross-site scripting vulnerability actually looks like in your own code, helps you build up the kind of pattern recognition that you can then hopefully use to avoid those patterns, and end up with fewer real security incidents and fewer findings from pen testers. Those would be my general recommendations. There are many different paths, again, and it all starts with that curiosity, that mindset of asking what could go wrong in a system.

Right now, it's still mostly human attackers. Eventually, we may well see fully automated, AI-backed attacks against systems. For now, you'd take the perspective of what an evil human user, or an evil group of users, could do to the system.

Fully automated attack chains based on AI are on the horizon, and that would be a concern a few years down the line. But even now, the most advanced threat actors, like nation-state hacking groups or organized crime groups, are still using AI more as a productivity tool, in a similar way to how a developer would use ChatGPT or Copilot to enhance their productivity. I think that's still the current state of security on the attacking side.

Attackers might use large language models to design phishing campaigns, or even just to write functions or unit tests for their own malware. It's not at a stage yet where attacks are fully automated by AI. AI is still being used more as a productivity tool by human attackers to enhance their attacks.

[00:48:03] GV: Yeah. I think that's a great place to leave it. We'll probably end up having another episode in a year or so where we're discussing how AI attacks are on the rise and what they look like. But I think you gave some great pointers for people out there. The key thing: it really is never too late to get into security, if that's something that interests you.

And overall, there just aren't enough people out there who are into it, relative to the amount of products and code that gets shipped. It's never a bad thing to get a bit interested in it. As you've mentioned, you can do certificates to get some formal training, so to speak. Or you can just get super curious about your own code base and think about what could happen, and start thinking through how you could tie up some of those holes, so to speak, in the code base, and make it a little less easy for someone who has ulterior motives towards your platform.

Ben, this has been fantastic. Thanks so much. You've given a lot of great insights for people to noodle on. Thanks again, and I hope we speak again soon.

[00:49:13] BH: Oh, thank you, Gregor. Yeah, it was really great reflecting and catching up again.
And, yeah, as you said, perhaps in a year or a few years' time there will probably be some changes, especially around AI and recent trends in security. I'd hope to catch up again on that in the future.

[00:49:33] GV: Definitely. Thanks again.

[END]