EPISODE 1619

[EPISODE]

[0:00:00] ANNOUNCER: Security issues can often be traced back to small misconfigurations in a database or cloud service, or an innocent code commit. OpsHelm is a security platform that's oriented around identifying and fixing these issues. Kyle McCullough is the co-founder and CTO of OpsHelm, and he has deep experience in backend and data engineering. He joins the show to talk about the challenges of security incident monitoring, prioritization, and response.

This episode is hosted by Tyson Kunovsky. Tyson is the co-founder and CEO of AutoCloud, an infrastructure as code platform. He is originally from South Africa and has a background in software engineering and cloud development. When he's not busy designing new GitOps workflows, he enjoys skiing, riding motorcycles, and reading sci-fi books. Check the show notes for more information on Tyson's work and where to find him.

[INTERVIEW]

[0:01:04] TK: Hey, everybody. Welcome to another episode of Software Engineering Daily. I'm your host, Tyson Kunovsky. In today's day and age, cloud security has never been more important. Just a couple of days ago, I read about yet another data leak from a large financial organization, and this particular leak was caused by a database misconfiguration which ultimately exposed tens of thousands of people's personally identifying information to the world. Sadly, news like this has become the norm, with organizations of all shapes and sizes, from casinos to hospitals, regularly suffering at the hands of bad actors. My guest today, Kyle McCullough, is the co-founder and CTO of a company called OpsHelm, who are taking a novel approach to preventing cloud security incidents from happening in the first place through near-instantaneous automatic remediation. Kyle, welcome to Software Engineering Daily.

[0:01:53] KMC: Thanks, Tyson. Really excited to be here.

[0:01:56] TK: So, Kyle, before we dive into things, I'd love to learn a little bit more about your software engineering and security backgrounds.

[0:02:01] KMC: Yes. So, I guess, right off the bat, I have to out myself. Definitely a software engineer by trade, but I've never had security in my title officially. So, you've got the wrong co-founder today, unfortunately, but I've been pretty close to security my entire career. I'll give you the software engineer's perspective. But my background is primarily in backend engineering, and maybe what we might today call data engineering, focusing on distributed systems, and databases, and APIs, and that sort of thing. Sometime in the past 10 years, I've become a bit of a reluctant infrastructure engineer as well. I definitely have a bunch of cloud operations experience and things like that, and have spent quite a lot of time either adjacent to security as a result of that, or on the implementation side of various security initiatives. So, hopefully, I can provide some interesting color in terms of what that's like as a software engineer.

[0:02:54] TK: Well, I think that's a great segue. So, you think about almost all major companies today that are running on the cloud. With so much choice in terms of what they can do, which, if they don't have the right expertise, correlates with mistakes that can be made, what do you think's missing from today's existing security tooling and the solutions out there like Orca, Wiz, and the others? How do you differentiate from those modern security platforms?

[0:03:16] KMC: Yes. It's a great question.
One thing that's become apparent over the past handful of years is that security tooling does not have to be terrible. And I think all of those companies that you just mentioned have done a great job providing tooling that is easy to use, nice to look at, and makes onboarding relatively straightforward. But one gap that I think has become increasingly apparent is that your visibility is only as useful as your response to it, right? So, if you have an alarm system in your house, and it tells you you've left your garage door open, that's only useful if you go and close the door. Visibility is great. It's useful, but you also need the other half of that for it to really be effective. I think that's the primary thing that we're seeing with the breaches like you've just mentioned. Many of these things are caught and can be found in the tooling. If you look in retrospect, you might see that, "Oh, yes, this was open for days, weeks, months, whatever the case may be." The information is there, but the response was not there, unfortunately.

[0:04:18] TK: Why do you think it takes so long to come up with a response? Is there just too much signal and noise to cut through? Too many priorities? Why does it take so long for us to stay abreast of all of these issues?

[0:04:28] KMC: It's probably a combination of things. I've seen dashboards from those various providers that you mentioned, for some very large companies, and you log in as an engineer, and you see, "Oh, I have 200,000 findings." What do I do with that? I'm a busy individual with meetings and feature work to do, and all these various competing priorities, and I log in, and I see I have a backlog of 200,000 things, any of which could cause one of these catastrophic, embarrassing issues. It's really hard for me as an engineer to go in there and say, I'm going to do this one today. I'm going to pick this off the backlog and work on this. So, I think prioritization is definitely a challenge, especially at that scale. As a platform provider, how do you pick out of that 200,000? Or however many it is. That's an incredible challenge, and I think the ultimate truth there is that there is no magic formula, and I'm always very suspicious of those anyway. There is no magic formula for prioritization that is, like, the best or the most accurate, right? If there were, and if anybody's selling that, it should be open for all of us to examine, right? Because priorities are very different between organizations. What may be very important for one organization is not necessarily important for another. That makes it a challenge for these platforms to provide good, consistent, and meaningful prioritization to their consumers, to their users. So, I think that is key to the issue at the moment. But I think the other end of it really is that prioritization is hard, but prevention is better, right? We shouldn't have to necessarily prioritize all of these issues. It'd be better if they didn't exist in the first place.

[0:06:09] TK: Let's talk a little bit about automation. When it comes to automation, what does proper automated response look like? And given all these different signals, all this noise, all these priorities, why should we trust it?

[0:06:20] KMC: Yes, great question. So, I think there's an ideal state in the future.
But I'll talk a little bit about how we're thinking about it, how we're approaching it, and how we're trying to broach that trust subject a bit, and we can dig into the ideal state later, if you'd like. But in terms of what's possible now: no cloud provider really provides us hooks into their platform to do prevention, right? Prevention is handled through IAM policies and being very restrictive with permissions and things like that upfront, and practically speaking, it's very difficult for an organization to maintain any velocity if things are off by default. So, I think what we see typically is users, tooling, services, whatever the case may be, they tend to be over-permissioned. That allows engineering teams to keep building and working. That allows services that are deployed and enabled to function without constant tweaking. And we've all sort of been there, I think, trying to enable a new service or deploy a thing, and you spend three or four hours going back and forth with an IAM policy, trying to get it tuned just right. I think that has created a bunch of fatigue in all of us that are consuming AWS, GCP, whatever, so that we just, often unintentionally, I would say, over-permission things.

That kind of leads me to automated response. What we are trying to accomplish now, especially at OpsHelm, but I think in terms of any sort of automated response, is detecting things as soon as possible. Being as event-driven as possible, capturing interesting, useful state about something that's just occurred, and then responding to it in real time, as the engineers who are working on that are doing that work. So, I like to think of it as guardrails or policy enforcement, but at a slightly different layer than IAM. So, it is after the fact, but it's intended to be real-time enough that it feels like it's before. I use the word prevention, but to be honest, it's not really prevention. It's just an immediate response.

In terms of what that looks like, I think it really depends on how you build it, and the way that we think of it is it should be as surgical as possible. By being surgical and very fine-grained, you know that the tool is only going to change values that need to be changed, for example, in a configuration. So, if we were to take an example here, let's say we have a firewall group that's applied to some service, maybe it's a database, and somebody pushes out a rule that opens that database up to the internet. This could be potentially catastrophic. So, we might want to respond to that immediately and say, "We're going to revoke that particular ACL in the firewall." What might go wrong here is we revoke all the rules. That would be bad. Now, internal services can no longer connect to our database, and we have an outage and downtime. Instead of taking a big-hammer approach to it, we want to be very surgical and just remove the offending configuration. The advantage to this, in being this fine-grained about it, is we can always put that back. We know exactly which ACL violates the rule that we're trying to enforce, and we can remove it. But if we actually need to put it back, we know exactly what we've removed as well. We're trying to build trust by being very specific about what we change, and also being very open about how that works. So, if you want to permission our tool to operate in that way, you know that we only need permissions to change that one configuration option, right?
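(The episode doesn't show OpsHelm's implementation, but the surgical revocation Kyle describes could look something like this minimal boto3 sketch against AWS, assuming the offending rule has already been identified. The security group ID, port, and rule below are hypothetical.)

```python
import boto3

ec2 = boto3.client("ec2")

def revoke_offending_rule(group_id: str, rule: dict) -> dict:
    """Surgically remove one rule; sibling rules in the group are untouched.
    The removed rule is returned so the change stays reversible."""
    ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=[rule])
    return rule

# Hypothetical finding: the database port was just opened to the internet.
offending = {
    "IpProtocol": "tcp",
    "FromPort": 5432,
    "ToPort": 5432,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}
removed = revoke_offending_rule("sg-0123456789abcdef0", offending)

# If a human decides the rule was actually intended, it goes straight back:
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=[removed])
```

(A tool doing only this needs little more than the `ec2:RevokeSecurityGroupIngress` and `ec2:AuthorizeSecurityGroupIngress` permissions, which is the scoping point Kyle makes next.)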
We don't need permissions to create new databases, and we don't need permissions to change things that are completely unrelated to that. So, constraining the scope makes it much easier for us to, one, if we ship a bug, not change something that we didn't intend to change, because we're not permissioned for it. And two, by being open about that, our customers know that this is the only thing that could change here. I think that's really the only way in this mode of operation to build that trust. In the ideal state, it would be prevention, right? So, we wouldn't need that level of trust. But this is kind of where we are at the moment.

[0:10:18] TK: I can really hear your backend and DevOps experience shining through in your response to that question. And I'd love to talk in a little bit more depth about what we as an industry can learn from both backend development and operations in regards to security. But before we get there, I want to double-click. You make a really interesting and compelling point, and I think the case is pretty valid. What else should the security industry be adopting in terms of automation of this kind?

[0:10:46] KMC: Sure. So, obviously, we talked about prioritization as a challenge just a moment ago, and that's only going to get harder. The reason that gets more difficult as every day passes is that the landscape is changing, often dramatically, between any two given days. Think of re:Invent every year, for example. That's a big day in the cloud industry, right? How many new APIs were just released? How many APIs were updated? How many new things do we now have to worry about? The fact that the surface area is constantly expanding means that we always have more things to pay attention to. That's just on the platform side. If we think about this in terms of somebody that is actually trying to attack or breach a system, they're using automation on their end. So, it's only fair to fight that with automation. It doesn't make sense to fight that with people. There's just too much signal for any human to sit down and watch those events and monitor every system and look at everything that's coming through and make a judgement about it. So, I think the only way to make this problem surmountable is to approach it with automation, and use that as a tool that's available to us. We need to adopt it and take it seriously and apply some of those practices that we've applied in the ops world over the past 10-plus years to security. I think that's the only logical path forward in a lot of ways.

[0:12:05] TK: Okay. So, back to your background in ops and backend experience, what else do you think the security industry can learn from ops in this regard?

[0:12:14] KMC: Yes, I think, automation first. If we think of the DevOps movement, which is not a term I'm particularly fond of, I think some lessons did come out of it. We see very large-scale companies with sophisticated engineering teams. They now employ software engineers to do infrastructure work, and they build internal platforms to provide infrastructure to their internal customers, especially at the highest of high-scale companies. I think the interesting thing about the way those companies approach it is, it's a software-driven operation. They do things like immutable infrastructure, right? That was a popular drum to beat several years ago. Or stateless by default, where you force your state down to certain layers.
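(As a sketch of the "fight automation with automation" idea: the event-driven detection Kyle described earlier could hang off CloudTrail. This hedged example assumes an EventBridge rule delivering `AuthorizeSecurityGroupIngress` API-call events to a Lambda-style handler; `remediate()` is a hypothetical hook, for instance the surgical revoke shown above.)

```python
# Hedged sketch of an event-driven responder. The event shape follows the
# "AWS API Call via CloudTrail" format that EventBridge delivers.
def handler(event, context):
    detail = event.get("detail", {})
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return  # not an event this rule cares about

    params = detail.get("requestParameters", {})
    group_id = params.get("groupId")
    # CloudTrail nests rule sets under "items"; scan for a world-open CIDR.
    perms = (params.get("ipPermissions") or {}).get("items", [])
    for perm in perms:
        ranges = (perm.get("ipRanges") or {}).get("items", [])
        if any(r.get("cidrIp") == "0.0.0.0/0" for r in ranges):
            remediate(group_id, perm)  # hypothetical hook: surgical revoke

def remediate(group_id, perm):
    print(f"would revoke world-open rule on {group_id}: {perm}")
```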
But what that's led to in the ops world, for example, is if you have a server that's misbehaving, if anybody's still running servers, by default, you don't jump on there and, like, rerun Chef. Nobody really does that anymore. What you do is you just terminate it and spin up a new instance from that default configuration, and ship that. There's no sort of - well, you might retrospect it and try to figure out what went wrong. But you don't necessarily leave it there to think about what's going wrong. You remove it from circulation, from production, and put in a new instance stamped out from the gold master to replace it. That sort of respond-by-default with automation, and kind of self-heal a little bit, is what I'm going for here. That's the thing that I would love to see adopted, the snapping back to a baseline or guardrail, however you want to think about that.

[0:13:49] TK: Given your background and your familiarity with infrastructure as code, which is something that's near and dear to my heart as well, when you think about security in the context of infrastructure as code, and some of the remediation work that you're doing, how does infrastructure as code factor into this? So, for example, how can we better integrate security into the dev lifecycle, and ensure that our infrastructure is secure and our live cloud workloads are secure? What does that happy nirvana state look like to you? How can we achieve it?

[0:14:18] KMC: That's a great question. So, this gets into the prevention bit a little bit. This is where I get excited. So, obviously, one thing infrastructure as code does for us is it writes everything down. You actually have to create that configuration, and I think it's really valuable. One thing that it does that is, I think, very important, and maybe often overlooked, is it's a great way to obtain some information about intention. What was the engineer that was configuring this thing intending to do? We have commit logs, we have the actual configuration, and we can see that before it's applied, and it's valuable information. That especially gives us a hook into a prevention step. We can look at that before it's applied and make some judgments about it, and potentially take some action in response to that. That might mean something as simple and straightforward as a CI check that blocks a merge and prevents a deployment. Or it could even be an automated response that issues a patch back to that change, and says, "You want to do this instead, potentially." And I think that is incredibly powerful.

But there are some gaps there. One problem with infrastructure as code, with respect to this massive security challenge of keeping the actual cloud configuration within those guardrails, is that things can change outside of your configuration as code, right? So, you might have everything Terraformed very rigorously, with incredible standards applied to it. But that doesn't necessarily stop somebody that's permissioned from going into the cloud console, or via one of the APIs, and making a change, and circumventing that, right? It's an incredibly powerful tool, but we have to recognize that it is not the only actor in this playground. We still have to pay attention to what's deployed. One thing I always like to tell people is infrastructure as code is great, and I think we should be adopting it. However, the only thing that really matters at the end of the day is what you have shipped.
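(The CI check Kyle mentions could be as simple as this sketch, assuming the plan has been rendered to JSON with `terraform show -json tfplan > plan.json`. The policy here, no world-open ingress rules, is just an example, and a real check would cover more resource types.)

```python
import json
import sys

# Read a rendered Terraform plan: terraform show -json tfplan > plan.json
with open("plan.json") as f:
    plan = json.load(f)

violations = []
for rc in plan.get("resource_changes", []):
    after = (rc.get("change") or {}).get("after") or {}
    # Flag ingress rules that would open a resource to the whole internet.
    if rc.get("type") == "aws_security_group_rule":
        if after.get("type") == "ingress" and \
                "0.0.0.0/0" in (after.get("cidr_blocks") or []):
            violations.append(rc["address"])

if violations:
    print("Blocking merge; world-open ingress rules:", ", ".join(violations))
    sys.exit(1)  # a non-zero exit fails the CI job and blocks the merge
```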
What is the actual configuration that's deployed? Not what you intend to deploy. Not any of those things, really. Not even what the configuration is on the main branch in your repo. It's what is the actual configuration that's out there. That's the most important thing, and we have to be paying close attention to that. Because the way Terraform, for example, works is not by being a layer in between you and the cloud provider. It's on top, and it's a very point-in-time sort of thing. You apply a configuration, and updates are made, potentially. But it's not an active thing, necessarily. So, we have to be aware, as infrastructure and security engineers, that just because the Terraform repo says one thing doesn't mean that that's actually true.

[0:16:58] TK: It's a really interesting point. On one hand, a lot of organizations, from a best-practice perspective, require infrastructure as code, such as Terraform, Pulumi, the CDK, whatever they happen to be using. They require their development to be done through those mechanisms. Yet, on the other hand, you have folks with admin privileges that can go into these cloud service providers and portals, click around, and change things. So, it sounds like the ideal state would probably be to take away this console write access and have everything done via infrastructure as code. How do we de-risk the challenges that you've brought up here? Because it sounds like, despite best intentions, at the end of the day, if folks have access and are making changes outside of prescribed workflows, it can lead to all kinds of serious security problems.

[0:17:44] KMC: It's so true. And I swear, I'm not funneling you into a sales pitch here. But I'll talk about this as an ops engineer for a moment. Putting that hat on, I think about what is the cost of an outage, for example? We often think about reliability in competition with security. I think there's a tradeoff here in some cases, where sometimes you just have to accept the security risk in order to achieve some mitigation of an incident. I always ask anybody who says that they are rigorous Terraform users: do you have anybody in your organization that's permissioned to go into the console and make changes? I guarantee it's not only the Terraform user that has those permissions. One of the reasons for that is, let's say you are having a production incident, and you have a service that's down, or maybe your entire platform is down. It's a terrible situation to find yourself in. Honestly, as much as I'm a huge Terraform and infrastructure as code advocate, I am the first person to say, if production is down, log into the console and change what you need to change, if that is the quickest way to restore service. Because we have competing priorities here. It's easy to say, when you're not in the middle of an incident, that, "Oh, we should make this change in Terraform." But I've seen Terraform code bases that take on the order of hours to do a full plan and apply, and these are organizations that are struggling under the weight of the infrastructure that they've created. The idea that they might have to wait several hours to roll out a fix to restore production is laughable, right? Nobody would actually commit themselves to that. So, the ultimate outcome there is people still have access to production in order to be able to make those emergency fixes.
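(Paying attention to what's actually shipped, independent of the repo, might look like this hedged boto3 sketch: a scan of live security groups for world-open ingress. It's illustrative only; a real scanner would cover many resource types and feed findings into a response pipeline.)

```python
import boto3

ec2 = boto3.client("ec2")

# Inspect what is actually deployed, not what the Terraform repo says.
open_to_world = []
paginator = ec2.get_paginator("describe_security_groups")
for page in paginator.paginate():
    for sg in page["SecurityGroups"]:
        for perm in sg["IpPermissions"]:
            if any(r.get("CidrIp") == "0.0.0.0/0"
                   for r in perm.get("IpRanges", [])):
                # FromPort is absent for all-traffic rules, hence the default.
                open_to_world.append((sg["GroupId"], perm.get("FromPort", "all")))

for group_id, port in open_to_world:
    print(f"{group_id} exposes port {port} to the internet")
```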
What if your CI provider is down? Or, if you're using Terraform Cloud, what if you're unable to access that for some reason, or it's down? There are also those simple, practical concerns. So, I think I come back to active response here as the complement to this problem. We still allow manual changes and human access to these systems, but having a guardrail in place to prevent any sort of egregious mistake is kind of the check-and-balance system that we need to make that safe. So, we still want to push things by default through our infrastructure as code pipeline, and make sure that we're following those practices. But we also want to detect when we've deviated from that. If that deviation is a valid deviation, maybe we want to port that change back to our infrastructure as code to close that loop. I think that's a valuable thing to do and encourage as well. But it would be very hard for me to tell anybody that they should not allow that.

[0:20:26] TK: Talk to me a little bit more about the interplay between infrastructure as code security remediation and live cloud security remediation. Because if there's a problem on the cloud, to your point, you want to fix that right away, and not spend hours waiting for pipelines to run and changes to get applied. These are incredibly urgent fixes that need to be applied as soon as possible. But at the same time, on the infrastructure as code side of things, ultimately, you need to go and patch up, say, some Terraform code. Does OpsHelm have plans in the future to not only do what you're doing now, which is fixing things as soon as they occur, but also go and fix that infrastructure as code, to make sure that the state is in sync and both sides are secure?

[0:21:06] KMC: Yes, great question. So, the short answer there is yes, we do have plans to address that very directly, actually, in exactly that way. We want to monitor the configuration that's actually deployed, but also tie that back to the infrastructure as code configuration, which, as I'm sure you know from your background, is not a simple task, and there are many challenges with respect to doing that reverse mapping. However, we do plan to address that. The way we think about it is, we detect a configuration that we don't - that violates some policy, whether that's an out-of-the-box policy we provide or a custom rule that one of our customers has implemented. We still want to detect that. We still want to potentially respond to it. But we also want to be able to close that loop. So, we're thinking about it in almost the same way that Pulumi, or Terraform, or any of these tools looks at the state of the world, right? They have the configuration that's written down: here's what I intend to do. They have the state file, which is the last known state that was applied. And then they also look at the remote resources, and try to figure out the diff. What is my delta? And how do I bring everything to the desired state? So, we're thinking about it in those same terms and implementing it in that way. We might first make the change, if we're configured to do that, and our customers have permissioned us to do that. But then we also want to monitor the rest of the lifecycle and make sure that, eventually, the infrastructure as code is brought up to compliance, and that whole loop is closed.
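(In miniature, that desired-state-versus-actual-state comparison could look like this sketch; the resource attributes are made up for illustration.)

```python
def diff(desired: dict, actual: dict) -> dict:
    """Return the settings whose live value deviates from the desired one."""
    return {
        key: {"desired": value, "actual": actual.get(key)}
        for key, value in desired.items()
        if actual.get(key) != value
    }

# Hypothetical example: the IaC repo says the bucket must block public
# access, but someone flipped it off in the console.
desired = {"block_public_acls": True, "versioning": True}
actual = {"block_public_acls": False, "versioning": True}

delta = diff(desired, actual)
print(delta)  # {'block_public_acls': {'desired': True, 'actual': False}}
# A continuous loop would remediate this delta and/or open a patch against
# the Terraform repo so the code matches reality again, closing the loop.
```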
So, it's kind of a long-running, living process, continuously looking at that, and making sure that things are moving in the right direction.

[0:22:50] TK: It sounds like a very comprehensive approach that handles both sides of the problem. Just for my own curiosity, when you think about auto-remediation and security fixes on the cloud, which a tool like OpsHelm is able to help out with, what are some examples of the problems that you're able to detect and remediate as soon as they're found?

[0:23:08] KMC: Yes. It's a good question. So, to back up a little bit, we're talking very much about security right now, but I would actually describe what we're building at OpsHelm, and I think any solution along these lines, as very much a policy enforcement engine. Those policies, in our case, are often focused on security, but they could be about any cloud configuration. For example, you, at your organization, may have a tagging standard, and you want to ensure conformance with that. Or you may have certain cost controls that you want implemented. Maybe you have no use for GPU-based compute, and you want to prevent anybody from using that. Or maybe you have certain operational standards that you want to adhere to. So, maybe you have services that need to be available, and somebody spinning down an auto-scaling group to zero or one instances is something that would be very bad, in your opinion, at your organization. Any of those would be things that we might want to enforce at this layer. If we want to bring it back to security, it could be things as simple as the open S3 bucket, or the database that is internet-accessible, or SSH that's opened up to the world, or some of these things that come up over and over again, because default configurations are set up in a way to make everything easy. We want to be able to spin up a service quickly, so we just open it all up. It's often those things. But it can be more complex than that.

I'll say the way we're prioritizing, the way that we are building this out, is: what are the black-and-white issues? The things where the debate is, sort of, over? Don't put your database on the internet by default is a pretty good one, and I know that there might be reasons to do it. But don't put a bunch of PII in an S3 bucket and then make that world-readable. I think these debates are mostly over. Like, by default, we shouldn't be doing that. So, we're sort of prioritizing based on those things, and then things that we can actually auto-remediate. Unfortunately, the nature of some of the problems requires a human in the loop. It requires some action to happen, or some coordination to happen, and full automation isn't necessarily possible, but we can get part of the way, or most of the way, there. We think about those as well. But we're mostly thinking in terms of: what things can we completely remove from the list of potential problems?

[0:25:27] TK: To your point, some of those common mistakes, I'm guessing, probably cause the vast majority of all security issues on the cloud today. I'm curious, though: what are examples of harder problems that you might not be able to detect or fix using auto-remediation?

[0:25:43] KMC: Yes. I'll give you a few examples. So, one that I like to give, for example, is anything that's destructive.
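(The policy-engine idea Kyle describes above can be sketched in a few lines. The rule schema here is invented for illustration, not OpsHelm's actual format; note each rule carries whether it can be fixed without the human in the loop he mentions.)

```python
# Each policy is a predicate over a resource's observed configuration,
# paired with whether the fix is safe to fully automate.
POLICIES = [
    {
        "name": "required-tags",
        "applies_to": "any",
        "violated": lambda cfg: not {"owner", "cost-center"} <= set(cfg.get("tags", {})),
        "auto_remediable": True,   # adding tags disrupts nothing
    },
    {
        "name": "no-world-open-ssh",
        "applies_to": "security_group",
        "violated": lambda cfg: any(
            r.get("port") == 22 and r.get("cidr") == "0.0.0.0/0"
            for r in cfg.get("ingress", [])
        ),
        "auto_remediable": True,   # revoking one rule is surgical and reversible
    },
    {
        "name": "no-public-buckets",
        "applies_to": "s3_bucket",
        "violated": lambda cfg: cfg.get("public", False),
        "auto_remediable": True,
    },
]

def evaluate(resource_type: str, cfg: dict):
    for policy in POLICIES:
        if policy["applies_to"] in ("any", resource_type) and policy["violated"](cfg):
            yield policy["name"], policy["auto_remediable"]

# Example: a security group with good tags but SSH open to the world.
findings = list(evaluate("security_group", {
    "tags": {"owner": "data-team", "cost-center": "42"},
    "ingress": [{"port": 22, "cidr": "0.0.0.0/0"}],
}))
print(findings)  # [('no-world-open-ssh', True)]
```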
Philosophically, stepping back a little bit, the way we've approached automation is anything that we automate should be reversible. Because we want to be able to - the system can be wrong, or you may actually really want to do something that the policy says you shouldn't be able to do by default. Having an override button there is very important. So, by default, anything that is a permanently disruptive action is something that we might not proactively automate. To give you an example there, let's say you have a database in AWS; maybe you're running an RDS instance. For some reason, you've configured some automation to terminate a database cluster or an instance. You can snapshot that and spin up a new instance based on the last snapshot state of that database, right? But you can never get back the original database. The same applies to any other sort of stateful thing in AWS, or in GCP, for that matter. If you go and destroy an entire S3 bucket, or terminate an EC2 instance, or an EBS volume, you can never get back the original. That's an important thing to be aware of in the context of automation: can I undo this? Sometimes you can't. Being aware of that changes how you think about it and how you approach it. You don't necessarily turn that on by default.

I think another example of a difficult-to-remediate thing, that's maybe a little bit less extreme, is encryption at rest. All of the cloud providers support block storage being encrypted at rest, and we want to do that by default, in most cases. But there's no way to migrate an unencrypted volume to an encrypted volume live. The only way to do it, at least as far as I'm aware, in AWS and GCP, is you snapshot the volume, you turn that into an encrypted snapshot, and then create a new volume from that last known state. There's no live encrypt of a volume. So, that is something that would require a human in the middle to say, "Okay, I've got an encrypted snapshot. But now I need to make this the live volume that I'm using." At the end of that, there's a disruptive action of getting rid of the old unencrypted one. But there's also that coordinated switch. Let's say I have a server, and there's a volume attached to it. I can create that encrypted snapshot, create a volume from it, and attach that to my instance. I can mount it. Then, I can actually do that switch live, and unmount the old volume, and detach the old volume from the instance, and then destroy it. But there is some coordination required in the middle there. If you're only operating at the cloud configuration level, for example, your automation doesn't have any ability to get into the instance and do the unmount/mount operation. That requires somebody with OS-level access. So, there are some challenges there, right? And you have to decide how far you want to take this automation. Because obviously, opening up automation to then actually get inside of your resources and configure them that way introduces additional risks. So, we need to balance these concerns.

[0:28:58] TK: There's really so much depth here. When you think about a modern business that's operating on one or many cloud providers like AWS, Azure, or GCP, there are so many different security and compliance concerns that they have to deal with. What advice do you have for teams in terms of what skills they should be learning to best avoid and deal with the problems that you're mentioning?

[0:29:21] KMC: That's a deep question.
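(The snapshot-then-copy dance Kyle describes above, sketched with boto3. The volume ID, availability zone, and region are placeholders, and the attach/mount/detach coordination he mentions is deliberately left out, since that part needs a human or OS-level access.)

```python
import boto3

ec2 = boto3.client("ec2")

def encrypted_copy_of_volume(volume_id: str, az: str, region: str) -> str:
    """There is no in-place encrypt: snapshot the volume, copy the snapshot
    with encryption enabled, then create a new volume from the encrypted
    copy. Returns the new volume's ID; swapping it in is left to a human."""
    snap = ec2.create_snapshot(VolumeId=volume_id,
                               Description="pre-encryption snapshot")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    copy = ec2.copy_snapshot(
        SourceSnapshotId=snap["SnapshotId"],
        SourceRegion=region,
        Encrypted=True,  # uses the default KMS key unless KmsKeyId is given
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

    volume = ec2.create_volume(
        SnapshotId=copy["SnapshotId"],
        AvailabilityZone=az,
        Encrypted=True,
    )
    return volume["VolumeId"]

# Hypothetical usage; the old, unencrypted volume still has to be unmounted,
# detached, and destroyed by someone once the new one is swapped in.
# new_id = encrypted_copy_of_volume("vol-0123456789abcdef0", "us-east-1a", "us-east-1")
```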
So, one thing I would always recommend is being a little bit conservative in adopting new services within a cloud provider, taking that a little bit slowly, and really making sure that, as you adopt services, you're able to develop expertise internally and understand what you're deploying. So, I would say, even before we get into infrastructure as code and automation and all of that, understand the services that you're using. Nobody likes to just sit and read the documentation, but it is an important step in the process: stopping and understanding, "Okay, here's how this service works. Here are the controls that it provides to me." If we think back to the fairly recent deprecation and shutting down of EC2-Classic, for example, understanding the constraints of that before it was shut down is something that, I think, would have been important. When that button was still enabled, you could just create an EC2-Classic instance, and it was on the internet by default. Knowing that that is an attribute of it is an important thing. That knowledge is valuable. So, that's the first step, I think: just understanding that you really can't dive into infrastructure as code and build these things out and configure them until you really know what you're configuring.

I know a lot of teams that operate in a way where they have a sandbox environment, and they'll go in and click through the console and build out a service, then try to figure out what the options are, and then port that back to infrastructure as code and try to make that the configuration that's deployed. That's a little bit dangerous as well, because often these sort of point-and-click, templatized versions are configured in suboptimal ways from a security standpoint, or even an operational standpoint, and you end up dragging that baggage along with you. So, be a little bit careful there.

But then, I think, on the other side, if we think about software engineering, and some of the practices that we're taking from software engineering into operations through the whole DevOps movement: we should lint our configurations, and we should have tests and continuous integration, right? Anybody that's working in ops and security really should be familiar with those practices. Especially on the security end of things, it's important to understand what you're securing. So, getting ingrained in those processes: you should be able to write a little bit of code. You should be able to write Terraform. And you should be able to understand how those services work, because it's very difficult to operate a thing if you don't know how it works. I think it's also very difficult to secure a thing if you don't know how it works. So, I would recommend learning any of those tools, especially if you're on one end of that spectrum and you're tasked with securing a system that is configured and operated with a toolset that you have no familiarity with. That's a bit of an uphill battle. So, very general advice, but that's kind of where I would start. I would also say, keep it simple. Nobody likes a big Rube Goldberg machine. Especially from a security and audit standpoint, it's very hard to unwind systems that are built that way. So, start simple, and make sure that you can really understand what you're building as you build it out.

[0:32:28] TK: Baby steps. You've got to crawl before you walk, before you run. And I think that's really good advice.
So, Kyle, we're getting towards the end of our time here. I have one more question for you today. What's the best way for folks to learn more about what OpsHelm is doing? How should they connect with you?

[0:32:45] KMC: Yes, of course. I think our website is probably the default destination there, opshelm.com. I'm sure there'll be a link provided, but that's opshelm.com, and feel free to connect with us there. We have a blog as well. We're pushing some content out too, especially security-related content. So, we'll be sharing our thoughts and things there. Please come and engage with us.

[0:33:11] TK: Kyle McCullough from OpsHelm. Thank you so much for coming on Software Engineering Daily.

[0:33:15] KMC: Yes. Thank you for having me.

[END]