EPISODE 1893

[INTRODUCTION]

[0:00:00] ANNOUNCER: Modern software development is evolving rapidly. New tools, processes, and AI-powered systems are reshaping how teams collaborate and how engineers find satisfaction in their craft. At the same time, developer experience has become a critical function for helping organizations balance agility, security, and scale while maintaining the creativity and flow that make top-tier engineering possible. Capital One is continuously transforming its developer culture with a focus on faster development cycles, lower operational overhead, and higher productivity across the organization. Catherine McGarvey is the SVP of Developer Experience at Capital One. She joins the podcast with Sean Falconer to talk about what developer enablement means at enterprise scale, measuring developer productivity, being agile in a regulated environment, AI and enterprise development, the future for developers, and much more. This episode is hosted by Sean Falconer. Check the show notes for more information on Sean's work and where to find him.

[INTERVIEW]

[0:01:17] SF: Catherine, welcome to the show.

[0:01:19] CM: Thanks, Sean. Great to be here.

[0:01:21] SF: Yeah, absolutely. It's good to have you on the show. So, I was looking into your background. You have a pretty fascinating background: you've spent some time in startups, you've done some defense consulting, and now you're leading developer experience for 14,000 technologists at Capital One. How does that diverse experience across such a wide spectrum, from the fast-moving startup world to a large financial institution, shape the way you think about tackling something like developer enablement at such massive scale?

[0:01:50] CM: Yeah, I've been super, super fortunate in my career to have these kinds of pivots, these opportunities to approach different domains and different sizes. And I think what's really nice about that is you learn a couple of things: that often there's not one way to do something, and that you've really got to think about what matters in how you're doing software development for the consumer and the business to really start down the path of, "Well, what do you have to standardize on, and what can you leave flexibility around?" And that's been really great. With the consulting side and the startups, I got to be involved in the early days, from zero to one and one to 100 users. And it's amazing to realize how much long-term architectural design matters a lot less when you're just trying to confirm that you can survive and get those first couple of customers. And how much that pivots when you're talking about B2B selling, or B2C, where I am now, really thinking through how we make sure what we deliver is resilient for customers. But at the same time, how do you keep it agile so that you're adapting to their feedback and able to make changes? So, it's been nice to draw on different parts of my career to date for where we're focused now: how do you bring that agility and that adaptability to new information while ensuring you're keeping risk, and security, and all of those quality concepts really top of mind as well?

[0:03:16] SF: Mm-hmm. How do you think about developer enablement? What in practice does that actually mean? I think it can mean different things within different organizations. Some people might be focusing purely on how to increase velocity or become more efficient. What does this mean in terms of your ownership within Capital One, and how do you think about it?
[0:03:34] CM: Yeah. Developer enablement - a colleague of mine actually mentioned this term to me, and I just loved it. I've latched on to it. In the past, I've really thought about developer productivity, which comes down to a lot of throughput, speed, and quality. Is the team churning out code at a certain rate? Is it at a high enough quality? And is it getting into the hands of users fast enough? And 15, 20 years ago, as product management stood up as a role that was critical in the software development process, there was a real shift to, "Are we building the right thing?" Because it's fantastic if your engineering team can move really quickly. But if it's not connected to something of value, who cares? Productivity without connection to value is like a dying product. The product management side of this really sparked the idea of: let's ensure we're building the right thing and we understand our users well. And so when I think about developer enablement, it's combining speed and throughput - how do we give people the tools to accelerate that development? - but balanced by: are we listening to users? Do we understand what they need well enough that we're solving the right thing? Those two things in balance, I think, are what enablement's really about. Give them the tools, but the information as well, to be able to do their job well.

[0:04:49] SF: And does that need to be something that you're doing - is that one person or one small team owning it horizontally across all the engineering teams? Or do you see this as something more like a product manager who's going to work directly with one cohort of engineers and try to answer that question of, "Are we building the right thing?"

[0:05:09] CM: I think for every team - and team size is really interesting here - you need a sense of the direction that's connected to the customer, and someone being that voice. Sometimes that's design, sometimes that's product, sometimes it's both, and sometimes multiple people are needed there, depending on the complexity of what you're delivering. But you need that connection to the team. And you need the swim lane small enough that the team can be effective in solving that outcome or delivering that product. And so that combination of keeping the team nimble enough, giving them the information they need to do the job well, and having someone to talk to about it empowers them. Because they're not just picking up a product definition and executing on it. They're understanding it, discussing it, and seeing how they can improve upon that experience overall, not just delivering a feature.

[0:05:58] SF: Okay. With such a large organization - we're talking about 14,000 people - how do you even measure the success of developer enablement at that scale?

[0:06:08] CM: I love this question because there's so much going on around developer experience. What do you measure? In years gone by, there was a really big push around DORA metrics - that's really your throughput style and your recovery style of measures. And then there's been the work around SPACE, really trying to get to the qualitative side as well. And I don't think there's one great answer, but there are lots of good things you can measure. I do think measuring the important things matters. But in some cases, it's not the only thing that you need to measure. A couple of things we do look at: we are on this journey of continuous deployments, and your time between deployments is a really interesting one to watch. Are you getting your code out there fast enough, and staying adaptable to change? And to be able to do that well, you have to have great testing. You have to have great quality measures. So that's part and parcel of the continuous deployment piece. The next one is: is it actually addressing the need of the customer that you're delivering to? In my space, I spend a lot of time thinking through, "Am I solving the outcome for the developer?" If my job is to make code reviews great and I'm providing tools in that space, is it actually making them great? What's the output of those code reviews? What's the impact of delivering this tool? Am I getting that NPS satisfaction? There are other things we can measure, too - that's very much tied to what you're delivering. What's the outcome you're expecting to change? Measuring from the angle of "Are you making the change you expected?" makes a big impact. So: continuous deployment and the tools to enable that. And then, is the tool, or the service, or the output I'm providing having the outcome I'm hoping for?
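To make the throughput side of that concrete, here is a minimal sketch of the two deployment measures discussed above - lead time and time between deployments - computed from a deployment log. The data shape is a hypothetical illustration, not Capital One's tooling; real inputs would come from a CI/CD system.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment log for one service: (commit_time, deploy_time).
deployments = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 15, 30)),
    (datetime(2024, 6, 5, 11, 0), datetime(2024, 6, 6, 10, 0)),
    (datetime(2024, 6, 10, 14, 0), datetime(2024, 6, 11, 9, 0)),
]

# Lead time for changes: how long a change waits before reaching production.
lead_hours = [(d - c).total_seconds() / 3600 for c, d in deployments]

# Time between deployments: the gap a team watches shrink on the journey
# toward continuous deployment.
deploys = sorted(d for _, d in deployments)
gap_days = [(b - a).total_seconds() / 86400 for a, b in zip(deploys, deploys[1:])]

print(f"median lead time: {median(lead_hours):.1f} hours")
print(f"median time between deployments: {median(gap_days):.1f} days")
```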
[0:07:49] SF: Yeah. And is some of this always going to be a little bit dependent on the organization, too? There's probably going to be some overlap, but you might have custom things that you're measuring for success based on where you feel you have a pain point today.

[0:08:05] CM: Yeah, absolutely. And I like using specific examples. We've had teams focused on different parts of the software development life cycle, and one of them has been on onboarding. Onboarding is a really interesting thing as you hire or have people switch teams. Part of it is: am I giving them the right information? Am I pointing them to where to go to discover things? And am I giving them the right training and starting position so that they can commit code fairly quickly in their journey? Some interesting metrics to measure there are time to first commit and time to 10th commit. They're really, really great, and they're kind of standards across the industry for measuring that well. But then there's also expectations. Does the person joining know there's an expectation that these things are measured? Because if you know you're being measured, that will change your behavior as well.
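Those two onboarding measures reduce to simple date arithmetic. A minimal sketch, assuming a hypothetical list of a new joiner's commit dates pulled from source control:

```python
from datetime import date

def onboarding_metrics(start_date: date, commit_dates: list[date]) -> dict:
    """Days from an engineer's start date to their 1st and 10th commits.

    Hypothetical helper: commit_dates is assumed to be every commit the
    new joiner has authored so far, in any order.
    """
    ordered = sorted(commit_dates)
    metrics = {}
    if len(ordered) >= 1:
        metrics["days_to_first_commit"] = (ordered[0] - start_date).days
    if len(ordered) >= 10:
        metrics["days_to_tenth_commit"] = (ordered[9] - start_date).days
    return metrics

print(onboarding_metrics(
    date(2024, 9, 2),
    [date(2024, 9, 6), date(2024, 9, 9), date(2024, 9, 12)],
))
# {'days_to_first_commit': 4} - too few commits yet for the 10th-commit metric
```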
[0:08:52] SF: Yeah. You tend to optimize the things that you measure. That's why you've got to be careful about what you measure.

[0:08:57] CM: And developers, I think, of all the customers in this space, are fantastic at gaming metrics.

[0:09:03] SF: Yeah, absolutely.

[0:09:04] CM: That's definitely one of the factors you consider as you look at this.

[0:09:07] SF: And then the onboarding thing is interesting - I would think that also potentially impacts retention or job satisfaction. Because, presumably, most people starting a new job want to feel they can contribute, that they're making a difference. If it takes you six months to get to a point where you've done anything - you've submitted your first PR or something like that - that's probably going to lead to a certain amount of dissatisfaction for a number of people.

[0:09:32] CM: Yeah, absolutely. If you know you've been hired to commit code at some level by the job family that you're in, ideally, you want to start to show that you can do that, because it gives you confidence that you're understanding the space as well. So there's that internal anxiousness of wanting to demonstrate value, as well as the satisfaction that the job is giving you what you expected it to.

[0:09:51] SF: Yeah. Given that Capital One's in a very highly regulated industry, you're going to have certain compliance and security considerations that a regular software company probably might not have. How do you end up balancing the necessary structure - and probably having to slow down and do the right thing - while still giving people the freedom to feel they can work autonomously, move quickly, and all these types of things?

[0:10:17] CM: Yeah. And this is one of the areas that really prompted me to join Capital One in the first place, because I was like, "How are they doing this so well?" And I wanted to come in and learn. There are some really great strategies Capital One employs to help enable agility while still maintaining that strong security posture and strong compliance. The strategy employed here, which I really love, is this idea of: let's make it easy to do the right thing. So across all of the practices, the platforms, the services, and the model that we operate in, let's give you the best practice and make that your default behavior. Instead of having to pick through which database to use, this is the one we recommend. And if that one doesn't meet your need, okay, there are things you can do to go beyond that, and we can get approvals and go through exception processes where needed. But by making it easy to just pick the thing, it doesn't become a question, and that helps with the pattern. There are lots of best practices built in. And then there's standardization. When something should be standardized across the company, the company standardizes, and then requires every team to comply with that standard. That creates migration and other work from time to time. But you get great lift, because you can put controls and best practices into that standard that then require less work from the development teams.

[0:11:36] SF: Yeah. And I guess it gives you some efficiency, too, when people move around in the organization, because there's one way of doing certain things. It's going to be familiar each time, even if you're solving net-new problems.

[0:11:46] CM: Yeah, absolutely. And what does happen, just like at any company, is that when there hasn't been a standard, a few different tools start to exist, and then we bring about our standard or our approach. And that's working fairly well, because it doesn't stop teams from starting that innovation or picking something in the absence of a standard. But it does prompt us fairly quickly to establish, "Well, what should be the standard here?"

[0:12:08] SF: How do you make that decision about what to standardize or centralize versus what to leave to a team's individual discretion? And even if there isn't some standard today and an individual team has to make a choice, how do you become aware that the choice has been made and that there might be a time when you have to decide? For example - and I'm sure we're going to touch on this a lot with the world of AI - if two years ago someone started experimenting with an early LLM, then at some point, obviously, you want to standardize. But maybe they were solving some very specific problem. How do you even know, become aware as an organization, that that's going on?
[0:12:44] CM: Yeah. You start to see a proliferation of tools or services that are similar, and that's when you start to realize there's overlap in capability. Then we'll typically task a team with finding the right answer for the case. And we typically pick this up when we see it as high leverage. Lots of tools coming in potentially prompts more work on security reviews and other things, because it increases the level of diligence we have to apply on approvals. But then, also, we look at things like: is there an open standard? As we look at telemetry, there's a lot that's standardized in the telemetry space, and picking a standard there simplifies the world for us. It can simplify tooling on the LLM front and elsewhere, too. When you're in an industry like ours, where things are rapidly changing - the change this year has just been so exciting and incredible - the more you tie yourself to one model or one approach, the more you're going to get boxed in. From the get-go, we've been standardizing on what our success criteria are and improving upon them each time, for every pilot that we do. And we've been using abstractions to think through, "Well, if this needs to be a coding assistant, for example, perhaps it uses this API or interface." And we're going to go through this review. Then we're going to use that same template if we introduce another, or another. We're getting some lift there. But that's an area in particular where I don't think anyone's declaring one standard right now. For now, I think we're still asking, "What's the right tool for the right job? And should we revise that with new information coming in?"
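A minimal sketch of the kind of abstraction described here: pilots code against one interface, so swapping the model or vendor behind it doesn't rework the harness or the review template. The interface and the vendor adapters are hypothetical illustrations, not Capital One's actual API.

```python
from typing import Protocol

class CodingAssistant(Protocol):
    """Hypothetical provider-agnostic contract for a coding assistant."""

    def suggest(self, prompt: str, code_context: str) -> str:
        """Return a suggested edit or completion for the given context."""
        ...

class VendorAAssistant:
    def suggest(self, prompt: str, code_context: str) -> str:
        # Imagined call to vendor A's API, stubbed for illustration.
        return f"[vendor-a suggestion for: {prompt!r}]"

class VendorBAssistant:
    def suggest(self, prompt: str, code_context: str) -> str:
        # A second provider satisfying the same contract.
        return f"[vendor-b suggestion for: {prompt!r}]"

def run_pilot(assistant: CodingAssistant) -> None:
    # The pilot harness and its success criteria stay fixed; only the
    # assistant behind the interface changes between evaluations.
    print(assistant.suggest("add input validation", "def handler(req): ..."))

run_pilot(VendorAAssistant())
run_pilot(VendorBAssistant())
```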
[0:14:15] SF: Yeah, I think it's tough when you're in these early innovation cycles, because it's hard to know what the right choice is. The right choice today could change drastically a week from now, with the way things are moving so quickly. You really need to be investing in technology and building whatever your solution is in a way that's adaptable to this type of change, or you're going to hurt your velocity down the road, because you become locked in and tightly coupled to one way of doing things.

[0:14:44] CM: Absolutely. I think one super exciting thing, though, with these agentic workflows and other tools happening in the industry, is that this idea of lock-in is really interesting. Because how locked in will you be if it's easy to migrate, now that you have tools that enable that migration? I'm excited about just how much it opens up doors for us. And we're doing a lot of that abstracting and keeping an open mind in areas where there's great innovation happening. We want to be getting the benefit of it, rather than just picking one and waiting.

[0:15:14] SF: Mm-hmm. Yeah, makes sense. I know you've rolled out an LLM-powered coding assistant. Could you talk a little bit about what went into that decision-making process? And then, what were some of the good things that you saw from this, and what were some of the challenges?

[0:15:29] CM: Yeah, maybe I'll speak about a challenge first. If you go back a year or so ago, to where we were as an industry, the messaging around coding assistants and AI tools carried a lot of fear, a lot of concern that "this will replace my job." And so there was a fair bit of messaging we had to do. And Capital One does this really well. We do it at multiple layers of leadership, including our CEO, speaking about, "Hey, we want to get lift in this space. We want to automate anything that isn't part of the creative problem solving of software development. Let's all adopt the tools that create that lift, and let's try it and see what works." Our approach was to really roll this out and encourage everyone to use it, to see if it works for them and what it works for. We went through a full risk review and compliance review, and all the reviews that we do, to run the POC and a pilot, then scaled it out and really started to track, "Well, who is getting benefit from it?" As part of that, we created some Slack channels and other things where we could surface who's using it the most and what they're discovering. It's really fun to watch people share their examples of what worked for them or how they used it. And as we've been rolling out other tools since then, we've continued to use that same approach of letting peers share what's working well and what isn't, so we can learn from each other. Because it is such an area where there's so much change happening at the moment.

[0:16:53] SF: Yeah. And I think, in my experience, I've seen a couple of things that can hinder a company's use of these tools. One is that they underestimate the actual learning cycle of how to use these tools properly - especially with some of the agentic ones, where you're perhaps doing more spec-based development, and it is a new way of thinking about your development. I think sometimes people can put in a prompt, not get what they want, and then write off the tool: "This doesn't work for me." Companies sometimes miss thinking through how to make sure that people know how to use these tools properly and get the value out of them. I'm curious to hear your thoughts on that. And then the other thing is, when it comes to developer efficiency, even if the coding assistants are working 100% as you'd want them to, the time that developers actually spend coding is maybe 20% of their job. There's all this other time where they're doing other things, whether it's design reviews, or thinking about architecture, or stuff like that. I think companies are sometimes surprised that the ROI ends up being different from their expectation, because they're thinking about 100% of the time, but actually, you're optimizing 20%. I'm curious how Capital One is thinking about that other 80% as well.
[0:18:13] CM: Yeah, that's great. And I think that's really great framing - the day-in, day-out job is not just sitting there coding. I think there's a fair bit to unpack there. On the training and best practices with some of these tools, we've found the more you can make it use case-centric, the easier it is for people to understand the value. I feel like a year ago, there was a lot of focus on how you prompt well. There's a lot more focus now on how you tweak your plan to then get the behavior you want to see happening. Even the training from a year ago is probably not as relevant as some of the training we'd roll out now, because it's just changing so much. But the use case-based training is really great. And we're really thinking about: for this type of task, this is the best tool for it - becoming a bit more opinionated. The more we use different approaches, the more we're able to identify that this tool is great for this right now. Maybe that'll change, but for right now, this is what we recommend. A Spring Boot migration or upgrade might use this tool, versus test case generation might be great with that tool. That's an area we're focused on. And then there's measuring the impact. There have been some interesting thoughts around, to your point, how much time each developer spends programming versus the other tasks they might do. So there was some really interesting stuff in the industry around what you measure. And I'm very much anti some of it. We don't measure lines of code produced, because perhaps the worst coding assistants or agents might produce the largest amount of code, and that's actually not a measure of value in any way. We were looking at things like how many suggestions you accept as an interesting signal. But a big part of this is: is the tool providing value to you, and over time, is it creating lift? Maybe in the throughput of a team over a long period of time - a year, say - you might be able to see some impact. Quality, you continue to measure, to see if it's making the change you want to see. But the qualitative questions - "Are you using this tool? Is it providing value? What do you use it for?" - are still the bigger drivers. If the quality continues to either stay the same or get better, and the team is able to do a little bit more, then that's really the outcome and the lift we want. And hopefully, they do a bit more of the more interesting programming work. As much fun as migrating from an old version to a new version or resolving an npm dependency might be, that's not where I want to spend my time - and I hope it's not where most engineers want to spend their time - or vulnerability patching, or any of these other things that are not where their expertise and skill set really show up.

[0:20:52] SF: Yeah. I mean, I think a lot of development ends up being these kinds of tasks of pushing and pulling data. You're pushing data into something, then you're pulling data out of something, then you're transforming it to render some view. That's not necessarily the most exciting work all the time. Or even - I remember when IDEs like Eclipse brought in, "Okay, I have a bunch of private variables. Now I can automatically generate my getters and setters." It's like, "Okay. Fantastic." That was very helpful, because writing getters and setters is not the thing I want to spend 30 minutes on. If I can do that in 5 seconds, that's fantastic. But are you seeing that - in terms of it giving people more time to tackle more complex tasks, or even shifting what the role of a developer is, where it's less about hands-on keyboard and more about deep problem solving and thinking at maybe a higher level of abstraction?
[0:21:48] CM: Yeah, I think we're still early. But the things we're learning so far are that for those who are newer in their career journey, it's providing tremendous lift, both in understanding code bases, learning new languages and new frameworks, and understanding things that already exist in the codebase, like the dependency trees. There's huge, huge lift in that part of it. For a lot of the remedial tasks - the basics on style, and linting, and formatting - the improvements you can make are low-level but provide great value and consistency for others. It's a great lift. I think we're starting to see that climb up the stack - refactoring and other opportunities. But from what we're seeing, it's still a bit hit-and-miss at times. So we very much have the human in the loop in all parts, and we still have that code review that goes on as well, to really ensure that quality remains high. And I think the learning and knowledge piece is really valuable. Then, as we're getting into more of these other tools, we're starting to see mass improvements - migrations, things that are very high value but low on creativity - can be automated really well. So we're getting lift on the things that are high value for a developer. I think we're still pushing on getting lift for the things that push them fully into that creative problem solving.

[0:23:13] SF: Mm-hmm. Yeah. One of the things you mentioned there was the code review process. And I think one of the reasons why agents - and even some of the other things we're doing around using these models in the workplace - why engineering's kind of been the tip of the spear for that, is in part because we actually have some guardrails around what gets produced. There are basically built-in quality checks. If it's code that can be compiled, I can at least compile it and make sure it's syntactically correct. I can run it through unit tests. Presumably, humans are probably going to be involved in the review process. It's probably going to go through multiple stages before it ever hits production. And even when it hits production, I might be doing some sort of canary-based rollout. There's a lot that happens before it lands in a customer's experience, essentially. Whereas in a lot of creative work, it's harder to have these kinds of deterministic ways or processes in place to validate that what's being produced is correct. I think that's a huge advantage here, because evaluating the result is hard in a lot of domains, whereas engineering has some built-in ways of doing that. The other reason I think engineering is the tip of the spear is that engineers historically are very good at investing their time in ways of taking work off their plate - automating tasks so they can spend time on things other than these rote tasks. And this is a superpowered way of doing some of that.
[0:24:42] CM: Oh, 100% agree. And on the qualitative and validation tasks - the testing and other things - there's so much lift we can get. Visual testing, and the behavior of clicking through: those feature-level tests are more challenging to write, and they're really easy for agents and other tools to handle, which will again provide lift. But you need some balance in the equation today. You don't want the agent writing all the tests and all the code. There's no balance in that setup. Figuring that out as we continue to invest in this space is really exciting.

[0:25:16] SF: You talked a little bit there about these tools being helpful to a junior developer, a new developer who doesn't necessarily know the code base. And I would imagine having a non-judgmental, psychologically safe assistant to ask questions of is probably really, really helpful. But do you see a difference in terms of people's use of the tool, or where they find value, depending on their seniority? Where maybe someone who's more senior really knows the system, they know how they want to implement things, and maybe they see less value in the AI-powered assistant?

[0:25:51] CM: I think it's still early days. And I think some of this depends on the quality of the model you're getting. The more senior you are, the more you might be expecting from that response to create the lift. Whereas earlier in your career, you might get that lift from a perhaps less great response - one that's still good enough to help you understand what you might need to do. But as we're unpacking some newer tools in this space, beyond the coding assistant piece, we are starting to see more lift for the senior engineers. And I think the concept of planning mode, and being able to tweak the plan effectively, will also create great lift there. We're starting to see some green shoots, but we'll see where we are on those in six months and a year.

[0:26:34] SF: Yeah. I mean, I know all this is early, but do you think, looking ahead a little bit, that this is going to substantially change the nature of a developer's job? It's not that the developer job is going away, but does the nature of it start to change?

[0:26:49] CM: Yeah, I think it already has. Even if we think about the last year's worth of development - it's easier to look back confidently, of course - what we've learned is that you can get up to speed on a codebase pretty quickly. You can tweak and validate things in a way that you weren't able to before. And you can improve the quality and the frequency of what you're able to produce by using a coding assistant. That's exciting and incredible. It means you've got another person almost sitting next to you, working this through with you. And the next paradigm shift is that you've got another person doing the work for you. So I do think it's fundamentally changing. I think this comes down a lot to learning that the value of the human in that loop is really around judgment and understanding of direction - being able to articulate that clearly and understand the code well enough to ensure it's going along that path. And this is an area where I'm super excited about the innovation over the next couple of years. When I think about learning assembly myself, then learning C++ - every time there was a big shift there, it really changed what job you were spending time on. The day I stopped having to worry about memory leaks and download size, those were great days for me. And this feels like another big shift, maybe even bigger than that. It is, I think, changing the role of the engineer and the expectations of where they spend their time.
[0:28:18] SF: Yeah. I mean, you're right. This isn't the first time that we've had these jumps in level of abstraction. This is maybe a larger leap than we've seen historically, versus assembly to C, or C to C++. But with each of those transformations, there's also, I think, a pocket of engineers who resist the transformation, because they feel like you're losing some connection to what's actually going on underneath the covers. And with that, you're either diminishing the value of the engineer, or it creates too much of a disconnect between what's really happening at the low level of the system and what you're actually implementing. Do you have any thoughts on that?

[0:28:56] CM: Yeah. I think if you're an engineer who values resolving npm dependencies or migrating from an old version of a Java library to a new one, then this evolution is not for you, right? This is going to take you out of that space. I think that type of engineer may also love debugging issues. That's changing as well, but there are still areas where you might really enjoy going after roles that meet those criteria. I think if you're challenged by this, it's probably because you love some part of the job that can now be handled in a way it couldn't before. And so, identify where the joy is and shift to a role where you get that joy - that would be my prompt there. It's worth seeing the lift you can get out of it. I talk about it like having an open-book test and choosing not to use the book. It just feels like you're missing this big opportunity to get the lift. And here at Capital One, we are encouraging our engineering teams to get all the lift they can. We're doing it in a secure way. We're doing it with quality, with our validation, with our checks in place. But we want to give our teams all the lift. Why wouldn't we? Others out there are doing it, and so you get the benefit of the industry rapidly innovating. So that would be my prompt: if there's fear, explore and see how much you can learn and adapt quickly. And I think the majority of engineers who came in with the mindset you called out before - I want to automate the things I don't need to do anymore, so I can get out of doing anything that's not fun - that's the same group that's really going to continue to try and learn from each new innovation coming out here.

[0:30:38] SF: Yeah. We talked quite a bit about using these AI tools for coding efficiency. But I also mentioned that's like 20% of the job. So, how are you thinking about that other 80%? Are there investments you're making there to also improve or increase efficiency, or maybe take certain tasks - the not-fun ones - off of people's plates?

[0:30:58] CM: Yeah, that centralized model that we're really working on is aiming to help us here. If I think about the things that aren't code that a developer spends time on: it can be triaging issues in production. It can be getting code to production in the first place - the deployment, the test coverage, the predictability of their pipeline. It can be understanding customers, understanding what the feature is in Jira. And then there's the coordination across teams as well. As a company, we're definitely looking at productivity that matters to the business and AI tools that support it. You can imagine we've got quite a few different things in play, both POCs and in use, that get us lift from AI on other aspects of the job. Think about how you're writing your docs and how you can improve upon that - there's definite lift we can get from AI tools in that space, just as an example. The opportunity on the pipeline piece is really exciting, because this is about: how do we ensure the pipelines are predictable? How do we help with any errors or issues in a deployment? And how do we make selective tests run for each deployment? What are some great innovations we can bring there to help, so a dev team doesn't have to spend a lot of time on their deployment? For each part of the SDLC, it's really important for us to focus in on how we make it more efficient, or more predictable, or at the very least centralized, so the engineering team doesn't have to spend time on it. Because the more time they spend on other aspects of the SDLC, the less time they're spending on their code. And we want to push them back to the code at every opportunity.
[0:32:38] SF: Okay. Between all these pieces of automation - obviously, you want to have a strong engineering culture, and now automation is taking a step-function leap with generative AI - what does peak developer productivity look like for organizations, either this year or looking ahead to next year?

[0:32:59] CM: Yeah, I think peak productivity is: I have an idea, and I can get it in front of a user that day. That's really the dream of what a lot of agile development was about. And from a Capital One perspective, I can get it tested, validated, and securely deployed, consistently, with all of our controls and behavior in place. So there are very few touches over time where you actually need a human as part of that. And there's a lot you can do with these new tools to automate or validate parts of the deployment. "Hey, you have these coding standards. Let's confirm that you've met them all. Let's confirm you've passed your tests. Let's confirm all of these steps have happened before it gets deployed." And this idea that, "Hey, we have a great idea. Let's see it in the hands of a customer." That's the thing I'm super excited about, because that's what it's all about at the end of the day.

[0:33:49] SF: That's a good way to think about it. I think you're 100% right. It's really about what impact you're having on whoever your user base is. That is really the driving force behind all this.

[0:33:58] CM: Yes, absolutely.

[0:34:00] SF: What do you think is one common trap or mistake you see engineering leaders make when they're tasked with fixing developer productivity?

[0:34:10] CM: I think it's really easy to track every possible metric, or to track things that are going to drive the wrong behaviors. Because, to the earlier point about what you track - people drive to what you track and can gamify it. Things like number of PRs per engineer.

[0:34:28] SF: Yeah. Just make smaller PRs.

[0:34:29] CM: Yeah, make smaller PRs. Now, with all these great tools, I can generate a whole bunch of PRs for you.

[0:34:36] SF: It's like, Sean submitted a thousand PRs today. Wow. Amazing.

[0:34:40] CM: And we're looking at a whole bunch of: what's the right quality bar that still enables progress? Never having an escaped defect or never having an error - that's a really extreme quality bar. So what's the right way to measure to push on that? Each of the measures you pick really does matter for where your team will focus. It's easy to call out the bad ones; it's much harder to get the right ones. But getting to this idea of, "Okay, let's make it easy to get from code review to production quickly" - there are lots of measures we can use there. I'm super interested in exploring, more and more, how we get from story started to story delivered. That's a really cool one to start to drive the right behavior. And then, of course, your point about how much time devs are spending in code is a really interesting one, because it's more about: are we using their time well and giving them the information they need to do the job well? Which means meetings, and architecture reviews, and all these discussions. The worst thing to do there would be to track calendars or anything like that. The best thing would be to have your engineering managers and others really spending time on, "Are we using our time effectively? Are our meetings effective?" There's nothing new there. That's a really important thing to check in on regularly, to make sure we're getting the benefit of the team. And that's also an enjoyment factor. If a meeting is effective at getting what you need, that's a great meeting.
[0:36:05] SF: Yeah.

[0:36:05] CM: And if they're not, evolving them as well.

[0:36:09] SF: Mm-hmm. You said from story start to story end. What do you mean exactly by the story?

[0:36:14] CM: Yeah. I would say, from when you click start on your backlog item in Jira to the time it's in production. That's your overall cycle time. And that gets us closer to the idea we were talking about, of where you want to be: this idea got created, and it got deployed to production in front of a user. That encompasses a lot of the creative problem-solving, right? So that's also one that could be gamed or might not be done well. But you do want an idea of how long it takes from when we start something to when we deliver it. There are some interesting thoughts around that investment time. When do you recoup that investment? If it takes me five days to develop a feature and I get it into production on the sixth day, how many days does it take until that feature was actually valuable enough to be worth the five-day investment? There are interesting schools of thought there, if you get really down to the heart of the value and the ROI of doing the development in the first place.
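For illustration, the story cycle time just described - click start on the backlog item, stop the clock when it's in production - might be computed like this. The event data is a hypothetical stand-in for what an issue tracker and deployment system would export.

```python
from datetime import datetime
from statistics import median

# Hypothetical events per story: (story_id, started, deployed_to_production).
stories = [
    ("PAY-101", datetime(2024, 7, 1, 10, 0), datetime(2024, 7, 3, 16, 0)),
    ("PAY-102", datetime(2024, 7, 2, 9, 0), datetime(2024, 7, 9, 11, 0)),
    ("PAY-103", datetime(2024, 7, 5, 13, 0), datetime(2024, 7, 8, 10, 0)),
]

# Cycle time in days: work started on the backlog item to the change
# running in production in front of a user.
cycle_days = [
    (deployed - started).total_seconds() / 86400
    for _, started, deployed in stories
]

print(f"median story cycle time: {median(cycle_days):.1f} days")
```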
[0:37:09] SF: Yeah, I think that's interesting. It's kind of like, in the world of developer experience, you look at things like time to first API call as a measurement of success. The best companies in the world are able to do that within a handful of minutes - you can essentially touch down on a page, and within 3 to 10 minutes, you're making that API call successfully. And then there are other places where that might take multiple days. And that directly impacts the satisfaction and the uptake of whatever that API is.

[0:37:43] CM: Yeah, absolutely. And a lot of that comes down to how self-service your team can be. Do they know where the tools are? Can they access them? Do they have the right permissions to find them? Do they know the best practices for that tool? There's a lot around that knowledge sharing and that discovery of where the information is, especially when someone's first starting. And after they've been there for a while, are they still able to find it when they need it? Is it still surfacing in a way that makes it easy for them to get to those metrics you just mentioned? All of those are great ones. And then, I joke that I don't think developers are ever happy. Or maybe, put it this way: I don't ever want to be responsible for their happiness. I do want to be responsible for increasing their time in developer flow state, or increasing their joy in their craft. So regularly asking, polling, and learning what the biggest challenges are, and where they're spending their time in a qualitative sense, does drive great insights on where I can help improve that dev experience. And the worst is when you get an error message that you don't understand and don't know how to action. Those are always some of the easiest ones to help impact and make change around.

[0:38:53] SF: Mm-hmm. Yeah. And then, in terms of advice around moving from a smaller organization to a large organization, what would you advise engineering leaders making that kind of transition?

[0:39:04] CM: Yeah. Even just culturally, different organizations operate differently - I think this advice would be true moving from big to big, too, because every company is a bit different in how it operates. I think the first thing you really have to do is get to know how that company operates. What are the norms around communication? What are the norms around behavior? What are the norms around prioritization? Who communicates, and how often? What is tracked? What surfaces at what level? Get a sense of what matters to the company on the metrics piece. But then, also, what's the communication style? How bad news is delivered is a really interesting one, just to get a real sense of whether it matches what you're used to or differs. And what are the things that are really celebrated? That, I think, is another really good one for understanding the cultural shift. And one shouldn't assume large means slow, because I don't think that's been true - and I've jumped around a bit, company size-wise. Large just means that if you can jump on an established pattern or routine, you'll get a lot more lift. If you want to try something completely new, it'll be a lot harder at a large company. But if you can identify the routine that already exists, jump onto it, and add to it or iterate on it, then you can really get that speed of change happening as well.

[0:40:25] SF: Awesome. Well, Catherine, we're coming up on time, but is there anything else you want to let our audience know about?

[0:40:32] CM: I think the exciting thing in this space is that there's a lot in where we're heading as an industry that should create more developer flow, and more developer joy in that flow. And if you're not seeing that yet, or if you're working somewhere where that innovation isn't being championed, definitely look around, because there's lots of opportunity in this space that should encourage you to increase your skill set and really be valued for your judgment and your experience. That's what's got me super excited about serving the large audience of developers we have at Capital One. I think this is a really exciting time to be a developer.

[0:41:13] SF: Yeah, absolutely. Well, thank you so much for being here. I really enjoyed our conversation. Cheers.

[0:41:17] CM: Thank you, Sean. Really appreciate it as well.

[END]