[0:00:00] JMC: Hi, Christina. Welcome to Software Engineering Daily.

[0:00:02] CF: Thank you for having me.

[0:00:04] JMC: We're here to talk about a really interesting topic, and what I think is an ever-running source of controversy, analysis, and fascination: developer metrics, productivity, throughput, developer experience, and how all of that adds up for the business, moves the needle, and how software engineers and teams contribute to the bottom line of any business. That's a very rough description of what we're going to talk about, because we're going to touch on really different, granular aspects of it. But I'm interested in what brings you here. Could you introduce yourself, and also Uplevel, and maybe glance at one of the latest pieces of research that you've done at Uplevel?

[0:01:02] CF: Yes, absolutely. Hello, nice to meet you. I'm Christina Forney, I'm the VP of product at Uplevel. I have a background as an engineer. I started my career as a developer working on internal tools and systems, and I have ebbed and flowed between product and engineering throughout my career, but have largely focused on the dev tool space. First, building internally for companies, and then turning my focus to how to build developer tools for developers. Uplevel itself is an engineering intelligence platform. What does that mean? It means we look at how developers are working in order to better understand how to help them have a better experience. We are driving better developer experience, we are driving greater levels of productivity, and we're doing this by creating transparency across organizations into what's really going on. We're helping to answer the questions: What is my team doing? Are we working effectively? Are we working efficiently? Are we working on the right things? How can we improve together?

[0:02:13] JMC: Because that's the main challenge. It may sound surprising, because of the things one takes for granted about software, and this is probably because our vision is skewed, in a good way, by open source. Developers in general work in the open, they contribute code on a daily basis, and therefore their contribution to the business, to the company, to the project, to the team is fairly clear. Yet that's not true, for two reasons, I would say, though you probably have broader, deeper answers. One is that code contributions are not the only work they do. The second is that a lot of their work is hidden beneath the tip of the iceberg. I guess my broader question is, what makes it so difficult for an individual contributor, a developer, to be transparent about what they're working on?

[0:03:11] CF: Yes. I think, inherently, developers know what they're working on. But that gets abstracted as you increase the size of the organization, as you increase the complexity of a codebase, as you increase the complexity of an organization. You have so many systems that it obfuscates what's really happening. Then you end up with these disconnected leaders, who have their initiatives, their high-level goals, the most important things that the business needs to get done, and it's completely disconnected from what's actually happening within the software development teams.
We did a study recently, bringing up that research, where we surveyed software engineers and asked them what they thought of CTOs, and whether CTOs understand what's going on. A vast majority of them believed that CTOs are making decisions without understanding the implications. They are disconnected from their teams; they are disconnected from what's going on. And a third of them believe that the majority of engineering roadblocks aren't even noticed by leadership.

[0:04:28] JMC: How come? Could you drill down into the roadblocks? Are there any typical definitions of roadblocks? Which ones are the most frequent, and why are they difficult to communicate? Or are they, by nature, obfuscated in the reporting process?

[0:04:47] CF: Yes. What these roadblocks are varies greatly across organizations. But consistently, what we see is that CTOs are making strategic decisions without understanding the negative implications. They're making strategic decisions, and 56% of developers say that CTOs don't understand how those decisions are having a negative impact on the team. Fifty-one percent of developers think that CTOs are moving people around onto different teams, or tasks, or initiatives without understanding the implications. What that says is that there is an unseen impact by leadership on the business, and on the way developers are able to best drive value for the business. An interesting example here: again, leadership thinks these are the most important initiatives the organization is working on. In reality, there is one team that's responsible for a big bulk of that work. But that team is getting context-switched all the time; they're spending so much time on inbound customer support requests, keep-the-lights-on kind of work. They're only really spending maybe 5% to 10% of their working time on these really important initiatives. What happens, and that's the disconnect that gets created, is leadership is frustrated: why isn't this thing getting done? It must be because the developers are incompetent. You get this negative, reinforcing cycle of mistrust. Whereas the developers will say, "Nobody understands why this is so hard. Why doesn't the leadership team give us more support and help? Something is wrong here. We need more reinforcements. Maybe we just need more support to get some of this tech debt handled, and that could be helped by another team working on that area of focus." But it's this reinforced cycle of mistrust, where developers don't trust leadership to support them, and leadership doesn't trust engineers to get the right work done.

[0:06:55] JMC: I'm wondering now if frameworks like DORA metrics have provided any, at least partial, bridging of this gap. Have they brought any clarity? Have they provided a framework of discussion for the disconnect between senior leadership and the individual contributors or the team leads? Have they helped at all, in your opinion?

[0:07:23] CF: I think DORA metrics are a really helpful start, but they're only a small piece of the puzzle. DORA metrics help you understand: are we shipping, are we getting work out, are we moving, are we delivering? But are we delivering the right thing, are we doing it in a sustainable way, are we spending too much of our time in meetings? Maybe I only really have 50% of my capacity, because the other 50% of my time is being taken up by meetings. You have to look at the more holistic picture. DORA is a great stepping stone if you want to understand: are we moving? But you have to look at a broader perspective to understand what's really going on. You have to understand all of the moving parts and pieces of the organization.
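As a rough illustration of what the DORA metrics do and don't cover, here is a minimal sketch of how the four metrics are commonly computed from delivery records. The record shapes, field names, and numbers are hypothetical for illustration, not Uplevel's schema or any particular tool's.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical delivery records: (commit_time, deploy_time, caused_failure).
deployments = [
    (datetime(2023, 5, 1, 9), datetime(2023, 5, 2, 15), False),
    (datetime(2023, 5, 3, 11), datetime(2023, 5, 5, 10), True),
    (datetime(2023, 5, 8, 14), datetime(2023, 5, 9, 9), False),
]
# Hypothetical incidents: (start, resolved).
incidents = [
    (datetime(2023, 5, 5, 10), datetime(2023, 5, 5, 16)),
]
period_days = 30

# Deployment frequency: how often we ship.
deploy_frequency = len(deployments) / period_days

# Lead time for changes: commit to running in production.
lead_time = median(deploy - commit for commit, deploy, _ in deployments)

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)

# Mean time to restore service.
mttr = sum((resolved - start for start, resolved in incidents), timedelta()) / len(incidents)

print(deploy_frequency, lead_time, change_failure_rate, mttr)
```

Note that nothing in these four numbers says whether the work was the right work, or what it cost in meetings and focus time, which is the gap discussed above.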
[0:08:10] JMC: I mean, this might be a bit too challenging a question. But if software engineers and developers "only" care about coding, debugging, being in the zone as much as possible, doing deep work, and maybe reading documentation, right? Those are the four key things they wish they would be measured on. And senior leadership only cares about contribution to the bottom line, which I would equate to DORA metrics or a DORA dashboard. Is there any way in which that potentially narrower gap can be closed? Or is this approach missing other aspects, the ones you just mentioned, or others?

[0:09:02] CF: Yes. What was really interesting to me in this research we did was that developers actually did want to be tracked. They did want data to be shown, and they did want leaders to use data to make better decisions across the organization. This was a huge majority, 91% of them, saying that they want metrics to be tracked. They want to know that leadership is looking at the right things, but they're not happy with the actual metrics being looked at. Typically, leadership is missing that piece of the puzzle. Developers do want leaders to look at things like: how much deep work time am I getting? Am I getting enough time to focus? There is a huge correlation between that and my ability to deliver value, to write code, to do the things leadership is asking me to do. If you're making me sit in all these meetings, I'm just not going to be able to get that work done. You need to understand the cost of what you're putting me through. They also want leadership looking at: am I going to burn out? Am I being asked to do too many hours of work? Because what we see in our data at Uplevel is that there's a very high correlation between always-on metrics, meaning I'm working outside of a normal day, and deep work. If I'm not getting enough time to do my deep thinking and my deep work during my normal business day, I'm going to extend that time, and I'm going to work extended hours in order to get that focus time after hours. That is a very high indicator of burnout.
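The focus-time signal Christina describes is usually derived from the calendar. Below is a minimal sketch under illustrative assumptions: hypothetical meeting times, an eight-hour workday, and a two-hour threshold for what counts as a deep-work block. None of these are Uplevel's actual definitions.

```python
from datetime import datetime, timedelta

def deep_work_blocks(meetings, day_start, day_end, min_block=timedelta(hours=2)):
    """Return the free gaps in a workday long enough to count as deep work.

    `meetings` is a list of (start, end) tuples; the two-hour threshold is an
    illustrative assumption, not a standard.
    """
    blocks, cursor = [], day_start
    for start, end in sorted(meetings):
        if start - cursor >= min_block:
            blocks.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_block:
        blocks.append((cursor, day_end))
    return blocks

# A day with only three hours of meetings, spread so that most gaps are too short.
day = datetime(2023, 6, 7, 9)
meetings = [
    (day, day + timedelta(hours=1)),                                               # 9:00-10:00
    (day + timedelta(hours=2, minutes=30), day + timedelta(hours=3, minutes=30)),  # 11:30-12:30
    (day + timedelta(hours=5), day + timedelta(hours=6)),                          # 14:00-15:00
]
# Only the 15:00-17:00 gap qualifies: low meeting volume, but fragmented focus.
print(deep_work_blocks(meetings, day, day + timedelta(hours=8)))
```

The example day has plenty of nominally free time, but only one block long enough for deep work, which is the "not many meetings, but they wreck my focus" pattern that comes up again later in the conversation.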
[0:10:50] JMC: I'm wondering out loud, and putting this to you, the expert, whether there's any solution to what seems to be a negative flywheel effect for software engineers. The supply of software engineers to the labor market is way shorter, and it grows more slowly, than the demand for them. That works in favor of software engineers in general, and especially senior engineers, because they earn more; but in general it raises the salaries of all of them, by the simple equilibrium of higher demand and shorter supply. I'm wondering if the constant upward push of software engineering salaries, on average, is putting so much pressure on hiring that, in turn, CTOs and the senior leadership of the engineering business unit end up wanting to extract too much from engineers and increase throughput, even at the cost of what you just described, like requesting too much work from software engineers and so forth. If that's true in your view, and you could explain it properly, much better than I do, do you think there's a solution to that? Because from what I described, it seems like this is going to get worse and worse: software engineering salaries show no sign of going down, and at the same time, especially in an economic context like this one, CTOs are not going to stop wanting to get the most out of their engineering teams.

[0:12:38] CF: Yes. I'm going to try and restate this in a little bit of a different way. R&D costs are some of the largest costs for organizations that are building software. Very often leadership, especially boards, who are not part of the engineering leadership group, want to know: am I getting the right return on my investment? How do I have confidence that your team is delivering the maximum value for my organization? Now, if I'm asking my developers to spend a massive amount of their time in meetings, that is time being taken away from value-creating work. I'm not advocating for zero meetings, because you have to have meetings, you have to have synchronous collaboration. But if the bulk of my time is being spent in meetings, that's a problem. There's a balance between how much time I'm spending collaborating synchronously and my ability to get enough focus time.

What we typically see is: I'm making this massive R&D investment. Maybe we just increased our team, we went from 100 people to 150 people. Now it feels like we were getting 75 people's worth of work done before, and now we're getting 80 people's worth of work done, and I have massively increased the number of folks. What's going on there? Why did I not massively increase our throughput or output? There are costs, onboarding new developers, the number of meetings, the chat interruptions, that slow down an organization. And there are things you can do to increase the capacity of the team. What I think we typically see is that many organizations are really only getting maybe 50%, 60% capacity out of their team once you remove the overhead. Start with out-of-office time; lots of people don't take that into consideration. I expect that I should be getting 100 people's worth of work, but the reality is you have to take 15%, 20% off the top just for out of office, sick days, things going on, life. Are you planning to that capacity? Then you have to take another 20% off the top of that for meeting cost. I am in meetings, I context-switch, I went to a meeting, I only had an hour, I went to another meeting. In that one hour between meetings, I'm not getting quality work done, because there's not enough time to really get into flow and into that harder thinking. There's a cost to the business, and you're reducing the net capacity of your team overall. What we want to look at is how to increase the capacity of your team so you can get more done within the hours you have. It doesn't mean asking more of people. Circling back to what you asked initially: these are high salaries, and we want to push these people really hard. But the reality is, we're not giving them the opportunity to give us their full capacity, because as a business we've put barriers, and roadblocks, and toil, and hard things in the way of them delivering to the maximum potential that they have.
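To make the capacity math concrete, here is a back-of-the-envelope version of the calculation described above. The 15% and 20% figures are the ones cited in the conversation; the extra fragmentation loss is an illustrative assumption added to show how the losses compound toward the 50-60% figure.

```python
# Back-of-the-envelope net capacity, compounding the rough overheads discussed above.
headcount = 100                  # engineers on paper
out_of_office = 0.15             # vacation, sick days, life: ~15-20% off the top
meeting_overhead = 0.20          # time spent in meetings
fragmentation_loss = 0.10        # illustrative: sub-hour gaps between meetings that never become real work

effective = headcount
for loss in (out_of_office, meeting_overhead, fragmentation_loss):
    effective *= (1 - loss)

print(f"{headcount} engineers on paper ~= {effective:.0f} engineers of net capacity")
# 100 engineers on paper ~= 61 engineers of net capacity, in line with the 50-60% range above
```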
[0:15:53] JMC: One thing that, well, what are the consequences? One of the insights from the study you mentioned didn't surprise me, because I've conducted research myself on this point. I did so because a product I consulted for was requesting access to behavioral code metrics that are quite invasive, right? I was framing it as: this is not for the product development team, but rather for your boss. It's your boss who is interested in how much time you spend as a software engineer doing deep work, how much time you spend collaborating, documenting, working on legacy code, and so forth. What you said correlates with what I found, which is that most of them, to my surprise, and you said, I think, over 90%, are willing to have their boss, metaphorically, looking over their shoulder, tapping them and saying, "You're doing the right thing, go ahead," or, "You're not doing the right thing, try refocusing in this way." They're willing to give them full transparency. So that didn't surprise me now; it did surprise me when I conducted the research, but now I see it in the same light. What did surprise me, in fact, are the insights your study revealed about async. I thought the large majority of the population studied would be completely in favor of async, but that's not true. You hinted at it about a minute ago, that meetings are required, maybe in a bigger dose than I thought. Could you elaborate on the insights you got from your recent study?

[0:17:47] CF: Yes. This was really surprising to me as well. I think what we're seeing in this data, and this is my hypothesis, is a kind of reckoning. The whole world had to skew towards async. A lot of people found that freeing, and they loved it, and they were able to actually get their focus time, because someone wasn't spinning around in their chair and tapping them on the shoulder. They were able to focus in ways they never had before. They didn't have to commute for hours on end and waste all of that time. There was a really positive flow as the world went to remote working. I think we're now having a bit of a pullback. What we saw in the data is that roughly, not quite, a third preferred async communication, a third preferred synchronous communication, and a third preferred a mix of both. Actually, it's a little bit skewed: almost 40% preferred a mixture of both async and synchronous communication, with 35% preferring synchronous, and asynchronous dropped to only 27% thinking it's best. I think there's a little bit of a reckoning and a resetting. But the reality is, everyone likes to work in different ways. We see this split across the board, and the slight bias is that 40% think both are necessary.

[0:19:22] JMC: I'm wondering if the study conducted by Uplevel captured anything about, well, what I'm getting from my own research, and from the street, from what I hear in the street, is that attrition levels are quite high, despite what I just said about salaries going up. You would think that, on average, a highly paid job role like software engineer, even juniors, would have long tenure times, but it turns out that average tenure times are being reduced quite dramatically. We've hinted at two potential reasons that we've just described. But I wonder if Uplevel's study captures anything about the nature of software engineering projects becoming more complex over time.
It seems like software engineering was easier 10 years ago: microservices, the cloud, the stack being completely revamped every now and then, no sign of fewer programming languages being used, no sign of less infrastructure, or of infrastructure becoming easier. Kubernetes, for example, is becoming pervasive, and yet every single software engineer out there will say, "I don't want to touch the internals of Kubernetes with a 10-foot pole." Maybe you don't have hard insights on this, but if not, would you have any proxies from your study about that?

[0:21:06] CF: Yes. I think complexity is going in both directions. With added layers of organizations, we now have these legacy systems that are getting really, really hard to maintain. There are a lot of transformation initiatives going on across the industry. A huge part of what we're seeing is that, basically, every company is now a software company. Even if you're an automotive company, or a bank, where the primary thing you're selling is not software, you are still now a software company, and you have to learn how to do development. That might be outside the core competencies of how your business operates. So we're seeing this shift of every company having to learn how to build software, and how to build it well. We have this complexity, we have this big learning curve, and a lot of companies on a transformation journey. And the acceleration of the cloud is really not that old; it's only really been the last 10, 15 years. The opportunity for what companies can do has just grown massively. I think we're seeing a big shift overall, and it's happening very, very quickly compared to the speed at which we were developing, iterating, and learning before. When you were restricted by hardware, and the cost of hardware, and the cost of infrastructure, and you had to maintain these massive server farms and do all of that work, it slowed down progress. With what seems like unlimited compute power, we're now able to take on more transformational opportunities. That adds layers of complexity, because there's just so much happening, and so many companies trying to do new and different things and try out new technologies. Not only do developers feel like CTOs don't know what's going on, but 96% of developers say they don't know what the leadership [inaudible 0:23:20]. It's connected. It goes both ways: developers don't know what the important initiatives are, and they're trying their best to make the right decisions for the business. But if they truly don't understand what's going on, there's just this massive disconnect between what's happening, what needs to happen, and perception across the board.

[0:23:46] JMC: We've now established and described mostly the bottom end of the gap, right? But we've also touched on why, and please feel free to elaborate on this, CTOs are missing things and making strategic decisions without the real-world data that lies beneath them. I wonder why they do so. Could you elaborate on that, maybe with an example in which Uplevel has helped bridge this gap in either direction: it helped software engineering teams understand what leadership was doing strategically and factor that in, or the other way around, or both ways? Could you give us an example of that?
[0:24:31] CF: With developers and development teams, especially inside large organizations, one of the things we see is that there's just inconsistency across the entire organization in how we build, how we develop, what tools we're using, what our processes are. It's vastly different across an organization. When you have that many disparate sources, it makes data collection really, really hard. You have to pull together information not just from things like my code, my project tracking tools, our calendars, our Slack. So much work is happening in Slack now.

[0:25:08] JMC: Oh, yes.

[0:25:10] CF: I'm not talking about four sources. I'm talking about teams that have upwards of 10 different code repository tools, where we're actually building software, where our code lives. We have multiple types and instances of project tracking tools. So you can't just go to one place to run that query; there are many sources. Then, on top of that, you're not able to derive combined insights from them. It's just very hard as a leader to truly understand where all of this work is happening. That's one of the things that Uplevel provides, and we do it in a way where we're providing that same transparency across the entire organization. We believe that the best way to lead engineering organizations is through transparency, by building trust, by helping create visibility across the layers of an engineering organization. Any developer within your organization should understand what metrics the leadership team is looking at. Leaders should be able to see how the organization is spending its time, and how they can start to advocate for change in support of the engineering organization.

An example here: maybe we're seeing that our engineering organization isn't spending too many hours in meetings. Our volume of meetings is good, but we're not getting enough deep work. What that tells us is that, very likely, we just have really bad meeting hygiene. When meetings are scheduled is really suboptimal. I'm not getting any deep work time, because I'm interrupted once an hour, so I don't get into flow, I don't get that focus time. I'm not in a ton of meetings, but they are just wrecking my focus. An engineering leader could see that and say, "Okay, as an organization, I'd like to roll out this policy of no-meeting Wednesdays, or no-meeting Fridays," or some policy they're trying to roll out. That's going to require work. I have to now go update my calendar, and that's annoying. As an engineer, maybe I'm on a team where I don't have a problem with my meeting schedule and meeting hygiene. I find this incredibly annoying, and I think it's frustrating and stupid, and I think leadership is disconnected. But the reality is, I'm disconnected from what's happening across the organization. By creating that transparency, as a leader, I can say, "Look at what's going on. Your peers, the engineers of our company, are not getting enough focus time. Together, to make the experience better for everyone, we are going to roll out this policy; we're going to take meetings off of this day." Now I'm much more motivated. Even though I had to do the work, I had to go change my own calendar, I'm doing it in support of my peers. I'm doing it in support of the organization. You get more buy-in, more alignment, more understanding of the why behind the policy decisions, so you're going to be able to enact and effect change much more effectively.
[0:28:17] JMC: Okay. That's a brilliant example. I wonder if there are more. Have you ever seen any testimony of data-informed actions like the one you just described, even hypothetically, based on information around, I don't know, code reviews maybe, conflicts in merge requests, the time it takes to resolve them? Are there other typical software engineering metrics that, in aggregate, have informed decisions across the board, or more granular ones applied to different teams, that have generated not opposition, like you just described, but endorsement from the people the new policy applies to?

[0:29:09] CF: Yes, another really great example. The example I gave, I obfuscated it a little bit, but yes, that was a real example that we have had; we've seen this happen time and time again. Another example that I'll obfuscate: say there is an initiative, and you have teams working on this very key, important initiative. One team is based in the US, and another team is based in Europe. You have a third team, also based in the US, that's working on a totally separate initiative. Now, looking at our data, what we could see is cycle time, broken down by phase. Cycle time runs from when a ticket is first opened, through the different phases of development, through PR review, testing, and then deployment. What we see in this kind of scenario, where you have two teams that are reliant on each other and based in different time zones, is a very high time spent waiting for review. Then you have a cost where every iteration has to basically wait 24 hours to get through the cycle, instead of the teams being able to iterate quickly together. When you can show this information, well, people do not like having projects taken away from them. They feel like it's some sort of failure. I was bought in, I was motivated, this was the thing that I was owning; I don't want to have it taken away from me and given to another team. But if you can show them in the data: look at this cost, look at how hard this is, we are making this harder than it needs to be. Then you can shuffle people, and teams, and who owns which initiatives, to better align based on time zones, so that you increase collaboration, especially when something is high-impact, high-risk, and time-sensitive, where any cost of delay impacts the business.
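The cycle-time breakdown described here can be sketched as phase durations between ticket and pull-request events. The event names and timestamps below are hypothetical, not any specific tool's fields; the point is how a cross-time-zone review gap shows up as "waiting for review."

```python
from datetime import datetime

# Hypothetical timestamps for one change; field names are illustrative only.
events = {
    "ticket_opened": datetime(2023, 9, 4, 10, 0),
    "pr_opened":     datetime(2023, 9, 5, 16, 0),
    "first_review":  datetime(2023, 9, 7, 11, 0),   # reviewer in another time zone: ~2 working days of waiting
    "pr_merged":     datetime(2023, 9, 8, 10, 0),
    "deployed":      datetime(2023, 9, 8, 15, 0),
}

phases = [
    ("coding",             "ticket_opened", "pr_opened"),
    ("waiting for review", "pr_opened",     "first_review"),
    ("review to merge",    "first_review",  "pr_merged"),
    ("merge to deploy",    "pr_merged",     "deployed"),
]

# Print each phase duration; the review wait dominates the total cycle time.
for name, start, end in phases:
    print(f"{name:>20}: {events[end] - events[start]}")

print(f"{'total cycle time':>20}: {events['deployed'] - events['ticket_opened']}")
```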
[0:31:09] JMC: Yes, that's a brilliant example. Did we miss any specific insights from the study? And by the way, where can anyone go and find that study?

[0:31:26] CF: Yes, if you go to our website, uplevelteam.com, you can find access to it. We have it in our blog and resources area.

[0:31:35] JMC: Did we miss any of the main key insights from the study? Did we miss anything that was specifically relevant to this conversation?

[0:31:48] CF: I can really quickly recap. My key takeaway is that CTOs need to be better connected to what developers are doing, and that developers want to be measured. They just want those metrics and insights to be really meaningful, and for leaders to be making decisions on complete information. Because when you don't have the data, you're relying on gut feelings. That means you have to use really manual methods to create an understanding. Some of the manual methods we saw engineering leaders relying on are live meetings and conversations, which again take away from developers' deep work time, or sending messages, emails, or surveys to their teams, which again is incomplete and anecdotal. So looking at the real work happening, in a way that's supportive of organizations, is key.

[0:32:43] JMC: My prediction is that if a product like Uplevel becomes very successful, pervasive in a good way, then in the next piece of research that you do, next year, or in three years' time, whatever, support for async will increase beyond that roughly third of the population that you said supported async work. Because, and this is me thinking of myself as a good forecaster, which is probably not true, async doesn't work in a scenario like the one Uplevel is trying to solve, the scenario you were describing a minute ago. If a company is aligned with how software engineers work, and with how to improve those workflows according to data, the leadership of that company will get endorsement, approval, and support from the individual contributors for changes to workflows and many other fundamentals of project management in any given software engineering project. I think they would feel more comfortable once those best practices are established based on real-world data, much more comfortable than they expect, and the share of people supporting async workflows will increase. But anyway, regardless of that prediction, I really wish that a product like Uplevel, and the mission it is trying to fulfill, becomes a reality. Because, yes, the attrition numbers are growing. Every single company in the world is a software company, and we had better get good at being that, and be humane, and treat ourselves with respect, based on data rather than anything else, to be honest.

[0:34:59] CF: Yes. I love that hypothesis. I really do hope and believe that it could come true. I agree with your thesis there.

[0:35:06] JMC: Well, thank you so much. With that support for my forecast, I think we can finish this conversation today. Thanks for being on the show.

[0:35:16] CF: Yes. Thanks for having me. Thanks for a great discussion today.

[0:35:19] JMC: Bye-bye.

[END]