EPISODE 1864 [INTRODUCTION] [0:00:00] KB: Modern software teams typically rely on a patchwork of tools to manage planning, development, feature rollout, and post-release analysis. This fragmentation is a known challenge that can create friction and slow down software development iteration. It's especially problematic for cross-functional teams, where differences in roles, expertise, and work culture can further complicate collaboration. There is growing consensus that successful software product development requires continuous collaboration across functions, including design, engineering, and operations. Tobias Dunn-Krahn is the CTO, and Doug Peete is the Chief Product Officer, of Atono, a software development life cycle platform focused on cross-functional teams. They joined the podcast with Kevin Ball to talk about the challenges of modern product development, the importance of low-friction UX, the role of AI in product tooling, and how to unify product design, engineering, and operations in a single workflow. Kevin Ball, or Kball, is the Vice President of Engineering at Mento and an independent coach for engineers and engineering leaders. He co-founded and served as CTO for two companies, founded the San Diego JavaScript Meetup, and organizes the AI in Action discussion group through Latent Space. Check out the show notes to follow Kball on Twitter or LinkedIn, or visit his website, kball.llc. [INTERVIEW] [0:01:37] KB: Hey guys, welcome to the show. [0:01:39] DP: Hello. [0:01:40] TDK: Hi there. [0:01:41] KB: All right, let's get started with a little bit about you two and what led into Atono. Maybe a quick intro, and then how that has evolved into what Atono is becoming. Let's start with you, Tobias. [0:01:54] TDK: Okay. Hi, everyone. My name is Tobias Dunn-Krahn. I am CTO and Co-Founder of Atono. I started out in the development sphere, and over the course of my career in the past 25, 30 years, that role evolved into managerial roles. I did get quite involved in the operational side of things over time, and then actually ended up doing quite a bit in terms of product ownership and product strategy. A pretty good swath of the various roles - not all the roles that are involved in making software, but a good little sampler there. And I've worked at companies that are very small, just a few people, all the way up to large public companies. A lot of that experience actually informs the way that we've built Atono. But before we get too detailed on that, I'll pass it over to Doug. [0:02:46] DP: Hi, my name is Doug, and I'm Chief Product Officer at Atono. Like Tobias, I come from the 1900s; I've been building and designing software for a long time. I'm a geek at heart. I started out in the engineering world, but I've always gravitated more towards customer-facing types of technical roles: consulting, technical pre-sales. But for the last 25 years, product management. Typically, forming a dynamic duo with Tobias: building out strategy, building out roadmaps, building out products, getting all the way down to the feature level together. [0:03:26] KB: Well, I guess it makes a lot of sense, then, that you're now building a tool targeted at that audience and at how you think about product management. And I think this is an area that's ripe for some disruption. There are some long-standing players, and I know there are a few different folks who are trying to disrupt it.
Maybe talk a little bit about Atono and what your angle is into this area, and then we can dive into the details. [0:03:48] TDK: Sounds good. Atono does do a lot of things that you might be familiar with from those long-standing tools and some of the up-and-coming tools for managing the software development life cycle. One way in which Atono is different is that we put an emphasis on, first of all, high-quality user stories. And we think that they're a worthy investment for a number of reasons, which I'll get to in time, beyond just the usual value of having high-quality user stories where product people can talk to the implementers of those stories. We acknowledge that when a story is complete, deployed, shipped to production, that's not the end of the life cycle. Atono is built to support cross-functional teams that not only build that software but run it. We have integrated features such as feature flags, where you can natively control the rollout of a feature right from within the story, right from where you wrote that story. In addition to that, teams want to know how their product investments are performing. We have a feature engagement module which allows you to see what your most used features are, how the uptake of a particular feature or user story that you have implemented is going, whether it's performing to your expectations, whether customers are adopting it, and so on. We have a lot of ambition to go further down that road of not just building software but running software. But hopefully that gives you a basic flavor of the direction we're headed. [0:05:25] KB: Yeah. A few different things that I'd love to dig into there, but let's maybe start with something that I think is interesting, which is that it sounds like you're targeting multiple audiences here, right? You're talking about things that are often more in development land, around feature flags, and integrations, and running software. You're also talking about the metrics and things that PMs care a lot about. One thing I've seen in this space is that most of the tools out there are targeted at a single audience. PMs love Jira. I have yet to encounter an engineer who does. [0:05:58] TDK: I didn't know we were allowed to say that on this podcast. [0:06:00] KB: Oh, I will dive straight in. Right? And I'll flip it around. A lot of engineers really like tools like Linear, and I've found those to be less intuitive sometimes for nontechnical audiences. I'm kind of curious how you think about who the primary personas are that you're targeting this at. [0:06:16] TDK: Sure. Yeah. And without going too far back in history, I do see this as a fairly straightforward evolution. If you look back to the pre-cloud era or the pre-agile era, you had software development teams that were, say, separate from testing teams. Very early on, those were combined so that they had mutual goals. And moving forward into the cloud era, the same thing happened with operations teams, where you had embedded operations engineers who were specialized in actually running and operating the software, but it was a team concern. It was made a team concern. All of the trade-offs that you had to make at every level had to be considered by the team. And in our careers, we've seen that to be very functional. It allows even larger companies that have scaled up to still act like a bunch of little startups, with each team being fully capable of shipping an increment of software and delivering value to customers. We want to support those teams, those cross-functional teams.
And I would expand those roles that I mentioned to include product owners, designers, software engineers, testers, operations engineers, and even customer support engineers, and beyond. Having all of those concerns be amalgamated into a single team makes for a very functional unit that allows you to serve your customers. And we think that the existing tooling out there is not - like you say, it's very splintered. Each of those roles probably has its own tools. We don't necessarily have the ambition to cover all aspects of all of those roles, but centering on user stories and the expression of what is valuable to a customer, and moving out from there, we think is a good strategy. [0:08:11] DP: I think it's interesting, Kevin, that you talked about that spectrum of what appeals to PMs versus what appeals to engineers. The way the product's been built out, I would say that it actually leans towards the engineers, but that's not necessarily who it solely caters to. I mean, we put emphasis on saying cross-functional product teams because we believe it's the whole team. It's just that we feel the engineers haven't had as solid a voice in how products get built. I'm not necessarily the most eloquent at phrasing things, and there's been this idea in the back of my head around that empowerment that's always troubled me. A couple of years ago, I was introduced to some of the work of Marty Cagan, and he talks a lot about the empowerment of product teams. And when we look at the composition of most product teams, you have a PM or a product owner, a designer, but a bunch of engineers. Look at an organization: there are usually multiples, or orders of magnitude, more engineers than there are PMs. And it's a shame that we don't do a better job of empowering these engineers to be part of the design process, part of the creation of what it is. We treat them like mercenaries often. And so when we look at how Atono leans, part of its lean is to try and address that disparity in representation. [0:09:40] KB: Let's dive into that concept, because that's something that I've seen show up in a number of different ways. I've seen it show up with design, right? Where you'll have a whole decision-making process and then pull in designers towards the end. And I sometimes hear this described as sprinkling on design fairy dust, right? Make it look pretty. Whereas a good process integrates design thinking, and user research, and that sort of thing from the front. Similarly, as you're highlighting engineering, sometimes it's like, okay, there's this whole decision process. And then, how do you build it? What goes into a tool that can facilitate that more integrated and cross-functional kind of feedback loop across all these different parts of the process? [0:10:20] TDK: Atono workflows are customizable to some extent. There are workflow step categories that are not customizable, so that we can provide things like accurate cycle time or predicted done dates, and so on. You can support a number of different workflow styles. But out of the box, Atono has a story refinement workflow where stories are born. And that includes a design workflow before a story goes to a team for sizing. In our opinionated fashion, we believe that stories can't be accurately sized without a design. The design is pushed further to the front of the process by default. And we like to see that as a gray area, not like a series of gates that you have to pass. You don't have to have a fully baked design before engineers see anything.
That's not functional, not agile. However, small variations in the design can generate drastically different levels of work. If you want to have a predictable process, you have to have that conversation happening early on. And that's our recommended approach. If you really don't like that, if you want to take the fairy dust approach, you could customize the workflow to do that, but that wouldn't be our recommendation. [0:11:39] DP: Just piling on that. First of all, Kevin, thank you for using the term sprinkle pixie dust. That sounds much better than lipstick on a pig, which is what we typically hear. One thing I would pile on, with what Tobias was talking about, is that we treat stories differently in terms of the amount of structure that they have, all the way down to even acceptance criteria being first-class citizens. If we're going to get geeky: it's not just some text field or markdown field with a bulleted list or a numbered list in it. ACs are actually objects, and that means you can do special things with them. You can move them around. You can attach comments to them. The comments stay attached to them as you move them around. You can have ACs that are suggested and known to not necessarily be canon. People aren't afraid. A lot of times, developers are afraid to touch the story. They think it's some sacred document. But by having ACs be something that can have a state, we can allow people to suggest ACs and then have various approval-process-type concepts. It also unlocks some of the behaviors that Tobias has hinted at around feature engagement and being able to tie the metrics all the way down to specific ACs that are getting exercised in the different environments. There are some pretty unique things, I think, that we unlock across the spectrum by treating acceptance criteria as first-class citizens. And some of that spectrum is on the design side, helping pull in UX, engineers, and anybody else who's part of the cross-functional product team to participate in the story in that earlier mechanism.
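To make "acceptance criteria as first-class citizens" concrete, here is a minimal sketch of what such a model might look like. This is an illustrative assumption in TypeScript - not Atono's actual schema - but it shows why comments can travel with an AC and why a "suggested" state is cheap to support:

```typescript
// Hypothetical sketch: ACs as structured objects rather than lines in a text blob.
type AcStatus = "suggested" | "accepted" | "rejected";

interface AcComment {
  author: string;
  body: string;
}

interface AcceptanceCriterion {
  id: string;            // stable ID: links and references survive reordering
  text: string;
  status: AcStatus;      // "suggested" ACs are visibly not yet canon
  comments: AcComment[]; // comments travel with the AC, not with its position
}

interface Story {
  id: string;
  title: string;
  criteria: AcceptanceCriterion[];
}

// Because an AC is an object, moving it between stories keeps its identity,
// status, and comment thread intact.
function moveCriterion(from: Story, to: Story, acId: string): void {
  const index = from.criteria.findIndex((ac) => ac.id === acId);
  if (index === -1) throw new Error(`No AC ${acId} on story ${from.id}`);
  const [ac] = from.criteria.splice(index, 1);
  to.criteria.push(ac);
}
```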
[0:13:24] KB: That's super interesting, because, to your point, acceptance criteria are one of the things that I think, as an industry, we've sort of evolved to. If you want to actually verify that the thing that happened is what you wanted, you need some sort of structured acceptance criteria. But a lot of times that just means: here's a text blob. Make sure this is true. Make sure this is true. How does the structure you're adding let you create a life cycle, as you mentioned? How does it integrate with the other parts of the process? How do you integrate that into your code for automatic testing? What does that all look like? [0:13:56] TDK: For just a very practical application of that - I don't want to go by without mentioning this, because it's something that's irritated me for years - when you reference an acceptance criterion in, say, chat, Slack or whatever, you're usually doing it by number, like AC1. When your user story changes, that reference is no longer valid. Somebody doesn't know what you're talking about. Just from a very practical perspective, for any AC, you can just grab a link to it and put it in chat. When the recipient clicks on that, it brings it up and flashes which AC they're referring to. And if the story changes, you're still going to get a reference to that AC. It's handy for really practical things like that. Where are we headed with that? Well, first of all, just to echo what Doug was saying, you can attribute product usage to individual ACs. If you had a set of ACs - for example, in our demo application, there are a number of ways that you can share something. You can share via individual social media channels, or by email, or by a link, or something like that. As users of the product that you're building select amongst those options, you can attribute that usage back to those ACs and see which parts of your story are getting the most uptake from customers. It allows you to do some pretty sophisticated product analytics. Where we're headed with that is enhancing what has to be done for each of those ACs in each of the workflow steps. You can imagine, within the design phase, making sure as a checklist that all ACs are covered by a design. Similarly for development, similarly for testing, or similarly for product review. And then when you get to having a deployed product: which of those ACs are actually being used in production? You can evaluate how well you've done in terms of making the most valuable story possible. It unlocks a lot of different possibilities. [0:15:53] KB: Yeah, that's really interesting. And thinking about them - yeah, they're an entity. They have an ID that you can permanently link to and that you can reference back to in your product user analytics. Very interesting. Are they still distinct to a story? Can you share them across stories? Or are there any other sort of interesting dynamics that come from making this a standalone entity? [0:16:14] TDK: They are not shared across stories, but it does unlock the possibility of having automation be able to split stories for you. If you need to take a story and make it smaller, that is something that can be done in an automated fashion, because these are top-level objects. And one of the things that we are playing around with right now, in terms of features that are coming out soon, is having AI suggest splits to you. First of all, does a story meet the INVEST criteria? If it's not quite getting there on the "small" aspect of those criteria, suggest a split. In terms of how well it does on that, that's TBD. It is surprising to me. Honestly, I'm quite an AI skeptic, but some of the testing that we've seen so far is promising. I believe there'll always be a human fixing up stuff. But it's certainly a labor-saving device to have a reasonable suggested split. Maybe you have to tidy up each of the parts. But in any case, that type of automation is facilitated by having acceptance criteria be first-class objects. [0:17:19] DP: I would also pile on that being able to interact with third-party systems, as well as AI, is greatly facilitated by having it be structured, because I'm not just relying on diffs. Say you want help making your story better: suggest some ACs, suggest a decomposition, all the different things that you might want some help with. When it's in that structured format, we can literally make specific recommendations. You can step through, approve, reject, do all those other kinds of things, as opposed to coming back and saying, "Ah, I'm going to diff this text blob and this text blob," and then letting you figure out what you want to keep or not keep. It unlocks a lot of things so far, and we're finding many more behaviors as we go that it helps unlock.
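Doug's point about structured recommendations versus text diffs is worth a sketch. Reusing the hypothetical types from above, a suggestion could arrive as a typed operation that a human steps through and approves or rejects; the `Suggestion` shape and `applySuggestion` are assumptions for illustration, not Atono's API:

```typescript
// Hypothetical sketch: suggestions as typed operations on structured ACs,
// reviewable one by one, instead of a text diff to untangle.
type Suggestion =
  | { kind: "addCriterion"; text: string }
  | { kind: "rewordCriterion"; acId: string; text: string };

function applySuggestion(story: Story, s: Suggestion): void {
  switch (s.kind) {
    case "addCriterion":
      // New ACs arrive in the "suggested" state, visibly not yet canon.
      story.criteria.push({
        id: `ac-${story.criteria.length + 1}`, // naive ID generation, sketch only
        text: s.text,
        status: "suggested",
        comments: [],
      });
      break;
    case "rewordCriterion": {
      const ac = story.criteria.find((c) => c.id === s.acId);
      if (ac) ac.text = s.text;
      break;
    }
  }
}
```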
[0:18:11] KB: Totally. Well, and I'm deep in the AI coding world. And one of the most valuable things you can do to make AI actually useful, rather than letting it push you off in weird directions, is having something that's a formal validation. You can put a validation in the loop, like, "Oh, you went off and did something? Can we check it against unit tests, or typing, or acceptance criteria?" [0:18:32] TDK: Yeah, exactly. And just to be clear, we've also put a lot of effort into the UX of creating stories. From the user's point of view, as they're writing a story, it just looks like they're creating a list. It looks like a free text field. You can just blast through and create tons of ACs as if it were a free text field. It just happens under the covers that these are structured objects you can drag around, and manipulate, and so on. We definitely didn't want all of that structure to come at the expense of a poorer user experience. [0:19:03] DP: If it's not clear from the geeky passion coming through, we're obviously big believers in stories and story quality. And oftentimes, as we're working either within our organization - or sometimes when we help out other organizations with their challenges around software project management - we always come back to INVEST. Are you familiar with the INVEST mnemonic? [0:19:28] KB: It's probably worth going through it quickly for anybody listening. [0:19:33] DP: INVEST is a mnemonic, and it speaks to the attributes of well-written stories. I don't know if going through every one of the bullets is as helpful or not. But for listeners, I highly suggest you go out there and look at the different content that's out there on it. But it greatly influences where we are putting features in, and how we're building the story interactions, and the things that we're trying to make sure that the product simplifies. There are a few different ones in there that I'm a big proponent of. The N in INVEST is negotiable: trying to make sure that you don't over-specify what it is that the story should do and how to do it. Again, let's unlock the design powers of everybody to make sure that we can figure out what the best way is to solve a problem. And that has a whole, I think, interesting bit of philosophy around it. But the other two I run into that I think are quite interesting are the V, which is valuable - how do I deliver incremental value? That's what Tobias's and my role primarily is: how do we get valuable stuff to our customers, and what the definition of that value is - versus the S, which is how do you keep things small? When you work on smaller things, you get better estimates. They're easier to deliver. You can get feedback faster. There's so much value in keeping things small. And so that back and forth, playing that value against the size of something - that's where a lot of the tough decisions get made. When I hear Tobias talking about unlocking things like the decomposition of stories - we're big fans of decomposition, and of emphasizing the small and getting small, valuable increments to our customers. What are the tools that support that, or what are the features in our own product that support it? It ends up being kind of meta, because we're building products to build products. But yes, when we think about how we build Atono, it often comes back to INVEST and the decisions that we make.
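Since Doug points listeners elsewhere for the details, here is the standard expansion of the mnemonic (Bill Wake's formulation), rendered as a commented sketch for reference:

```typescript
// The six INVEST attributes of a well-written user story.
const INVEST = {
  I: "Independent: can be scheduled and delivered without depending on other stories",
  N: "Negotiable: captures the need without over-specifying the implementation",
  V: "Valuable: delivers a discernible increment of value to the customer",
  E: "Estimable: understood well enough that the team can size it",
  S: "Small: small enough to estimate well, ship quickly, and get feedback on",
  T: "Testable: has criteria, such as ACs, that let you verify it is done",
} as const;
```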
[0:21:36] TDK: Yeah, any friction you can remove in that process is good. There's resistance to decomposition pretty much across every role. There are reasons to do bigger chunks of work, in terms of efficiency, or tooling, or just the overhead of splitting out the story and shipping two different increments. You get resistance pretty much across the board. I don't think anyone thinks it's a bad idea in general. It just can be a lot of paperwork. Anything we can do to reduce that friction, we will do. I do think it's interesting on the design side. I think that developer tooling, in terms of continuous deployment and just general CI/CD pipelines, has improved over the years so that there's less resistance for engineers to ship small increments, as unlocked by the cloud era and all of the tooling that's been built up around it. I do think that the design side suffers a little bit. It is quite laborious to make independent designs for each of the small stories that would comprise an epic or even a larger theme. I often feel guilty about asking designers to come up with individual designs for each of the decomposed stories. I don't think that's a problem that we're going to set out to solve necessarily. But just a shout-out to anyone out there building design tools: a way of splitting designs in an automated fashion would be pretty awesome. [0:23:06] DP: We'd like to party with you. Reach out to us. [0:23:08] KB: Yeah, I love that. Well, and I like that as sort of an organizing framework of how you make it easier to decompose and recombine and do all of those different pieces. [0:23:19] DP: Yeah, I would say, as a product management leader, and having worked with hundreds of product managers over the years, it's not just the UX folks that are taking the moonshots. I find it's pretty common for PMs to not shoot for the M in MVP, minimum viable product. There's just this tendency to aim for what is thought to be a critical mass of features, and in the end you have to challenge folks, saying, "Isn't there value in just this or just that?" You constantly have to work with product managers and product owners in order to help them get there. There are the philosophical elements of how you deliver valuable increments and what value is - really pushing each other to make sure that you get there, with the engineering teams doing that as well. And then there's providing the tools to make it so that it's just second nature. It's easy to do. Let's remove the overhead. Let's make it easy to take these acceptance criteria and move them to another story. And that's where Atono comes in to help streamline that process. [0:24:26] TDK: Another thing that is kind of rolling around in my head is that one of the major values of splitting stories is so that they're independently shippable. But once they're shipped, all of your stories can be quite fragmented. We also have ambitions for post-shipping: recombining stories that were split up for the purposes of independent shipping. To give a slightly more concrete example: you might have an epic that was decomposed into 10 stories. Perhaps not all 10 stories got shipped. This is agile, after all. Maybe we decide that three of them are too expensive, or don't provide enough value, or we just run out of time, whatever. Seven of those stories make it across the line and into customers' hands. At that point, for purposes of archaeology, it could be more difficult to find the information that you need if it's split across seven stories. Recombining those into what they were before they were split could have some value. And again, we can approach that in an automated fashion, based on the way that we've modeled user stories. Same thing for collecting usage data, perhaps. If you're looking at usage data across seven different related stories, it's maybe not as convenient as a combined story. I think there's a whole other side of the deployment universe that's sort of greenfield, and we're going to explore that.
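What that post-ship recombination could look like, reusing the hypothetical Story shape from the earlier sketch (the conversation doesn't describe Atono's actual mechanism at this level, so treat this purely as an illustration):

```typescript
// Hypothetical sketch: recombining stories that were split for independent
// shipping. Because ACs are objects with stable IDs, a merge carries their
// statuses, comment threads, and per-AC usage data along, and can fold in
// only the parts that actually shipped.
function mergeStories(epicTitle: string, shippedParts: Story[]): Story {
  return {
    id: `merged-${shippedParts.map((p) => p.id).join("-")}`,
    title: epicTitle,
    criteria: shippedParts.flatMap((p) => p.criteria),
  };
}
```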
[0:25:48] KB: That is interesting. Let's maybe talk, then, about some of the tooling that you provide post-deployment, or after deployment, which is, I think, a differentiating factor here. A lot of these tools don't worry about that. You ship it and you're done. Maybe, at best, you have some tag for tracking or something like that. You've talked about analytics, but you also mentioned something about feature flags and rollout. What are you doing there, and how do you think about that and its connection to stories? [0:26:14] TDK: Yeah. I'll just take you through a typical workflow. When a product owner has authored a story, they have the ability to say, "Add a feature flag to this story." That means: I want to control the deployment of this story. Maybe I want to release it across my stages individually. Maybe I want to roll it out geographically or to just specific customers. Maybe I have an early access program. I can say that this story is going to be flagged. And our flagging system is based on OpenFeature, which is an open-source flagging framework, and that has a couple of advantages. First, for customers, it makes it easy to swap flag providers in and out. If you decide that instead of Atono flags you want to use some competitor, or you want to roll your own, or something like that, it makes it much easier to do. Most feature flag products now adhere to the OpenFeature standard. And the second advantage is that OpenFeature provides wrappers for just about every language or framework under the sun. That's a nice freebie for us. In any case, once a product owner has indicated that a particular story is to be flagged, there's a reminder that gets put into the story itself. When developers see the story, they'll see, "Oh, this one requires a feature flag." That reminder includes a link to some sample code. If they haven't done it before, they can see how to integrate feature flags into the feature.
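The sample code presumably varies by stack; with the OpenFeature Node.js server SDK, a minimal integration might look like the sketch below. The flag key, the in-memory provider, and the wiring are illustrative assumptions, not Atono's sample code:

```typescript
// A minimal OpenFeature integration in Node.js. In a real setup the provider
// would be Atono's (or any other OpenFeature-compatible) backend; swapping it
// requires no changes at the call sites below.
import { InMemoryProvider, OpenFeature } from "@openfeature/server-sdk";

async function main() {
  await OpenFeature.setProviderAndWait(
    new InMemoryProvider({
      "new-share-dialog": {
        disabled: false,
        variants: { on: true, off: false },
        defaultVariant: "off",
      },
    })
  );

  const client = OpenFeature.getClient();

  // Guard the story's functionality behind the flag.
  const enabled = await client.getBooleanValue("new-share-dialog", false);
  if (enabled) {
    // ... new code path shipped behind the flag ...
  } else {
    // ... existing behavior ...
  }
}

main();
```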
And at that point, right from the story, you can start turning the flag on and off. Let's say it's completed and it's in the dev environment. You can start turning it on for individual dev instances, or you can turn it on for the dev environment in general. I think it's fairly standard feature flagging semantics. The main thing is that it's just very convenient to have a reference for exactly what functionality you're turning on and off, right from the place where you're doing it. It also democratizes flags. In some environments, you would have product owners having exclusive access to turning flags on and off. In other environments, it's the developer's responsibility. In some cases, it's an operations engineer's responsibility. We do provide a permissioning model that allows you to lock that down if you want to. But if you want it to be a team responsibility, it can be open to the team that implemented the feature. In any case, all of those roles are able to see the flag status very easily. They can see it per environment or per slice. And when I say slice, I mean a combination of location, customer, and environment - however complicated you want to make your rollouts. You can see exactly where that flag is on or off. And then another topic that I think is relevant is feature flags as technical debt - making it a team responsibility to not continuously accumulate feature flags. We put a lot of effort into showing the status of a feature flag, which is a little bit complicated, because you have the configured status, which is your indication of where you want the flag to be on or off in different slices. But there's also the runtime status, meaning: which of these environments are actually making flag requests? That gives you an idea of what's happening in real life. Between those two data sets - what's happening in real life and what you've configured the flag to do - you can make decisions about whether or not you can remove a feature flag. For example, if the flag has a mixed evaluation status - it's been on and off in different environments in the past - but you can see that all current flag evaluations are coming in only from prod and are always returning an on status, we know that that flag is safe to remove. Surfacing that data, I think, is really important to be able to confidently remove feature flags over time, and not have them constantly expand your test matrix - these flags on, these flags off - which can become extremely cumbersome over time.
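That configured-versus-runtime comparison is easy to picture in code. A rough sketch of the "safe to remove" check Tobias describes, with assumed names and shapes (not Atono's API):

```typescript
// Hypothetical sketch: a flag whose recent evaluations all come from
// production and all return the same value is no longer doing any work.
interface FlagEvaluation {
  environment: string; // e.g., "dev", "staging", "prod"
  value: boolean;
  timestamp: number;
}

function isSafeToRemove(evaluations: FlagEvaluation[], sinceMs: number): boolean {
  const recent = evaluations.filter((e) => e.timestamp >= sinceMs);
  if (recent.length === 0) return false; // no signal: stay conservative
  const prodOnly = recent.every((e) => e.environment === "prod");
  const sameValue = recent.every((e) => e.value === recent[0].value);
  return prodOnly && sameValue;
}
```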
[0:30:39] KB: Yeah. Well, you anticipated a question I was going to have, because I think feature flag systems have a tendency that a lot of systems have: they just accumulate over time, and everything gets slower and slower. So, yeah, I was wondering how you thought about that. But then also, you had mentioned before some sorts of automation around story splitting and generation. Is there some sort of automatic detection that not only surfaces a stale flag but even creates a task, like, "Hey, this is ready to be cleaned up. Let's prioritize this," something along those lines? [0:31:10] TDK: Yeah, no automation for that yet. You can go to a feature flag list page, sort by various criteria, and make decisions about that. But yeah, it's certainly something that's on our radar. [0:31:20] DP: I would mention also that a lot of the ramifications of this stuff are on the cultural side. Unpacking that, kind of in reverse order, starting with the cleanup of feature flags: the added visibility that folks have does encourage the hygiene that we all want and hope for. I think you mentioned it earlier: tools tend towards the more specialized, and you've got feature management tools out there that can do all kinds of crazy, different rollout deployments in an automated fashion. But when it comes down to the real basics around why we use feature flags: we want to reduce risk, and we want to get increments out faster. Really optimizing for that comes back to the democratization that Tobias is talking about, where I need to make sure that it's not in the hands of a PM who maybe is stingy about a feature flag because they know they're going to have to pay for the removal later, or some of the other weird cultural things that we've run into at some of the companies that we've worked at. If you need a feature flag, you should be able to make a feature flag. That feature flag should be visible to all. The status of that feature being visible helps encourage hygiene, but it also helps keep the whole product team feeling like they know where their code went, and what it's doing, and whether it's being exercised. There's another Atono-centric bit to this, which is that there are also announcements that go out through the chat integrations, so folks don't have to go to the story to see it. If you're involved in any of the chats related to the stories, it announces itself as there are changes to the different slices that you're putting in for feature flag evaluation. Just that sense of "I am part of this bigger thing" is brought to the entire product team, as opposed to just the folks who can get to the feature management tool. Something that I've heard quite a bit is that as the feature management tools have gotten more complex and more specialized, they've gotten more expensive. I was talking with a company - they have 400 developers, and they can only afford 20 licenses for the feature management tool. That's crazy. This is something that actually reduces risk. And yet, you can only put it in the hands of 20 out of 400 developers, you know? That's not good. In Atono, it's just built in. It's just part of the product. There's no additional licensing. We believe it's a fundamental thing. We believe it's so fundamental that that's how it needs to be. [0:34:03] TDK: Yeah. I think it just fits into the theme of removing friction. You don't want to have any friction - or you want to have minimal friction - in using something that, like you say, reduces risk and allows for healthy software practices. You don't want to put up a paywall in front of that. [0:34:19] KB: I'd love to dig into something that I feel like I've heard pieces of as a theme across both of you talking, which is the connection between product development philosophy and the tool you're building to enable it. You've talked about how you're obviously inspired by different pieces, and you think things work best in certain ways in different areas. I think the tool is incorporating some of that. But to what extent is there an education piece? How does that connect to the tooling? How do you think about that interplay between "this is how we do things, our culture, our approach, our processes" and "this is the tool that we're using"? [0:34:56] TDK: I can cover a couple of aspects of that. First of all, in terms of the tool itself and having a good feedback mechanism: of course, we've used it ourselves for most of the time that we've been building it. And we have a couple of cross-functional teams that are very vocal about any sort of issues they have with the product. I will get an earful if something is not working quite perfectly, or is not quite designed perfectly, and so on and so forth. That's been good as we've been spooling up. And then, of course, we get feedback from customers directly, and so on. We also try to build in an opinionated fashion, as I mentioned earlier, with regard to workflow. If you use the product just out of the box, you will be guided to our opinion of what the best workflow is. That manifests itself all over the product. In a number of cases where it really is just opinion, we do allow for customization, so that our product can be used within organizations that just have different cultures or different opinions on the way things should be done. But generally speaking, the path of least resistance in the product will guide you to what we think is the optimal way. That being said, of course, you do need to educate people. There's a whole slew of publications that we have, starting with our help documentation, which is integrated very closely into the product.
When I mentioned earlier having a dialog pop up and give you a code snippet if you want it, that links into our documentation on, "Okay, why use feature flags? Back up a little bit." There's also quite a bit of blogging that goes on in terms of the way that we see things philosophically and why we have made certain choices. That's how we try to get the word out along those lines. But this is actually Doug's purview, so I should probably cede the floor. [0:36:47] DP: Sure. [0:36:49] TDK: Anything you want to talk about in terms of, yeah, articulating best practices on the product side? [0:36:53] DP: Yeah. I mean, there's a little bit of a story behind the story in terms of us saying, "Ah, opinionated this. And that's why the product works that way." I think we've had an interesting evolution in terms of the different sorts of compute and delivery models that we've been able to see. We took a company from "Here's the software on a DVD. Go install it on your servers," to "Hey, we'll host it for you." [0:37:18] TDK: Doug, you should never mention that. We're from the DVD era. [0:37:23] DP: I said we're from the 1900s. [0:37:24] KB: I mean, I still remember wiring up servers in order to test hardware. I'm of your vintage, though many of the folks listening may not be. All right. Let's talk about a slightly different, in some ways more cultural, zeitgeisty topic. How do you see product development right now shifting with all of these different AI tools and advances? You mentioned you're building some AI into the product. I know it's definitely changing the way that I think about organizing teams, because some things that used to be expensive are not expensive anymore. Some things are still just as challenging. How do you see this current moment in terms of the way that we build software evolving? [0:38:04] TDK: Right. Big question, obviously. And I'll preface it by first saying that AI is only as good as its training data. And if you are trying to innovate and do something that no one's ever done before, you can't do that with just AI. That's maybe my self-serving position on it. But that being said, there is a huge number of applications that I can see being useful. One thing that we found to be very interesting, and that reinforces our focus on high-quality user stories, is a feature we call Ask Capy. We have a capybara as our product mascot. And you can engage with Capy by asking questions about the product - or products - that you've built. You might initially think, "Well, why do I need AI to tell me about a product that I built? I built it. I know what it does." But of course, that's not the way that software teams work. There are lots of roles. Maybe you are a product owner like me and you can't remember the details of a story that you wrote a year ago. Or maybe you're a test engineer and you need to quickly find out whether a negative test result that you have is actually valid or not. Perhaps you're a support engineer and you need to know how the product works with regard to customers' expectations. Being able to quickly find user story references using technology like RAG, retrieval-augmented generation, is extremely useful for getting very fast answers. It goes beyond search. Search gets you so far, and then you start reading, and digging, and doing your archaeology. But having that sometimes technical, sometimes old content be summarized for you in a concise fashion is just exceedingly useful. That's an example of where I think AI really shines in this space, in the product space.
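For readers unfamiliar with the pattern, here's a minimal sketch of how a RAG-backed, Ask Capy-style feature could be wired up. Everything here is an assumption for illustration - the placeholder functions stand in for whatever embedding model, vector store, and LLM a real system would use:

```typescript
// Hypothetical sketch of the RAG pattern: embed the question, retrieve the
// most relevant user stories, and have an LLM answer from those stories.
interface StoryChunk {
  storyId: string;
  text: string;
}

async function embed(text: string): Promise<number[]> {
  return []; // placeholder: call an embedding model here
}

async function vectorSearch(query: number[], k: number): Promise<StoryChunk[]> {
  return []; // placeholder: nearest-neighbor search over story embeddings
}

async function complete(prompt: string): Promise<string> {
  return ""; // placeholder: call an LLM here
}

async function askAboutProduct(question: string): Promise<string> {
  const queryVector = await embed(question);
  const stories = await vectorSearch(queryVector, 5); // top 5 relevant stories
  const context = stories.map((s) => `[${s.storyId}] ${s.text}`).join("\n");
  // Grounding the answer in retrieved stories is what turns this from
  // "search plus reading" into a concise, cited summary.
  return complete(
    `Answer using only these user stories, citing story IDs:\n\n${context}\n\nQuestion: ${question}`
  );
}
```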
Lots of other examples - I won't go through our whole AI roadmap. But again, as I started out with, AI is only as good as its training data. You can really amplify the value of things like that if you have high-quality user stories at the right level of granularity, and so on. More broadly, one thing that I've found a little strange in the industry right now is the focus on agents and the anthropomorphization of your interaction with AI. I do feel like, in a lot of products that I use, agents have been shoehorned in, and the interactions feel very unnatural to me, actually. Just as an example, we're building a feature where you can have a suggested size for a user story. Based on other stories in your backlog that have been sized: here are three stories that we think are bigger, three stories that are smaller. This one's probably this size. Interacting with an agent in natural language to do that feels actually more cumbersome than just pressing a button that gives me the references. I've used this before. You don't need to be quite so friendly, or cute, or things like that. Now, this is just my opinion. I think that we should just strive to have the best interactions for whatever the underlying technology is. We should just have the most natural and unencumbered interaction possible. I do think the agentic trend has gone quite a lot further than I expected.
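A push-button version of that sizing aid might look like the sketch below: rank already-sized stories by similarity to the draft and surface the nearest ones as reference points. The embedding step and the selection logic are assumptions, not the feature's actual implementation:

```typescript
// Hypothetical sketch: suggest a size from the most similar sized stories.
interface SizedStory {
  title: string;
  points: number;   // size the team already agreed on
  vector: number[]; // embedding of the story text
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Assumes a non-empty backlog of already-sized stories.
function suggestSize(draftVector: number[], backlog: SizedStory[]) {
  const ranked = [...backlog].sort(
    (x, y) => cosine(draftVector, y.vector) - cosine(draftVector, x.vector)
  );
  const nearest = ranked.slice(0, 7);
  const median = [...nearest].sort((a, b) => a.points - b.points)[
    Math.floor(nearest.length / 2)
  ].points;
  return {
    suggested: median, // "this one's probably this size"
    bigger: nearest.filter((s) => s.points > median).slice(0, 3),
    smaller: nearest.filter((s) => s.points < median).slice(0, 3),
  };
}
```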
[0:41:35] DP: It's tough, though, because we have arguably one of the coolest mascots in existence, the capybara. Who doesn't want to talk to a capybara? [0:41:46] KB: That's a good question. I suspect if you over-index on agents, you might find out who doesn't want to, right? [0:41:53] DP: Yeah, good call. Fair point. [0:41:54] KB: We're definitely in a hype cycle, and everybody's trying to jump on the hype. What I'm curious about, though, is - through your customer base, you probably get a lot of window into what people are asking for or trying to do differently. And I'm kind of curious, are there things that feel qualitatively different about what people are looking to do in product development now? Or is this another example of, "Okay, here's a technology, and we're jumping on the hype wave"? Because I've heard opinions on both sides of that: this is changing everything, or this is just another version of what we've seen every five years when we have a new underlying technology. [0:42:31] TDK: I think it's qualitatively different when it comes to prototyping software. I won't name any particular product names, but autogenerating, let's say, mockups and graphics based on prompts - verbal prompts - I mean, you can't really compare that to what it would have looked like 5 years ago, all the way through to prototype software being generated in a similar fashion. I do think that changes the game to some extent. I also understand, directly from developers, some of the frustrations when it comes down to where the rubber hits the road. Autogenerated code sometimes is more trouble than it's worth. Debugging something that was - I mean, no developer likes any other developer's code, but they like AI-generated code even less. Yeah, I understand the skepticism too, but I think it is qualitatively different. But again, returning to my initial statement: if you're innovating and doing something that no one's ever done before, humans will always be involved. What do you think, Doug? [0:43:37] DP: It's definitely on the hype cycle scale. I mean, we just see organizations being pushed to, I don't know, AI something - go buy some AI. I think that there are a lot of organizations just trying to figure out how to do it, how to get something in there, how to check a box. Ultimately, we're more concerned about user journeys. What is it that takes time out of my day? How do I get rid of some of that toil? How do I help streamline stuff? And there are different mechanisms for getting there. Do people really care what the algorithm is behind the scenes? Some of these problems - if I'm trying to suggest a risk rating for a bug, how much of that is an AI problem or not? Could be. Maybe it isn't. Ultimately, somebody needs a technology that suggests the risk rating. Don't worry about what's behind the scenes. I think that right now, because of where we are, you have to AI this, AI that, and pull it all the way to the top. But I think we'll shortly see people being sick of the terms AI this and AI that, and they'll just want those things done for them. The feature will just be suggested risk ratings, not AI-suggested risk ratings. [0:44:54] KB: Yeah. Well, and I think to your point, right, focusing on the user - that hasn't gone away. There's no AI that's going to take away the need to understand what your user needs and wants, and how you can incrementally deliver value towards that. We're getting close to the end of our time here. Before we wrap, are there any things we haven't talked about yet that you think would be important to leave our audience with? [0:45:18] TDK: I don't know how much we covered feature engagement as an important cultural mechanism, as well as a source of metrics for decision-making. But there's a pretty natural progression, in my opinion, from the feature flagging concepts - rolling something out, getting it out there, and that having cultural ramifications - to, "Well, okay, now this thing's live. It's in the production world. Who's using it? How much is it getting used?" There certainly is the value related to the investments that you want to make, and how much you should invest in something. But it's just as much about empowering the product team to understand how that feature has gone out into the world and helped change the world. Teams don't usually get to see that stuff. Who has access to Pendo? Who has access to Amplitude? Who has access to all of those different tools? Maybe it's a PM. Maybe it's somebody on the customer support side. It depends on why you're looking at those consumption-type metrics. But bringing that all the way back to the entire team, in the environment that they're used to working in, is something that truly helps the teams feel like they're part of something bigger, as opposed to a bunch of mercenaries who chuck some code out the door and never see it again. [0:46:46] KB: That actually feels like a great coda. [END]