[0:00:01] SF: Ellen, welcome to the show.

[0:00:02] EB: Thanks for having me, Sean. Glad to be here.

[0:00:03] SF: Yeah, thanks so much for being here. How about we start with some basics? Who are you? What do you do? And how'd you get to where you are today?

[0:00:10] EB: Yeah, absolutely. Thanks for having me. My name is Ellen Brandenberger. I'm currently the Director of Product Innovation at Stack Overflow. I have a long-term background in product management, particularly in education products, but also working on things that help folks solve problems and find solutions to their problems. I've done a lot with skills-based learning, adult resources and knowledge management over my career, and so that's a big theme in my career. But also, helping work with startups and helping work on early-stage products. I got the opportunity to really combine those three themes – product management, knowledge management and early-stage products – with my role at Stack Overflow. At Stack Overflow, I lead a team that thinks about what the unmet problems are for developers and software engineers within our ecosystem. How do we start to identify which problems are most painful? Which have the biggest opportunities for software engineers if we solve them? And then how do we go about discovering and delivering the right solutions to those problems that really add value for our community and for our customers?

[0:01:26] SF: Yeah, I actually started my engineering career in the e-learning space, working on learning management systems for different B2B businesses and stuff like that. That was like 20 years ago.

[0:01:40] EB: I was going to say, I think the industry has both come a long way and also still has a long way to go. It's a fun fact.

[0:01:45] SF: It must be, I imagine, rewarding working at a company like Stack Overflow, where, at least my impression is, most people in the engineering community have a pretty positive reaction to Stack Overflow. I'm sure there's some people out there that are disgruntled for whatever reason. But I think, generally, the impression I get is that most people are like, "Oh, this is such a great resource for me." And I'm sure it's rewarding to work on a product that is generally positively received.

[0:02:17] EB: Yeah, I think that's really fun. I always sort of joke, there's two reactions to Stack Overflow. One is from developers and folks in the community, which is much like what you just described: "Oh, you work for Stack Overflow? I've used you guys so many times. I use you every single day. I love the product." The other reaction is from people who don't work in tech, which is usually a bit, "What is Stack Overflow? What do you do?" Right? It's a bit of an extreme in either direction, but it's really rewarding to describe what that looks like for folks who don't know. And then for folks who do have that positive reaction, it's really fun to have a conversation about how Stack Overflow impacted them and what it meant for their career or their own development.

[0:02:58] SF: Yeah. I mean, I think it's a product that has such an impact on the engineering industry that everybody basically knows what Stack Overflow is. Yet, as soon as you move outside of that, nobody knows – my mom and dad don't know what Stack Overflow is, for example.

[0:03:13] EB: Yeah. Typically, those questions are from my parents. Yeah. Exactly.

[0:03:16] SF: Yeah. Exactly.
I want to talk about some of the stuff that's going on in the world of generative AI and the impact that's having on software engineering, and also get into some of the things that are coming out new at Stack Overflow that relate to this space. One of the things that I saw in the recent Stack Overflow developer survey is that 83% of the respondents said that they've used ChatGPT. And there's of course tools like GitHub Copilot and a ton of other innovations that are happening in the gen AI space. I wanted to start off with: what are your thoughts on how generative AI is going to impact the software engineering industry?

[0:04:00] EB: Yeah. I think the data point you just pointed out is a really important one, right? ChatGPT has come onto the scene really, really quickly and been a disruptive force to the industry overall, but it's also encouraged a lot of us to ask questions that we wouldn't ask otherwise, right? In our developer survey that we ran earlier this year, we also asked developers, "Do you trust and do you plan to use AI?" 70% of the developers in our community do plan to use AI in the near future, but only 42% of folks trust those resources, right? And so, as we look at the broader space, our thesis at Stack Overflow around AI comes down to a couple things. Number one is it lowers the barrier to entry to developing software in some way, shape or form, right? I think Ben Popper, who is our Director of Content, wrote a great blog recently, which is basically his experience hacking together an app as a non-developer using ChatGPT and other gen AI resources out there. Folks like Ben, or others in the broader technology space, might have more of an opportunity to leverage code or leverage development skills in their day-to-day work. That also means that folks who are already in the industry have a new set of tools, right? Mid-level folks, senior folks can start to leverage technologies like generative AI to either accelerate their work, accelerate finding solutions to the problems they face in their work, or accelerate those mundane tasks that, really, they'd rather not focus on – and instead get time back to really think about the hard engineering problems. Broadly, my thesis is I expect it to make the industry more accessible. But I also expect the industry to become bigger and more fragmented as a result, right? What a developer is in the future doesn't necessarily map one-to-one to how we talk about it today. Yeah.

[0:06:14] SF: Yeah. I think that's a really – that's something I also have thought a lot about: how does this change the definition of a developer? You mentioned this idea of lowering the barrier to entry. These emerging AI-based programming tools really help democratize some parts of software development. If you look at platforms like Snowflake, suddenly a business person doesn't need to rely, say, on an analyst. This isn't necessarily directly engineering. But, essentially, you don't need to rely on the analyst to run a SQL query for you to ask some sort of question about the data that you're storing. You can just ask that in plain English. Then, suddenly, the world of development or technical practitioners starts to grow, not necessarily shrink. When we have these new types of technologies, sometimes there's this reaction that, "Oh, no. It's going to take away jobs."
But in reality, it's expanding the number of people who can actually have access to this type of thing, and allowing people who are maybe more technically proficient to do their job more efficiently and focus on larger, gnarlier, more difficult-to-solve problems. And I think one of the things that it does, besides potentially growing the number of developers, is – back to what you were saying – maybe this starts to change our definition or view of what a developer or an engineer or a builder is, historically. Because, suddenly, we're going to have people who can essentially do some of this stuff. Like you mentioned, Ben Popper being able to put together some type of application without necessarily having engineering training, using these different tools.

[0:07:50] EB: Yeah. And I think that also goes back to another theme which comes up, which is there's more use of tools, but that doesn't necessarily, in and of itself, grow the understanding of what those tools are doing or how they're best leveraged, right? That's another sort of – I don't see it as a gap just yet, right? But a lot of our enterprise customers, particularly at Stack Overflow, are really worried about, "Are developers going to start writing code they don't understand?" Right? And that's another area where I think AI can really help, actually, versus hurt – helping us understand what the underlying technology is doing, and aggregating knowledge and aggregating resources that explain those things, right? Going back to Stack Overflow again, that's one of the things that our community has done over and over and over again, millions of times, over the last 10 years or so. And so, it's not just the democratization. It's building that understanding and that context as well.

[0:08:51] SF: Yeah. This fear from the enterprise of people potentially writing code that they don't understand – I mean, I think that's a fear that exists today too. People have been copying and pasting code that they don't necessarily understand, even from books before the internet existed, to do something.

[0:09:07] EB: Or from Stack Overflow. Yeah. Yeah.

[0:09:09] SF: Or from Stack Overflow. I mean, I think maybe this creates a situation where it's done more scalably. And there's other potential downsides. And that's one of the things I wanted to discuss. You mentioned that only 42% of people who responded to the survey trust the AI today, which in some ways feels good to me, that they don't necessarily trust it. Because there are problems that exist today that we need to think through and try to solve. But what are some of the limitations of AI today in terms of its ability to help developers do their job?

[0:09:44] EB: Yeah, absolutely. And I will say this is just my experience, which is as limited as the industry itself, right? We're all really learning about this topic as we go. To your point, 42% of developers trusting AI I think is healthy in a lot of ways, right? There's still lots of things to be figured out. Also, as a bit of an optimist, and with a skeptical group like developers, 42% is actually pretty good. That was higher than I guessed when we ran this survey initially. I think I maybe bet on 30%. It was good to hear that piece. But in terms of limitations, right? I already talked a bit about challenges with context or understanding. There's also the idea that AI in some contexts can be a bit of a black box, right?
Where is this knowledge coming from? What is the underlying data set based on? How can I know that I can trust this thing or this tool that I'm using? How well does it help me given the context that I have as a human being, right? A good example there, going back to the enterprise, might be: "Yes, this is the industry best practice. But it's not the code base of the giant enterprise that I'm working with. In the latter, I really know better." And so, that's another limitation. Other challenges start to look like – and these are all very high-level – how do we socialize and standardize best practices in a world where AI is telling us what it thinks is right? And then, last but not least, I think bias and other pieces around how we recommend content become a huge challenge. Ultimately, a developer is only as skillful as the content they're using to find a solution, right? A lot of those skill sets come in finding that solution. And so, if the AI is recommending untrustworthy content, that has a pretty negative impact.

[0:11:53] SF: Yeah. And we've seen in the industry some recent exploits to that end where, for example, it's not uncommon for systems like ChatGPT to have hallucinations. And if you're asking for code, then it could hallucinate, for example, a package name. And then people can go and exploit that by creating a package called that thing that does something malicious. And that's actually an attack that has happened recently. And that kind of supply chain problem for attacks is already a big issue in the industry. And then, suddenly, if people are doing it by taking advantage of these hallucinations, it becomes something that's even bigger and more prevalent or dangerous.

[0:12:33] EB: Right. Exactly. Yeah. And that's just one of many challenges I think the industry will face over the coming years.

[0:12:39] SF: Yeah. For the last 15 years, I think Stack Overflow has played this critical role in the lives of engineers. Me personally, and basically everyone that I've ever worked with, has relied on it for things like debugging code snippets, understanding weird errors and other things. What role do you see Stack Overflow playing in this new world where someone can essentially ask a chatbot for help as easily as another person?

[0:13:12] EB: Yeah, absolutely. It's a great question. It's actually a question I get almost every single day. It's good to hear it from you as well. But, fundamentally, my answer is I don't think it's an either-or scenario. I think it's really about both, right? There's a role for both things, as we talked about. Our community and the people within our community have a really, really strong presence, particularly around answering millions of questions and building up our knowledge base over the years, right? Humans are really good at building that new knowledge. Generative AI is not good at that, right? If you look at even OpenAI, it has a knowledge cutoff – I think most recently up to early last year, right? It can't even show you that new knowledge. Validating that new knowledge, verifying it, choosing independent resources, understanding that knowledge and its interpretation, and then making connections between pieces of knowledge, right? Humans are really, really good at saying these two things go together. There's an error over here. It might be caused by this, right? Making those connections is really where asking humans, or even asking a community, can be really, really valuable.
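As an editorial aside on the package-hallucination exploit Sean describes above: before installing an unfamiliar package name an LLM suggested, it's worth checking what that name actually points to. Below is a minimal sketch of such a sanity check, assuming the requests library and PyPI's public JSON API; the package name is a placeholder, and existence alone proves nothing, since the attack works precisely because attackers register hallucinated names.

```python
import requests

def pypi_package_info(name: str) -> dict | None:
    """Return PyPI metadata for a package, or None if the name doesn't exist."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        return None  # possibly a hallucinated name; do not pip install it blindly
    resp.raise_for_status()
    return resp.json()

# Vet a package name an LLM suggested before installing it.
suggested = "some-llm-suggested-package"  # placeholder
info = pypi_package_info(suggested)
if info is None:
    print(f"{suggested!r} is not on PyPI; likely hallucinated.")
else:
    # Existence is only a first pass: check maintainer, release history,
    # and downloads before trusting it, since squatters register these names.
    print(f"{suggested!r} exists with {len(info['releases'])} releases;",
          "project URLs:", info["info"]["project_urls"])
```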
And I think chatbots and AI are also valuable for a lot of other things, right? One of the things we think those tools can be really strong at, at Stack Overflow, is helping us discover content. Discover the right answer. Aggregation of content. Driving comparisons between pieces of content. Really thinking about explaining existing answers, or going deeper and providing context for a really specific scenario or situation. And so, I think there's actually a place for those two things to complement each other, right? When you start to talk about some of the industry problems around trust, attribution, accuracy, feedback and recognition, it's really about how do you help technologists find a solution? When those two things come together, you remove the burden on the technologist to have to choose which venue they're going to go to. Instead, you say, "How do you leverage the best of both?"

[0:15:25] SF: Mm-hmm. Yeah. I imagine what you're saying here is how do you bring these worlds together? If I go and ask an LLM for help on writing some piece of code, then I might also want to ask the community, "Is this the best practice? Is this the most optimal solution?" Those types of things. Because the LLM may or may not tell me the answer to that truthfully.

[0:15:46] EB: And it can also come back with hallucinations as well. Yeah.

[0:15:49] SF: Mm-hmm. Yes. Yeah. And I talked to someone from Menten AI that is using generative AI for essentially drug design, for example. They're able to generate new types of molecules. But then there's obviously a ton of human work from trained scientists afterwards to take that generated molecule and actually get it to something that's going to be able to be used by humans to actually address some sort of disease. But it can serve as the inspiration for something that maybe a human wouldn't have thought about. And I think we're in a similar space from a coding perspective, where potentially these types of coding assistants can actually be a breakthrough in terms of innovation, coming up with an idea that maybe we hadn't thought of before. But that's just a starting point. You still need a level of expertise to evaluate that solution. There was a recent breakthrough on matrix multiplication that was due to AI as well. And that was an approach that had been believed to be optimal for 50 years. It wasn't until we applied AI that we came up with something completely new and innovative. But then, it took experts to figure out what the big-O representation of this algorithm actually is, and whether it's a real breakthrough and so forth.

[0:17:04] EB: Yeah. No. I think that's a great point. I think you point out two things. One is the human in the loop, right? Having that human in the loop of software development feels like it's as important as it ever was, even in a world with AI. Just what the human does maybe shifts a little bit in that conversation, and maybe goes back to that efficiency that we talked about earlier in the conversation. It also reminds me of an anecdote a product manager on my team shares pretty regularly. He's a very regular user of ChatGPT, and he now says there's no such thing as writer's block, right? I always have a place to get started: give me the bullet points to get me started thinking about this. It's a brainstorming tool as well, as you start to think about innovation.

[0:17:53] SF: Yeah, it's the blank page problem, essentially, for everyone.
Because there are pockets of time when you just don't feel that creative or that motivated. And sometimes it can get you over that stage. Can you share some insights into some of the new products that Stack Overflow is launching or developing in the generative AI space? I know that your CEO has made some recent announcements. But what can you share about what's actually going on at Stack Overflow in the space?

[0:18:24] EB: Yeah. For those of you who don't know, about two weeks ago now – or maybe a little bit more by the time this comes out – our CEO, Prashanth, shared at a conference for developers in Berlin a new concept for us called Overflow AI. And Overflow AI is essentially an aggregation of the generative AI solutions that we're working towards at Stack Overflow. I didn't include this in my introduction, but my team is largely focused on building, deploying and testing those solutions at Stack Overflow. It's a cross-functional team of folks from software engineering, design, data, data scientists, product managers, developers, analysts, you name it. I think I said developers twice, but this is a developer podcast. So, important. But essentially, those folks are – and will be – building towards a set of alphas that we're releasing in the upcoming two to three weeks, available for the most part to the public, which leverage some of the new generative technologies towards solving core problems for software engineers, for users of our site, and for our platform. I'm happy to walk through a couple of those if that's helpful as well.

[0:19:49] SF: Yeah. I would love to get a little bit deeper, essentially, on what the user experience is, or what problems specifically you're trying to address through these technologies. What can I do, essentially, as a user of Stack Overflow?

[0:20:01] EB: Yeah. Essentially, I'll touch on two or three of them. There's like six or seven. If folks are interested in learning more, here's my plug to go check it out. You can sign up for those alpha products yourself at our Stack Overflow Labs page. And sure, we can link it in the notes. But essentially, we heard from developers in our user research that there were six core things that they struggled with. We took generative AI out of the picture and wanted to understand how we could provide more value to developers and technologists ourselves. The first is problem-solving, right? We hear from many of our developers that they need to address problems in a new landscape or in a new role. Learning is another big one: how do they leverage a new technology? Solutioning, right? How do you actually find a solution to a technical problem? Onboarding, which is: when I'm new to a context, a solution or a company, how do I come up to speed really quickly? Sharing their knowledge. And then, ultimately, innovating as well. Those are the six core problems that we centered on – the ones we thought were both well-suited for Stack Overflow to start to solve for, that were relevant to our audience, and that were also well-suited for potentially building AI products on top of. One of the first things we thought about was search, right? Our existing search on Stack Overflow has its challenges. I think there's a fair amount of memes out there about the quality of our search and how hard it is to find content.
And often, a lot of our user feedback is: I have to search the exact terms in order to get the content that I want. Or: if I search from Google, that's great, but I can't find it in the site's own search bar, right? One of the things we're doing is starting to rethink our search from the ground up, really moving towards semantic search as a first step. In steps two and three, we're really starting to tailor and personalize our search results for the user. That looks like everything from your skills as a developer, to your proficiency in a particular topic, to what questions you've viewed recently – all of which may play into that set of rankers. But it's really about rethinking search: enabling developers to ask natural-language questions in our search box, as well as returning summarized answers across multiple resources that highlight the best practices across a variety of answers related to the search itself. That's one big thing that we've done to start. I'll pause there. But search is the first problem to solve, which is really about: how do we discover the right content to help me solve my problem?

[0:23:08] SF: And is it this piece that you talked about, where you could essentially get a summary that maybe connects multiple answers across multiple questions, where the generative AI comes in – going above and beyond conventional, ML-based information retrieval techniques for search?

[0:23:25] EB: Yeah. That's right. The way you can think about it is our viewpoint is not to post generated content on our site, right? Our community has done a really good job of aggregating and validating information. Instead, we're using an aggregated search summary to say, "Across that aggregated, validated, attributed content, these are the most likely or most relevant pieces of content that we see." Right? One of the pieces of feedback that we get is that the answer that has the most upvotes is often the oldest, right? Weighing what's maybe the most relevant, or the most frequently leveraged recently, might be one way we could think about improving that ranking. But it's also pulling in that set of data points as it's starting to aggregate answers.

[0:24:16] SF: How do you determine something like most leveraged versus, essentially, most upvotes?

[0:24:24] EB: This is one I'm not super comfortable – sorry to cut the line there. I mean, I think we're testing a bunch of things. I'm not quite ready to sort of – our community is going to react to that pretty strongly at first. Sorry to pause. Can we skip that one, if you don't mind?

[0:24:43] SF: Yeah, sure. No problem.

[0:24:45] EB: Yeah. Yeah.

[0:24:49] SF: You mentioned, essentially, the improvements around search. What are some of the other integrations of gen AI, or places where you're making investments from a product standpoint, that users will be able to experience with some of these tools that are available in the lab?

[0:25:05] EB: Yeah. We're also thinking about a couple options on our Stack Overflow for Teams product, right? The first is what we're calling enterprise knowledge ingestion. For those who don't know, Stack Overflow for Teams is sort of a private version of our public Stack Overflow website, used by enterprises and other businesses. We have about 10,000 customers. But enterprise knowledge ingestion is really getting at solving a core problem that organizations have, which is really onboarding, right?
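To make the semantic search idea above concrete: instead of matching exact terms, questions and queries are embedded as vectors and ranked by similarity, so a natural-language query can match an answer that shares no keywords with it. Here is a minimal sketch, assuming the open-source sentence-transformers library; the model choice and the toy corpus are illustrative, not Stack Overflow's implementation.

```python
from sentence_transformers import SentenceTransformer, util

# A toy corpus standing in for question titles.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
corpus = [
    "How do I undo the most recent local commits in Git?",
    "What does the 'yield' keyword do in Python?",
    "How can I safely parse JSON in JavaScript?",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# A natural-language query with no exact keyword overlap with its best match.
query = "revert my last git commit"
query_emb = model.encode(query, convert_to_tensor=True)

# Rank by embedding similarity rather than exact term matching.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```

Personalization of the kind Ellen mentions (skills, proficiency, recent views) would then be additional ranking signals layered on top of a similarity score like this one.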
They'll be really excited about the idea of a community internally built around asking and answering questions about their existing code base. But when they first adopt the product, there's not really any content in there, right? How do we help them identify which resources they already have that might best kick-start the development of that internal knowledge base? We're using gen AI to index and recommend which content, for an individual enterprise, is most relevant to that instance, and then create early Q&A pairs for that community right in the platform. So, we're starting to push on that as a solution. And so far, we've received some really positive feedback from our Teams customers. It'll look at external products like Confluence, Google Drive, GitHub or ServiceNow, with more integrations we're building out over time. But it's also helping organize the community itself – recommending tags and other pieces of content to organize that community for our customers as well.

[0:26:55] SF: How does community play a role with some of these tools? How do you bring community and gen AI – I guess it's not necessarily about gen AI so much as the solution that's essentially made available through the AI system. How do you bring these two worlds together?

[0:27:14] EB: Yeah. I think about that question from two angles, right? One is how do we bring the community into the development of these products? And then, how do we bring the community into the loop when using those products, right? I think the first really looks like our methodology for building these products, which has been identifying and working with our community to identify those unmet needs. To do user research, iterate, take their feedback, right? Standard product development practices. But on the community features side, I think it gets more nuanced, right? I'll give one example. We are building, and we'll launch later this month, a VS Code extension in the IDE, right? That user experience really looks like enabling developers, right from their IDE, to ask questions and get an answer – from generative AI, from Stack Overflow's knowledge base, or from a combination of both – and find a solution to their problems. But then we're actually encouraging folks to start posting their learnings back to Stack Overflow, right? Bringing the content or the conversation that they've had out into the public to then get feedback and validation, or invalidation, from the larger community. Aggregating that knowledge so it can be reused, and also so it can be validated or invalidated, is a core principle of that feature and that product in particular. That's just one example, but it speaks to how we're thinking about putting a human in the loop of each of those processes.

[0:29:06] SF: Yeah. And by bringing this world of the Stack Overflow community directly into the tools of the trade that engineers are using, like their IDE, you're also lowering the barrier to entry to actually using Stack Overflow – not only as a tool for helping you solve a problem, but for actually contributing back to it. Because I don't have to basically stop my workflow and go to Stack Overflow to post something. I can contribute directly from the tool that I'm in all day.

[0:29:32] EB: Yeah. Plus, long-form documentation takes a long time.
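On the enterprise knowledge ingestion idea above – turning documents a company already has into draft Q&A pairs – a toy version of that kind of pipeline might look like the sketch below. It assumes the openai Python SDK; the model name and prompt are placeholders, and this is a guess at the general shape of such a feature, not Stack Overflow's implementation. The drafts would still need human review, the same human-in-the-loop principle discussed throughout the episode.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_qa_pairs(document: str, n: int = 3) -> str:
    """Ask an LLM to propose draft Q&A pairs from an existing internal doc.

    The output is a draft for human reviewers to validate before publishing.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You turn internal documentation into question/answer pairs."},
            {"role": "user",
             "content": f"Draft {n} Q&A pairs from this document:\n\n{document}"},
        ],
    )
    return response.choices[0].message.content

doc = "To deploy the billing service, run `make deploy ENV=prod` from the repo root."
print(draft_qa_pairs(doc))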
And that's sort of central to why Stack Overflow was successful in the first place: because it made it quick to ask and get answers.

[0:29:44] SF: And then what about in terms of the ML toolchain? Are you leveraging existing open-source products or existing tooling? Or is a lot of this stuff built essentially from scratch, or custom-built at Stack Overflow?

[0:30:03] EB: We are leveraging some off-the-shelf products and some custom-built products. I think we're also exploring how we build our own products internally, in terms of advancing our internal knowledge of generative AI on our teams, right? Our teams are still new to the space. It is a healthy mix of the two. But right now – yeah, I need to stop there, just because of my own comfort level. Sorry for that.

[0:30:33] SF: Yeah. So much of the power of AI, and in particular LLMs, is about the quality of the training data. And Stack Overflow probably owns one of the richest, if not the richest, data sets of engineering-related content in the world. How does this massive knowledge base play into the AI plan at the company?

[0:30:57] EB: Yeah. I mean, I think, ultimately, we think of our knowledge base as owned by our community. And if you think about the Creative Commons, ultimately, our community owns our broader data set, right? But within that, our data set is very rich, right? Not only do we have a viewpoint on which queries are most frequently used by developers, but we also have validated information around what acceptable answers look like. Voting, commenting – all of that rich metadata allows us to think about our data not just from a quality perspective, but also from a nuanced perspective, right? There is validation, trust and attribution to our data set that's baked into its core in a way that a lot of other industry players just don't have. Right now, our plan is to use that data to our community's advantage. What that looks like is, as we learn from our community which AI features we build – or, broadly, which features on our platform we build – that are more or less useful, we're going to feed that back into how we recommend both solutions and content to users on our platform, and continue reinforcing our product quality with that data set overall. The exact pieces of that plan are still being worked out. But high-level, we're using it to reinvest in our community.

[0:32:37] SF: Does the high quality of the data that you're putting into the training of these systems help solve some of the challenges that LLMs tend to have around hallucinating information, or some of the challenges that people might have with actually trusting the information that's coming from the AI?

[0:32:54] EB: Yeah. I would say we're in earlier days in answering that question. But the short answer thus far is yes, right? We are seeing some signal that our data is better at predicting what a technologist might need in a given scenario than almost anything else out there, except for maybe ChatGPT at this point. That's sort of the extent of what I can say right now. But I would say folks are internally excited about what that could present as an opportunity for us to better serve our audience, right? Yeah, that quality pairing that I talked about earlier is really central to that, and the validation from our community is also really central to that.
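The validation signals Ellen describes – accepted answers, votes, comments – suggest an obvious first pass when curating a Q&A data set for retrieval or training. The sketch below is a toy illustration of that filtering idea; the field names and threshold are made up for illustration, not Stack Overflow's schema.

```python
# Toy records standing in for answer metadata; fields are illustrative.
answers = [
    {"id": 1, "score": 412, "is_accepted": True,  "body": "Use git reset --soft HEAD~1 ..."},
    {"id": 2, "score": -3,  "is_accepted": False, "body": "Just delete the .git folder ..."},
    {"id": 3, "score": 57,  "is_accepted": False, "body": "git revert keeps history ..."},
]

def community_validated(answer: dict, min_score: int = 5) -> bool:
    """Keep answers the community has vouched for: accepted, or well upvoted."""
    return answer["is_accepted"] or answer["score"] >= min_score

curated = [a for a in answers if community_validated(a)]
print([a["id"] for a in curated])  # -> [1, 3]
```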
So that piece is really at the heart of what we're thinking about.

[0:33:45] SF: And how big is the team that's working on these new products? And what is the makeup of the team? How is it structured?

[0:33:53] EB: Yeah. There are about 50 people across the teams that are working on this full-time. I would say about 100 people in the company have been involved with AI-related initiatives in some way, shape or form. That's one-fifth of the company involved in some way, shape or form, with about 10% dedicated today. And in terms of structure, it's myself and an engineering leader – our CTO, Jody Bailey – as well as partners in data engineering, data science, product design and product research who are leading that effort. And then we have about five teams that are working on developing the products across the organization. That's the dedicated team structure. But one of the cool side effects that we've seen from this is, as those teams have started working on generative AI products, it's sort of expanded across the organization unofficially, right? It's been pretty fun to watch. As my team started shipping early-stage solutions, we started seeing teams across the organization who are not in the dedicated set of teams being like, "Hey, that looks pretty cool. Why don't we try that too?" and starting to adopt AI products as well. I expect that we'll move from a more dedicated team there to expanding and thinking about generative AI and its positive use cases for our community more broadly across the organization going forward.

[0:35:20] SF: Yeah. I think that makes a lot of sense. I think one of the things that we didn't really touch on earlier, when we were talking about the big-picture impact that this might have on the software engineering industry, is that not only does it create situations where people can operate more efficiently using these tools to do their job, or lower the barrier to entry to being able to do some level of coding. It also means that, essentially, by leveraging some of these third-party APIs and tools that exist from OpenAI and other platforms, you can build and bake AI into your product without necessarily being an expert or academic. 10 years ago, that was not necessarily the case. Now you can essentially make a handful of API calls and be doing something where you're generating an image based on a plain-text prompt or something like that. And it feels like magic. And it is in many ways. But it essentially allows anybody to be an AI engineer to some degree. They're not necessarily building the AI themselves, but they're leveraging it in a tool. It's another tool in your tool belt as a developer.

[0:36:29] EB: Yeah. I think that's a great point. I talked to someone in the industry recently, and he was saying the first wave of excitement around generative AI was really around code completion, right? And he mentioned he's seeing the emergence of what he's calling a second wave, which is what you just described: how do I adopt AI in my developer workflow, right? Pretty soon, a lot of us are going to be building AI products. And what does it mean to actually be building those AI products? How do I leverage them? What does that discussion start to look like? I'll give a quick plug for two things we're working on here that actually lead right into that. We did also announce recently a Stack Exchange site for discussions about generative AI.
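Sean's point above about baking AI into a product with a handful of API calls is easy to make concrete: generating an image from a plain-text prompt really does come down to a few lines. The sketch below assumes the openai Python SDK with an API key in the environment; the model name is an illustrative choice.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One API call turns a plain-text prompt into a hosted image URL.
response = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="A rubber duck pair-programming with a robot, watercolor style",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # the URL is typically valid for a limited time
```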
For folks who have questions on using generative AI, you can go to genai.stackexchange.com – I think that's the right URL, but we can correct it in the notes if not. Folks are starting to ask questions there about how they leverage gen AI in their workflows, as well as in community discussions within our collectives, which are sort of subsets of our knowledge base on Stack Overflow, about machine learning, right? We're seeing folks in the developer space increasingly interested in discussing that application almost as much as understanding which AI products are out there to be leveraged.

[0:37:55] SF: Yeah. That's great. Because I think right now, the source where I get a lot of that information is Twitter, which is not necessarily the most reliable community to be getting my tech information from. I would love to go to a place that maybe has a little bit more curation and quality control over the information I'm getting. But yeah. I mean, I've said similar things in the past. It reminds me of big paradigm shifts in development in the past, like the introduction of the internet or the introduction of smartphones, where, essentially, part of what you need to learn as a developer is not necessarily how to build a huge neural network from scratch. But I need to know what it is I can do with these tools so that I can essentially build better applications that leverage some of this technology.

[0:38:50] EB: Yeah. I don't need to build the magic. But I need to know how to leverage the magic to make the magic happen. Yeah.

[0:38:58] SF: What are some of the hard problems that you've had to solve along the way with some of these new product investments? Are there particular challenges that come to mind that you had to work through as a team?

[0:39:10] EB: Yeah, absolutely. I mean, I will say this: this is one of the biggest challenges of my career. I mean, I've only been in product for about 10 years. Maybe that's saying a little bit less. But ultimately, we face a lot of challenges in this work, and I think we'll continue to do so as well. The big one overall – it sounds cliche, but just like everyone on this podcast, gen AI is fairly new to all of us, right? We're adopting new technologies, and we don't know what we don't know. A lot of us are wrestling with how we implement tests or validation against core use cases that we don't even know exist, right? Everything from red teaming, to thinking about trust and safety issues, to thinking about our community's reaction to particular types of gen AI and use cases for gen AI. Starting to think clearly about those implications, even before industry standards existed, I think was a really big challenge for the teams. And I think we've learned a lot, right? One of the things I really appreciate about this space is that folks share what they're learning. And so, looking at other companies who are trying and failing with similar things, and sort of stealing those learnings, has been a big opportunity for us. We talk constantly internally about what worked, what didn't work. Do we need to pivot? Are we able to find the right solution? Or should we abandon this solution given our inability to address some of those risks? Learning about the industry as we build in a new industry has been a big challenge. The other is – and I think this is new to all of us – what is really the value of these products, right? I think the entire industry is very excited to leverage gen AI.
But as a product person, I've always been a healthy skeptic: let's make sure this adds value. But none of us really knows how much value these things will add, right? Because we're playing in a new space, and we're playing in a space that doesn't have a lot of precedent. We talk about AI-summarized answers, and we think that will add a lot of value. We're seeing a lot of positive signal from our users. But we don't know exactly how it'll land, right? Dealing with the scope of unknowns I think has been another challenge. And that's more of a human challenge than a technology one.

[0:41:50] SF: Yeah. And I think that's another byproduct of any big new paradigm shift in the way that people are interacting with technology. Everybody's basically at the forefront of this. And we're trying lots of things. And we'll probably look back at this time in our history five or ten years from now and laugh at a lot of the product ideas that came out of it, just like we laugh at the dot-com boom and bust and all this sort of stuff that in retrospect didn't make any sense. But I'm sure it made sense to somebody at the time at some level, right? People were just kind of throwing things at the wall to try to figure it out. One of the other things that you mentioned there, I think, is a really interesting space and opportunity, actually, for new startups or new types of tools: essentially, what is the developer tooling that needs to be created to take advantage of some of these new ways of doing development? You mentioned testing, for example. We saw that as well when smartphones came in and people were building mobile apps: suddenly, applying things that had historically been done for desktop applications didn't necessarily work that well for the mobile platform. And then a whole new set of tools came along that were developed, which led to an explosion in the industry. And I think we'll probably be in a similar place when it comes to leveraging some of this AI technology as well.

[0:43:12] EB: Right. There are some things that still apply and there are some things that no longer apply. And we're trying to figure out what fits in each category at the moment. Yeah. Definitely.

[0:43:21] SF: Yeah. Absolutely. Well, as we start to wrap up, Ellen, do you have anything else that you'd like to share about some of the things that are going on at Stack Overflow, or just general thoughts related to the space?

[0:43:31] EB: Yeah. In terms of Stack Overflow, we are launching a number of alphas for our generative AI products under the Overflow AI umbrella over the coming weeks. If you go to the Stack Overflow Labs page, which we'll link, you can sign up to either participate in those alphas or get early access. I encourage as many of you as possible to do so, as well as to sign up for the newsletter to understand more about what's coming up next for us. We'll be collecting feedback on those alphas in August, September and October. And then, if we see value – and our community sees value – in those things, we'll bring them to scale over the latter part of the year and into early next year as well. Be on the lookout for those things, both for public Stack Overflow and Stack Overflow for Teams. And then, in terms of the overall space, I think I'm personally really excited, going back to the very beginning of the episode when we talked about access, right? It's really exciting to me. And I'm certainly watching to see if that piece proves out, right?
Does AI actually make things more accessible? More fair? More democratized across the industry? That's my hope. Whether that actually happens, I think, is a different question. But I'll be watching out for signal around that theme in particular.

[0:44:54] SF: Well, Ellen, thanks so much for being here. Like so many people in our industry, I've been using Stack Overflow for years. I'm excited to see some of these new product lines. And I need to get on this alpha list myself, and also join the newsletter, and hopefully start avoiding Twitter and using Stack Overflow for information and knowledge exchange related to the gen AI space.

[0:45:17] EB: Sounds like a great plan, Sean.

[0:45:19] SF: All right. Thank you.

[0:45:20] EB: Thank you for your time. Have a great day.

[END]