EPISODE 1842

[INTRO]

[0:00:00] ANNOUNCER: Welcome to the pilot episode of SED News, a new podcast series from Software Engineering Daily. Join hosts Gregor Vand and Sean Falconer as they break down the week's most important stories in software engineering, machine learning, and developer culture. In this episode, Gregor and Sean discuss the CoreWeave IPO and the company's recent acquisition of Weights & Biases, dig into Anthropic's Model Context Protocol, surface highlights from Hacker News, and reflect on Microsoft turning 50. We'd love to hear what you think of the format. Reach out on Bluesky at @softwaredaily or on X at @software_daily, at @gregorvan, or at @seanfalconer.

[EPISODE]

[0:00:54] GV: Hi, and welcome to SED Weekly News. This is a new format that we're trying out at SE Daily, where we're going to be digesting some of the week's events from software. It's myself, and I've got Sean Falconer, who I'm sure you all know. Say hello, Sean.

[0:01:09] SF: Hey there. Hey, everyone. Hey, Gregor.

[0:01:11] GV: So, hopefully, as listeners, you've probably heard both myself and Sean normally interviewing guests. Today, we're going to be really just talking about some of the things that we've been seeing in the last couple of weeks in the mainstream news around software, and also things like Hacker News, kind of trying to bring to the surface things that we see day to day but that don't quite make it into the regular episodes with our guests on Software Engineering Daily. So yes, just to roll into this, how's your week been, Sean?

[0:01:38] SF: It's been good. I was traveling ahead of this week. Also, one thing I would like to point out, Gregor, is this is the first time that I'm hearing your voice at normal cadence. Usually I'm listening to this -

[0:01:48] GV: That is a good point. Yes, yes.

[0:01:49] SF: So, it's a little off-putting, but okay.

[0:01:52] GV: I don't speak as quickly as the recordings make out. So yes, and likewise for you, Sean.

[0:01:59] SF: Yes. How about you? What's going on in your world?

[0:02:01] GV: Yes, I'm normally in Singapore, as I often mention in the episodes. However, this week I am in the Highlands of Scotland, as I'm from Scotland originally. So, it's quite nice to be recording this from up here, where I come to do some thinking and, clearly, record new formats of Software Engineering Daily as well. But it's been good. It's been nice to do some dev up here and just keep on top of what's going on in tech. So, I guess with that, maybe we should roll into what are more like the mainstream headlines that we're looking to cover. One of the big ones that came up this week was CoreWeave doing an IPO. Did you catch that, Sean?

[0:02:37] SF: Yes, I mean, I feel like CoreWeave's been in the news a number of times recently. They had the public acquisition of Weights & Biases, and I've had one of the Weights & Biases founders on SED in the past. So, they're a company I've known about for a long time. I thought that was an interesting acquisition. Now, this IPO, at a time when I feel like a lot fewer companies are actually going public. So, I think that's an interesting move from their spot.

[0:03:00] GV: This was an interesting one. The financial news kind of picked this one up, but I think it's definitely interesting from the software side as well. At the end of the day, CoreWeave, my understanding is they had a lot of GPUs left over from their crypto days.
So, these are one of these crypto-to-AI pivot companies, I believe.

[0:03:20] SF: Yes. There's a couple of those out there.

[0:03:22] GV: Exactly. So, they have a lot of GPUs, NVIDIA being the supplier of those. But curiously, NVIDIA is also a customer of CoreWeave. And I think maybe the financial news heard a few things and couldn't quite understand why NVIDIA would be a customer of CoreWeave, with CoreWeave renting out these chips. I think maybe as software developers, we can probably put the pieces together a little bit faster on that one. We did an episode with NVIDIA, and they are delivering inference for their products. But I imagine they're not actually running that themselves all the time. And I guess CoreWeave is now stepping in to be that layer in front that runs the actual chips, rather than NVIDIA having data centers themselves. So, I thought that was kind of interesting. The financial news just picked it up in a big way because there's a lot of debt. And so, to your point, Sean, that not a lot of IPOs are happening, I suspect this is maybe just a necessity for CoreWeave at this point. They have a lot of debt, it sounds like, and maybe no other way to kind of handle that. The other thing that was brought up that was quite interesting was the fact that Microsoft is also quite a big customer of CoreWeave's. And a really good piece of commentary was made about that, which was that, for Microsoft, CoreWeave is kind of like an off-balance-sheet asset. This is very financial, but I think it's, again, interesting in software terms, where you've got a tech company, Microsoft, that could clearly run this themselves, but have decided, well, here's this other company doing the thing that we could do, but that is actually very costly, so why don't we just let them do it? And then if it fails, we can come in there later, but it was never on our balance sheet. I think that's kind of interesting.

[0:05:05] SF: Yes. There's some weird, I guess, business strategy mixed up in all the technology that's going on here. I think that's why the big FAANG tech companies are such, I don't know, beasts to take on in the world of business. They have their hands in so many things in so many different ways. And when they get to a place where they can turn on that engine of growth that they need to, they just have kind of unlimited rocket fuel to throw at those types of things, and I think there's pros and cons to all that. In terms of CoreWeave's IPO, and then also these relationships with NVIDIA and Microsoft, what impact did that have in terms of their public offering?

[0:05:43] GV: I think the story looked good from that perspective. I think a lot of people that were, I guess, looking to invest in this kind of just saw it as a plus. They're like, "Oh, wow, NVIDIA is a customer. Microsoft's a customer. Fantastic." I think it's actually only the financial news that's maybe taken another look at that and gone, "Well, why are they customers, and why are they not doing this themselves?" That's where I think, from the software side, we know why. Renting GPUs, that's a whole business in itself. There's a reason why Microsoft and NVIDIA wouldn't particularly want to do that themselves. Microsoft probably more so, because they're already a cloud provider.
But NVIDIA is starting to get caught up in that side of things as well, the services side, so it makes sense to me why they would be a customer of CoreWeave. But I think it was the debt thing that kind of sunk this more than anything else.

[0:06:32] SF: Is CoreWeave the biggest, or the most well-known, company associated with AI that's gone public in the last couple of years? Are they kind of the first one to make a move like this? Obviously, there's a ton of really hot AI companies, but most of them, from what I can recall anyway, are also operating privately.

[0:06:53] GV: Yes, I think that's a great point. I mean, off the top of my head, I can't think of any major AI company that we would actually say is core AI. Of course, we've seen a few kind of say, "Yes, we're blah, and we're blah plus AI, or AI for such and such." But I think this is one of the first ones where their whole play has been: we are an AI company, we're only AI. And obviously, as we know, the history is slightly different. It was, "We're crypto, and now we're AI." But yes, this is the point. Any other company that we could probably just rattle off, they're also private. Like OpenAI announcing a $40 billion investment not that long ago.

[0:07:29] SF: Yes. I wonder if, and obviously there's a huge variance in terms of what it means to be an AI company. Are you running inference and GPUs, or are you doing something that's more software-based? Are you building models? All these types of things. I wonder, because the IPO didn't go that well, if there'll be any sort of downstream negative effects for the companies that are still operating privately, in terms of just the macroeconomics around this. Does it hurt valuations? Does it hurt their timelines for potentially going public?

[0:07:57] GV: Yes, for sure. I mean, I'm always interested in the financial side of these things, but more so in the technology. I think my take on this is just that the average investor, even the institutional ones, is still pretty clueless, really, on a lot of tech and AI. So yes, I think, unfortunately, CoreWeave not working out, at least if you just look at the headline, "Oh, it went down and such and such," will have a knock-on effect for companies that are doing great things. Outside of the tech world, there's still this question: Is AI a fad? Is AI just the next crypto boom? I think on the tech side, we can safely say it is different this time. At least that's what it feels like. But unfortunately, to the slightly outside world, it might just have a knock-on effect at the moment until we maybe see another company IPO.

[0:08:47] SF: Yes, I guess it might give some of the naysayers fuel for their fire, the ones who complain that tokens are too expensive, inference is too expensive, is there really a business model here, and stuff like that. But to me, that's a little bit like complaining about the cost of transistors in 1972. The cost of these things will go down over time, and then the economics change drastically. In relation to this, one of the things I wanted to chat about that also came out recently on the AI front was Meta's release of Llama 4. As far as I understand, they skipped over Llama 3 and went straight to four. I think there was a time when a model announcement was a huge deal. Now, there's a model announcement every day of the week.
It could be Sunday afternoon on a holiday. It's like Christmas Day and someone's releasing a model. But I think what was interesting about Llama 4 was, one, this was long anticipated. I think Llama 2 was showing its age in comparison to some of these other models that have been released. Essentially, there are three model versions: Scout, Maverick, and Behemoth. And I think some of the key things that differentiate them from their prior model offerings are multimodal support, which is a big deal, and obviously all the premier models are now multimodal in some fashion; a huge context window, so Scout has like 10 million tokens, something crazy like that; and a mixture-of-experts architecture, which has kind of become the industry standard for a lot of the foundation models, but Meta had not used it in any of their open models. It's also inexpensive from a performance perspective. So, some interesting things there. I'm hearing, I think, mixed reception. Obviously, there was a lot of hype around it when it came out, people excited about the low cost, the size of the context windows, and stuff like that. But other people that I trust, who are deeply involved in the world of AI, say that from their perspective, there's not necessarily a lot of innovation there outside of the context window size and the fact that it's multimodal.

[0:10:43] GV: Yes, and to your point about models every week, in the last while we've also seen GPT-4.1 come out, and that has sort of strangely superseded 4.5, which they rolled back or something to that effect. And 4.1 is now the shiny one, and multimodal, I believe, is part of that. I mean, what's your take? We're definitely seeing general models winning at the moment. There's a company, Harvey, which is a legal AI company, and I think it's starting to come to light that their highly tuned model for law is not quite hitting the mark, because I've had some behind-the-scenes comments from lawyers saying they go home and use GPT. So, what's your take on Llama 4? Is that in this direction, where just a bigger, better general model is winning the day?

[0:11:32] SF: Yes. I mean, I think there are two parts to this. So, there are the open-weight models, and there's certain criticism that comes along with those. The advantage of the open-weight models is you can pick one up and run it wherever you want. A lot of people use those to fine-tune, but I think a lot of people feel like they require fine-tuning. At least that has been the criticism of Llama 2. I haven't played around with Llama 4, so I can't necessarily say from my own experience, but there is definitely a substantial performance difference that I've seen from using the Llama 2 base model versus using any of the other premier models like GPT or Claude and stuff like that. There's just clearly a difference in performance. There are reasons why you might want to use that model, but a lot of people end up having to fine-tune it to get the performance they need for their specific application. So, there's that contrast of the open weights versus the other models. And there's also these two world views, a sort of debate that's going on in the industry: is it going to be a world where we bring the data to the model, or do we bring the model to the data?

[0:12:36] GV: Yes, that's a good point.
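For anyone who wants a feel for the mixture-of-experts idea Sean mentions, here is a toy sketch of top-k expert routing in Python/NumPy. The sizes and weights are made up for illustration; this shows the general technique, not Llama 4's actual architecture.

```python
import numpy as np

# Toy mixture-of-experts (MoE) layer: a router picks the top-k experts
# per token, so only a fraction of the total parameters run per token.
# Illustrative only -- not Llama 4's real architecture or sizes.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" here is just a small feed-forward weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router_w = rng.normal(size=(d_model, n_experts))

def moe_forward(token: np.ndarray) -> np.ndarray:
    logits = token @ router_w                 # router score per expert
    top = np.argsort(logits)[-top_k:]         # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over chosen experts
    # Only the selected experts do any work for this token.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.normal(size=d_model))
print(out.shape)  # (16,) -- same output shape, but only 2 of 8 experts ran
```

The point of the design is the last line of the function: the cost per token scales with top_k, not with the total number of experts, which is why a very large MoE model can still be comparatively cheap to serve.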
[0:12:37] SF: It's not 100% clear what the right way is right now, but I think if you're looking for the best performance via a simple API call, you're probably going to get that from your OpenAI models, Claude Sonnet 3.7, those types of models. They kind of just give you the best performance out of the box. I think a lot of people start there, and they build their application there because they know they can get decent performance. And then over time, they might start using one of the open-weight models, where they try to fine-tune it to get the performance that they want, or adjust prompts and stuff like that.

[0:13:07] GV: Got it. Where are you running Llama?

[0:13:10] SF: Well, I mostly use the smaller models, so I can run those locally. And then also, the company that I work for, Confluent, we have an early-access version of one of the smaller models that you can use via native inference. So, if you're running something on Confluent Cloud, you can basically run native inference directly on Confluent Cloud. You can also call out to the bigger models, but one of the models that we support locally is the 7-billion-parameter model for Llama 2.

[0:13:36] GV: Nice. Yes. And just to go back to that point you made about bringing data to the model or the model to the data, I think that's a great point. It's something I've been thinking about more. One example I thought about this week was probably one that other people have thought about as well, which is Stack Overflow. At what point does Stack Overflow, for example, stop having the data to train the models? So, where do the models suddenly get their code and debugging data from, because no one's producing that anymore? I think it's an interesting one.

[0:14:01] SF: I mean, that's the big challenge that a lot of people have pointed to: is there more public information that these models can suck up? And there was some news that came out late last year about how some of the next-generation models haven't lived up to their promise. Essentially, the performance gains are starting to slow down even though they're accessing more information, and they're trying to figure out where to go and get more data. There's a lot of research and testing around how much of this we can synthetically generate, but there are also pros and cons there. If you're using the model to generate synthetic data and training on it, does that degrade the performance over time? At some point, the majority of the public information on the Internet is going to be AI-generated rather than human-generated. What does that do? And I think the next generation of the arms race around AI is who has the best data sources. All these companies that have been around for a long time are sitting on mountains of probably really high-value, human-generated data. So, is there a startup out there somewhere that becomes like a marketplace for data sharing to help train these models, or some other way to tap into those types of sources? The other thing is, if I'm a big company that's been around for a long time and I have that data, and I want to start to leverage AI, that's sort of my value in the equation. That's my proprietary information that I want to hold on to. So, maybe I want to use that as part of some fine-tuning process, or at least use it as part of whatever sort of RAG process I'm doing.

[0:15:33] GV: Yes, completely. So, we're going to move on.
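As a concrete version of the local setup Sean describes, here is a minimal sketch of calling a small Llama model on your own machine. It assumes you have Ollama (ollama.com) running locally with a small Llama model pulled; the model tag and endpoint below are Ollama defaults, not anything Confluent-specific.

```python
import requests

# Minimal local-inference sketch: assumes Ollama is running locally and
# a small Llama model has been pulled first, e.g. `ollama pull llama2:7b`.
# Everything stays on your machine; nothing is sent to a hosted API.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llama(prompt: str, model: str = "llama2:7b") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llama("In one sentence, what is a context window?"))
```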
We've done the main headline topics, and now we're going to move on to a sort of all-encompassing topic that's been on Hacker News a lot in the last couple of weeks, being the main news, I guess, even. But it's certainly a topic that probably none of us have been able to escape from, which is MCP, MCP servers. So, as I mentioned, if you're a developer listening to this, you must have seen something on MCP in the last while on LinkedIn, Hacker News, et cetera.

[0:16:03] SF: Unless you've been under a rock somewhere.

[0:16:05] GV: Exactly. I was just thinking, yes, exactly. Unless you've been under a rock, MCP just seems to - I mean, as soon as it hits LinkedIn, that's the classic thing. If your hairdresser starts talking to you about X, then it's probably - so I wouldn't be surprised if my hairdresser started asking me about MCP servers next week. So, I think the key thing is that a lot of people have different ideas and definitions around what MCP even is, which we'll get into. I think the one that's been bandied around a lot is that this is the USB-C of interacting with AI services. And yet there are lots of other people that say, "That's a rubbish way to describe it." So, let's just start there. Sean, do you want to just walk us through this? What is MCP? What is an MCP server?

[0:16:46] SF: Yes, so MCP, which stands for Model Context Protocol, was announced by Anthropic last November. And Anthropic pushed this forward, but it's not an Anthropic technology; it's essentially a proposal for an open standard. The problem they were trying to solve was that when you're building an AI agent, part of an agent typically is tool use, and a tool could be a function that goes and executes something, some sort of deterministic process, or it could be a function that goes and gathers data from some place. Before we had something like MCP, with every agent framework that you're building on, or if you're doing this from scratch, you're kind of doing that in a new way. You're going and writing code to talk to an API, or you're going and writing code to talk to a database server or whatever it is, which is fine if you're doing three tool integrations and that's the end of it. But as people build more and more agents, you want to get away from having to write a lot of bespoke code that then becomes potential technical debt in your stack, because you have to manage it and so on. It's just like the value of using an API gateway or something like that to interconnect different systems. So, what Anthropic did to try to solve that problem is they proposed this standard, which is based on a client-server model. If I'm a data provider like Slack or Google Drive or something like that, I can create essentially an MCP server that adheres to this protocol and stand that server up somewhere, and now an MCP client can talk to that server and gather data from Slack and stuff like that. The client, which could essentially be an agent, doesn't need to know anything about Slack's protocol or what that is; it essentially is just expressing in natural language what it needs to gather. And the servers can have multiple tools defined, and there's essentially a standardization around how you expose that. So, you have tools, and you have something called resources, which are essentially data that you're providing.
And then you also provide prompts that are predefined as well, that these clients can use to interact with. That was announced in November. I was really excited about it when it was first announced. I read it, and I understood what it meant potentially for the industry. Even in my day job at Confluent, working on AI, that was something I have worked on and championed. We actually released an MCP server open-source project a month or two ago that allows somebody to talk to data in Kafka, and also manage Confluent Cloud, directly through natural language as a tool interface, and you can plug that into whatever MCP client you want. But what's really interesting is, MCP's success, like any standard, depends on adoption, because if there's one MCP server out there, it's not much of a standard. So, it had very steady, significant growth through a lot of open-source projects and stuff like that. But really, I feel like over the last month it exploded, because OpenAI came out with their new agent framework that supports MCP. So even though Anthropic is arguably their biggest competitor, rather than OpenAI coming up with their own version of an MCP standard, they said, "We're going to support this as well." Then AWS's Bedrock also came out with their agent platform, and they announced support for MCP. And then last week at Google Next, Google announced their new agent framework, and they're also supporting MCP. So, you have these really, really big players in the industry also supporting the standard, which I think has really elevated it into the stratosphere of conversation around AI.

[0:20:11] GV: There are platforms like Smithery, for example, where you can go and basically just find MCP servers that do whatever you want them to do. But I think that's maybe been some of the criticism as well, which is, who's running these servers, and who's behind them? Because I think that was maybe the slight mental switch I had to make when looking at something like Smithery, which was, okay, where's the repo? There's no repo. And then, "Oh, okay." Then we've got to think, "Okay, well, hey, here's one that can help you interact with Google Workspace." It's like, okay, but then my API keys are going where, exactly, in this kind of thing? So, is that something you've hit up against as you've been using them?

[0:20:51] SF: I mean, not so much in my use, but I understand that. It's kind of a little bit of the wild west, like any sort of new technology. It's clearly not great if people are just standing up MCP servers. I mean, maybe that's fine if you're doing a hobby project and stuff like that, but obviously you have to be aware of what you're consuming. If it's just some random person standing up a server that's going to go and talk to your Google Workspace, then clearly, where are the API keys staying? What's their login? What's the agreement that you're making about data sharing? People get so scared about the idea of sharing something with an OpenAI model. I'd be a lot more scared about sharing my information with a random MCP server that's running in some data center in Northern Russia or something. So, I think these are things that you have to be thinking about.
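To make the tools, resources, and prompts split Sean describes concrete, here is a minimal sketch of an MCP server using the official MCP Python SDK's FastMCP helper (pip install mcp). The tool, resource, and prompt below are made-up examples, and the SDK surface may have evolved since this episode was recorded.

```python
# Minimal MCP server sketch, assuming the official MCP Python SDK's
# FastMCP helper. It exposes one of each of the three things a server
# can offer a client: a tool, a resource, and a prompt (all made up).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """A 'tool': a function the client's model can call."""
    return len(text.split())

@mcp.resource("notes://today")
def todays_notes() -> str:
    """A 'resource': data the server makes available to the client."""
    return "Standup at 10am. Ship the MCP demo."

@mcp.prompt()
def summarize(text: str) -> str:
    """A 'prompt': a predefined template clients can reuse."""
    return f"Summarize the following in two sentences:\n\n{text}"

if __name__ == "__main__":
    mcp.run()  # speaks the MCP protocol over stdio by default
```

An MCP client such as the Claude desktop app can then be pointed at this process, discover the tool, resource, and prompt over the protocol, and invoke them without any bespoke integration code.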
Just like when you go and grab a random GitHub repo or NPM resource or whatever it is, you have to be thinking about, especially in production scenarios, what are the potential supply chain issues that could happen, and stuff like that?

[0:21:49] GV: Yes, I think that's a good delineation, running MCP locally versus production. I kind of get the impression that at the moment, MCP for the average person, the average developer, is probably more of a local endeavor. The easiest way I understood to get up and running with MCP at all is, for example, to download the actual native Claude app and then have it authorize into any given tool, like Slack or Google Workspace or whichever. But at least that's all running locally, and you probably trust Anthropic at this point with any kind of data like that. Then outside of that, sure, you can run any kind of local MCP setup and not worry too much about where things are going. But in production, I think that's where there's maybe just a ton of question marks at the moment.

[0:22:38] SF: Yes. And I think that over time, obviously, the number of MCP servers and clients that are available as open-source projects on GitHub is growing exponentially, right? But, like with a lot of technologies, I think that's the tip of the spear. Over time, more and more of the companies people already use will offer these themselves. I don't think Snowflake has this yet, but let's say, as a Snowflake user, you could expose your warehouse as an MCP server because you're building some agent, right? So, there'll be these more trusted sources. I'm already a Snowflake user, so I'm going to use their hosted MCP server to talk to my Snowflake, rather than some third-party server that's running somewhere, where I need to give them my credentials to talk to my Snowflake. So, I think there will be more trusted versions of these servers, more and more available from the regular companies that you would expect.

[0:23:28] GV: Yes. I mean, just in terms of what this actually enables, I think you touched on, when we were chatting earlier, agent-to-agent and how that's been mentioned pretty recently. So, maybe just talk us through that.

[0:23:40] SF: Yes. So, agent-to-agent was another announcement from Google last week. It's complementary to MCP. I see there are like a million articles now explaining the differences between these things. So, I think MCP was really focused on the tool. To use the analogy that you alluded to earlier, being this USB-C, it's really about a standardization of how I go and gather data or how I execute some sort of function. What agent-to-agent is trying to do is also propose an open standard, but it's solving a different problem. It's actually a problem I wrote about a month and a half ago in an article where I talked about AI silos. The problem that Google's trying to address is probably a future problem, where, as enterprises adopt more and more agents and more and more agentic software, I have my Salesforce agents, I have my Glean agents, and I have my Cortex agents, and then maybe I'm building my own agents. I'm creating these islands of essentially independent agents that don't talk to each other. So, just like we've created tons and tons of data silos, we're essentially creating a future of these AI, like intelligence silos, where they have no way of communicating with each other.
So, in the article that I wrote, I talk about this problem, and I proposed a solution through data streaming technology. What Google has done is they proposed a standard to solve this problem, and essentially that's this agent-to-agent protocol. So, it's more focused on how we take these crews or meshes or swarms of independent agents and make it so that they can all talk to each other. So, I could go and build an agent using Microsoft AutoGen, and I could build one in LangGraph, and presumably, if my Glean agents also support this, I could have them all have conversations with each other.

[0:25:19] GV: It sounds like a great, I don't want to say dream, in the sense that it's not possible. I just mean it sounds like a great vision and a great future. What kind of real-world applications can you see this helping with?

[0:25:31] SF: I think they're trying to address a problem that maybe doesn't fully exist right now. Because most businesses, at least the ones that I talk to day to day in my job, are just kind of like, I just want an agent. I just want to solve this. I want to do loan underwriting or claims processing, right? We have a bunch of forms that we need to fill out, and we want to automate it. So, they're starting there. This is solving, I think, a future problem. But if you believe the vision that at some point enterprises are going to have thousands of agents running around doing various tasks, or that software becomes some sort of agentic workflow, then this is going to be a real problem, probably in the next few years. I think, fundamentally, and I talk a little bit about this in my article, it's kind of like what HTTP was for the web, where you have a standard protocol so that suddenly websites can talk to each other. That's what they're trying to create. It's a big idea. It's a big vision. But if it's successful, they're creating essentially the HTTP of inter-agent communication.

[0:26:28] GV: That's a great way to frame it. We probably kind of forget that before HTTP, it was such a difficult concept to think about how computers would talk to each other over wires, and obviously dial-up connections at that point in time. But that's all kind of roughly been solved. But yes, here we are in the AI era, and everyone's kind of doing everything in their own way. That's kind of nice in a way. I've always thought that, even two years ago with ChatGPT, it kind of reset the baseline a bit for all developers. Everyone in tech just had to come back to the starting point again for a lot of things, which I think is really nice, because everyone can kind of level up that way. But at the same time, it does mean that we're missing quite a lot of the infrastructure that we're maybe used to having, such as a protocol that enables us to link things that, I think we can agree, are probably going to need to be linked in the future.

[0:27:16] SF: Yes, I wouldn't be surprised. So, there are a lot of technology partners that were part of the agent-to-agent announcement, including Confluent, the company I work for. But I wouldn't be shocked if there's more competition for a standard around this than perhaps we see with MCP. I'm sure other big players in the market might come out with their own version of this.
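For a rough feel of how agent-to-agent discovery is meant to work, here is a hedged sketch based on Google's announcement: an agent advertises itself with a JSON "agent card" at a well-known URL, which other agents fetch before sending it tasks. The URL convention and the fields shown below paraphrase the announced proposal and are not the normative schema.

```python
import requests

# Discovery sketch for Google's agent-to-agent (A2A) proposal: before
# one agent talks to another, it fetches the other agent's "agent card",
# a JSON description of who the agent is and what skills it offers.
# The well-known path and the example fields are a loose paraphrase of
# the announcement, not the normative spec.

def discover_agent(base_url: str) -> dict:
    card = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
    card.raise_for_status()
    return card.json()

# A hypothetical underwriting agent's card might look roughly like:
# {
#   "name": "loan-underwriter",
#   "description": "Scores loan applications",
#   "skills": [{"id": "underwrite", "description": "Assess an application"}],
#   "url": "https://agents.example.com/underwriter"
# }
# Once discovered, a client agent sends it tasks over plain HTTP, much
# the way browsers and servers standardized on HTTP for the web.
```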
So, you might end up in a situation where, if you want to build something that's kind of like the Switzerland of this, you end up supporting maybe multiple protocols, depending on how people want to connect these different things.

[0:27:44] GV: Well, that's been, I think, a fantastic whistle-stop tour of MCP. As we'll mention, there is an episode coming up on SE Daily with Anthropic around MCP, so look out for that. But whether you've been under a rock and this is the first time you're hearing about this, or you have been reading about it but are still maybe just a little bit confused about exactly where it sits and where we're at with any of this, hopefully that has shed a bit of light on it. So, we're going to move on to Hacker News. I think this is a nice place where we get to dive into a couple of things that have caught our eye on Hacker News, things that don't make the mainstream news. And it's always difficult to be on top of everything that happens on Hacker News. Especially these days, I feel like there are so many things getting submitted, and we're seeing upvote counts going through the roof on certain things that we never saw before. Yes, so I've got a couple of picks. This might sound like mainstream news, but actually it's the kind of thing that doesn't often make it, which was that Google announced Sec-Gemini v1, which, as the name might suggest, Sec as in security, is a model all around security. Why is this interesting? Well, if you want to query an LLM and ask it about CVEs and what they mean for you, a lot of models will give kind of good generic answers around this, but we haven't really seen any of the big players actually say, "Hey, we've got a model that is all about security." And especially Google. They bought a company called Mandiant a couple of years ago, a big security company that goes and deals with cyber-attacks, effectively, amongst other things. So, there's a lot of data, obviously, with Mandiant, of what actually causes problems and what the fallout is. In theory, this model has captured quite a lot of that data. And that's been a bit of a holy grail, actually: to have a company that actually deals with cyber-attacks be able to expose that data to the wider world in a more structured way. This seems like a pretty interesting way that, in theory, it's been done. So, that kind of caught my eye.

[0:29:42] SF: And also, Google recently had the acquisition of Wiz as well.

[0:29:45] GV: Right. Well, yes, super recent. Second time lucky, yes.

[0:29:48] SF: Yes, exactly. So, I wonder if this is part of a broader move to embed AI everywhere in enterprise defense. Is that where Google sees potentially a big business or a lot of value? There are certainly a lot of companies that are working on things not exactly, I guess, perfectly related to security, but around incident management, or SRE automation, all these types of things. I'm curious, from your perspective, do you think that people in certain industries are already kind of resistant to the idea of using AI? Do you think security inherently has more resistance, just because if things go awry, it's really bad?

[0:30:26] GV: Yes. I've worked in security for a little while, but I think, crucially, I didn't start my career in security.
I've talked to and worked with a lot of security people, and I think it's fairly agreed that you will still find a lot of slightly protective people in security, where it's a craft. And with any craft being told AI can come and do its thing, obviously, we probably think of this more in the creative industries, anything artistic, or film, or potentially music. But actually, cybersecurity practitioners really think of their work as a bit of a craft, too. And being told that AI can come along and do that, there is a lot of resistance, is kind of what I've seen. But if we look at just the volume of cyber-attacks and the means and ways these can proliferate, there is already a lack of human resourcing around this. And the only way I think this can be even vaguely covered off is if AI is allowed to be involved. I think it's one of these things where we're always going to have to accept the classic, "Oh, that wasn't correct." Yes, you're totally right. That wasn't a correct response. But at the same time, I don't really see a future in the security world where AI isn't a part of it.

[0:31:35] SF: Also, I don't think attackers are going to wait. I'm sure they're already leveraging a lot of these things to make their attacks more sophisticated and stuff like that. Or, even from an education standpoint, somebody could leverage a lot of these tools to learn how to exploit a system, where maybe before it took a lot more work to build up that skill set or learn how to do that. You can use some of these models to help you do that faster. In some ways, the attackers leveraging these tools to do this at scale and much faster become forcing functions for the security people to also leverage these tools, to get better at their jobs and be able to respond faster.

[0:32:11] GV: Yes, absolutely. It's just a great thing to see from Google. They are saying it's a new and, crucially, experimental cybersecurity model, but it is there. So maybe for anyone, as you're pointing out, Sean, even wanting to just start learning more about cybersecurity, there is a model now backed by one of the big players, Sec-Gemini v1. Maybe moving on, I believe Microsoft's 50th birthday caught your eye, Sean. What caught your eye about it?

[0:32:36] SF: Yes, there are two things: Microsoft turning 50, and then also, they released the original source code for Altair BASIC, which is really interesting. I mean, one of the things with Microsoft that I find really fascinating is that they started out, of course, around this mission of putting a computer on every desk on the planet, and they largely accomplished that. I think they were also the first company to really put value in the software rather than the hardware. They had the famous deal where they licensed DOS to IBM rather than selling it to them. IBM didn't even care, because they thought there was no value in software. So sure, go ahead. And suddenly, Microsoft becomes the most profitable company in the world. For a long time, they were the amazing, sexy company to work for. And then they had that era from, I don't know, around 2000 to the late 2000s, maybe 2008, 2010, something like that, the antitrust era. They had a bunch of flops, like their phone, and the Zune, the famous iPod competitor. People basically thought, "Okay, this is going to be a company that's like the Kodak of the software industry."
And they had a business that was based on selling essentially shrink-wrapped software. And I think what's really amazing about that is that over the last 10 to 15 years, they've had a real shift in both their public perception and where they make their revenue from. They did what a lot of big companies can't do, the successful pivot of their business from, and you literally can't get more on-prem than a CD that gets delivered in the mail to install Windows or Office, to being a huge cloud company, and also now one of the leaders in AI. So, I think that's really, really fascinating.

[0:34:15] GV: Yes, completely. And in far more detail than we will ever go into on SE Daily, there's another podcast, Acquired, who do these long-form histories of companies, and they did Microsoft, and all the points you just mentioned there, Sean, they cover in amazing detail. I guess one of the things I hadn't maybe realized, and I think they do a good job of pulling this out, is that the Steve Ballmer years, and you might have your own opinion of Steve Ballmer from watching videos and so on and so forth, that was actually when Bill Gates wasn't having a great time in the world. And Steve Ballmer actually steered the company in a pretty good way during those years. And maybe we wouldn't have hit this 50-year mark if that hadn't happened. So, these are the kind of little bits of history that we kind of forget about.

[0:34:57] SF: Just like Microsoft's gone through this rebranding, this change in public perception, Bill Gates has also gone through a massive change. He's everyone's favorite grandpa now, dedicating his life to making the world a better place, which is great. But in 1983, you probably couldn't find a more determined person on the planet than Bill Gates. If you were not on Microsoft's side, he was probably considered a tyrant. There are all kinds of crazy stories about how vicious he could be with his employees and, of course, how driven he was. He was very famous for saying, "This is the dumbest idea that I've ever heard." That's not his public perception now, which is really interesting as well.

[0:35:38] GV: Yes, so kudos to, I believe it's Hacker News username EvgenyZH. Thanks for posting the link to the original MS source code. I like this a lot because, first of all, the website that Bill Gates and his team have obviously put together for this is just super nice.

[0:35:55] SF: It's awesome.

[0:35:55] GV: Yes. This is obviously audio only, so just go and Google that and find the website. It's just so nice. I mean, you would expect to see that on some kind of super nice media blog or something, the way they do long-form content. Then, yes, I think it's kind of fun, because you literally can download the source code, and it's a PDF.

[0:36:13] SF: Of like a dot matrix printout.

[0:36:16] GV: Right. Exactly. This is the thing, yes. It's basically big photos in that PDF of exactly that, a dot matrix printout, still with the bars on either side, which would be how the paper rolls out of the thing. I mean, it's not like I've read the source code cover to cover, but it is genuinely interesting just to read through a bit of it. And one major thing jumped out at me, which was the amount of comments. I don't think we quite remember that code back then was pretty unreadable. So, I think comments were just something that had to be part of the furniture.
There was also kind of a fun one where someone had actually handwritten over the comments, because one of the comments was wrong, and they clearly thought this was important enough that they'd taken a pencil and scored through it. The example is kind of fun, because the original comment said, "Number should be printed in E notation," and they scored out little bits of it and turned it into, "Should the number be printed in E notation?" Because that's what the function was actually doing, deciding: should this be in E notation? Anyway, take a look at it. It's just kind of fun to see where Microsoft has even come from.

[0:37:26] SF: I love looking at some of this history and the challenges that they had back then. They were writing a full BASIC interpreter without ever touching the machine it would actually run on. They didn't have modern debugging tools. It was all just their brain power and printouts. And they did this over an eight-week period. There's a really great keynote that I've seen a couple of times from one of the guys who created Doom, which goes through their year-long journey. He wrote -

[0:37:56] GV: John Carmack.

[0:37:58] SF: Yes. And he wrote a book about it as well. And it's just, they were all in their 20s. They were probably just slamming Red Bulls, working 80 hours a week. But the timeline in which they built all this stuff, and a lot of it was stuff nobody knew how to do, the types of 3D things they were doing at that time, it was all brand new. And they were just hacking away on this stuff. And they basically announced that they were building this game, and that it would have all these features, before they wrote a single line of code, and then they had to deliver on that timeline, which is really, really, really cool. Even the stuff that Bill Gates did back then was running on four kilobytes of RAM. Like, what, half an emoji?

[0:38:38] GV: That always gets me, that we just forget how lucky, but lazy, we are, basically, when we build software. Most of us virtually don't even think about memory management, I think.

[0:38:47] SF: Unless you're trying to run a foundation model on a phone.

[0:38:50] GV: Maybe there's a bunch of Rust developers shouting at their phones right now while listening to this podcast. But most of us are not thinking about this stuff. And exactly as you say, the pioneers here had to work with just crazy low amounts of memory and resources, quite frankly. So, that was kind of interesting. Then just a final one to round out, and it's just a short one. This is why I love Hacker News. The user INGVE, I don't know how to pronounce that, posted a nice article by a guy called Josh Collinsworth. He's actually a Deno developer, as in he works at Deno as one of their staff developers. But yes, it was called "The Blissful Zen of a Good Side Project." I think these kinds of articles are always nice to read, hearing from other developers hacking away on things. There's one quote that was kind of nice. He said, "I felt something in that freedom. I felt a simple understated joy that I hadn't felt in a long time, a candle in a long-darkened room." I'm sure we can resonate with that, Sean, just side projects. I mean, I know you post quite a few things on LinkedIn, just piecing together AI workflows and wowing everybody.
The fact that it's just a side project, it's kind of nice, right? Just a little thing.

[0:39:59] SF: Yes. I've always been huge into side projects. I feel like it's a way to express your creativity in ways that you can't always do in the workplace, because there are just certain things that you have to do there. I spent seven years as a founder of a company, and one of the things that, as the CTO of that company, I started to feel frustrated about in the last few years was that I felt like I wasn't continuing to learn, because we had our tech stack, and I could see the outside world moving very quickly with all this new technology. I just didn't have time to learn about it, but I was excited to learn about it. And it didn't make sense for us to throw out what we were doing and start from scratch to try to adopt that technology. So, when I took a step back and stayed on as an advisor for that company, I had a period of about four months before I joined Google where all I was doing was side projects. It was so much fun, because I really rediscovered my love for building. There was one week where I built Tetris every day in a different programming language, just because it was a lot of fun and a lot of learning, and I think it kind of reinstilled the joy of what brought me to study computer science and engineering in the first place.

[0:41:09] GV: Yes, for sure. There's nothing quite like it when you know there are no constraints or bounds, or demands, which is probably the better word. There are no demands on you to produce this thing. So long as you can find the time, you can put the time in, and you can just go off in all these different directions. As Josh says in this article, and he actually finishes it up by saying this, the important part is that you explored that little corner of the map and discovered what was there. It's okay if it's nothing; the exploration was a success. I think that just captures it really nicely. The side project he was actually talking about is, I'd say, pretty cool. It's a SvelteKit blog starter repo. I think that's kind of nice, to have something that did actually even make the light of day.

[0:41:52] SF: I don't think you should do side projects with some sort of ulterior motive in mind. But I do think there end up being some positive consequences that might relate to job opportunities and things like that, especially if you're interviewing places. They give you stories to tell. When people ask, what are you up to? What are you doing right now? You have something to talk about, which is really valuable, too. And it also shows that your passion for what you do goes beyond just a paycheck.

[0:42:19] GV: Couldn't agree more. Yes, there's nothing more disappointing than when you're maybe looking to hire someone, and you look around and don't find anything, or they can't produce anything that wasn't somehow tied to their employer. So yes, it speaks volumes in that sense as well. So, just kind of wrapping up, we hope, obviously, as a listener base, you've enjoyed SED Weekly News. This has been obviously a slightly different format. Just looking ahead, in terms of what we know we've got coming up on the regular schedule, obviously related to MCP, we have got Anthropic and MCP with Jordi. That's coming up in a couple of weeks.
I believe there's also an episode with you, Sean, on OpenTofu.

[0:42:59] SF: Yes. For those that aren't familiar with OpenTofu, essentially it was a spin-off project from Terraform, when they changed the licensing around Terraform, I think either pre or post the acquisition of HashiCorp by IBM. Some people essentially spun off the OpenTofu project as a truly open-source version of it that is compatible with Terraform, but they're also addressing some of the problems that are there with Terraform.

[0:43:26] GV: Awesome. Yes. Another one that's coming up is I talked to the chief security officer of Coinbase. A very interesting individual. He worked at Palantir for a long time, which is another super interesting company. But we just get to hear about all the ins and outs of what it takes to secure arguably the largest crypto service. At the beginning of that episode, he points out the fact that Coinbase these days is two major products. I won't go into them right now, but there's probably more to Coinbase than maybe meets the eye, and I think that makes that one a super interesting episode. So, anything else over the week ahead, Sean, that you want to call out before we wrap up?

[0:44:03] SF: I don't think so. Hopefully, people enjoyed this. I certainly had fun, and that's always great. I was teasing earlier about not having heard your voice at normal speed, but it's always great to chat with you.

[0:44:14] GV: Likewise. Yes. So obviously, listeners, do get in touch if you've enjoyed this one. And of course, any feedback on any of the episodes or guests, I always love to hear from our listener base. So, thank you so much for tuning in, and I hope to see you again on another SED Weekly News.

[END]