EPISODE 1931

[EPISODE]

[0:00:12] GV: Hello, and welcome to SED News. As I'm sure some of you know already, this is a different format of Software Engineering Daily, where Sean and I - I've got Sean with me, I should say. Say hi, Sean, as usual.

[0:00:25] SF: Hey. Hey, Gregor. Hey, everyone.

[0:00:27] GV: I often forget to let Sean say hello. Yes, we are here, in a slightly different format where we touch on some of the main headlines in tech. We then go into a bigger topic in the middle, and then we take a fun spin and look at Hacker News highlights that Sean and I have picked up over the last couple of weeks. As we often do though, a bit of a catch-up. I think Sean and I have both been wrapped up in conferences over the last couple of weeks. Yeah.

[0:00:56] SF: It's conference season.

[0:00:57] GV: Exactly. Yeah. Where have you been and how is it going?

[0:01:00] SF: I was in Las Vegas recently for Cloud Next, which was fun. I'd never actually been to Cloud Next, even though I worked in Google Cloud and I've partnered with them a number of times. But for whatever reason, I just never ended up at Cloud Next. But it was good. Then, I was actually supposed to be in India this week. Thankfully, that trip got postponed, because I would have been back-to-back-to-back, because I leave for Boston for IBM Think Sunday night. There's a lot going on. We're thick into the spring event season anyway, with May, June, and so forth. You're in San Francisco, where I live. How are things going? How are you enjoying it so far?

[0:01:35] GV: Yeah, absolutely. No, it's nice to be back. I think the last time I was in SF was the end of 2024, actually. Yeah, been a while in tech terms. But really nice to be back. Always like the weather here. Nice change from Singapore. I was at Stripe Sessions, which has grown enough that it's in Moscone West, which is, I think, probably the second largest venue in SF. Yeah, really awesome production. Just to call out one little detail, I'm not sure if people watch the Stripe podcast. I'm sure many of you do. It's called Cheeky Pint, and it has John Collison sitting in a mock Irish pub, having a pint of Guinness with a guest. They had actually put together a full mockup of that set in the venue, and people could go in and get a free pint. Free little mini-pints of Guinness in a mock Irish pub. I just thought that was an amazing detail.

[0:02:30] SF: Yeah, it was cool.

[0:02:30] GV: Yeah, very cool. Yeah, it's exhausting, as you know, Sean, going to conferences. I was on the Supabase booth for quite a few hours each day. I like getting to talk to an amazing bunch of people, some of them SE Daily listeners as well, which is always fun, getting to meet them in person.

[0:02:45] SF: Yeah. I was going to mention that at Cloud Next, I met one of the fans of the show who works for IBM and came up and let me know that they listen. One of the things I wanted to share with you was that they said, "I don't listen to every episode anymore, but I always make it a point to listen to the SED News episodes that they release."

[0:03:03] GV: Oh, I like that.

[0:03:05] SF: Yeah. So, it's always nice. You never know. We're speaking out to the ether. You don't always get the feedback. It's always nice to hear people are finding these shows fun.

[0:03:13] GV: That's awesome. That's really great to hear. Yeah, if anyone is listening, it was great to meet you. And equally, I met a whole bunch of people that maybe weren't listeners, or I didn't ask.
I didn't make it known to every single person that I was that random voice on SE Daily. Yeah, just had some amazing conversations. Really smart people in the Bay Area. They come from all over the world and they congregate here, and I think that's amazing. Obviously, yeah, recording this from my hotel room in a makeshift setup of the microphone that I use normally, but it's propped up in an ice bucket. Seems to be working. Great. Well, moving to the headlines. We've got a few things to touch on, especially with the last two weeks, there have been quite a few meaty headlines. These are not from, say, the last 24 hours. We wanted to dig up some things from the last couple of weeks, given it is conference season. I don't know about you, Sean, but when it's one of our conferences, I basically can't focus on anything else, so I end up missing literally all the news for a week anyway. This is quite good to catch up.

[0:04:15] SF: Is this not why you need an AI agent processing your news feed at all times?

[0:04:20] GV: Definitely need a post-conference agent to basically just pick up all the pieces of my life that have gone on hold for the last three, four days. Moving into the headlines, the first one - there will be a couple of security ones here. There are quite a few security headlines actually, touching across different areas. The first one is Mythos. This is a large model that was released by - I say released. We'll get to the release part in a second. But released by Anthropic, basically saying that this was a security-focused model that could effectively exploit virtually any system, especially legacy systems. These really deep-seated, very, very critical bugs that have probably been sitting in legacy software for almost decades. Suddenly, these can become almost zero-days, and obviously, these legacy systems do sit in some pretty important places. Yeah, effectively, Mythos, they say, can autonomously discover previously unknown vulnerabilities in every major operating system and browser. It can carry out multi-step cyber-attacks that would take humans days, if not weeks. I think a 27-year-old flaw in OpenBSD was one of the standout examples they were talking about. But yeah, let's talk about the rollout of that. They're saying that it's going to be very controlled, because obviously, the power of this model, especially in the wrong hands, would be pretty terrible. I think they're saying that they're only releasing it to major tech and financial firms. I think they mentioned Amazon, Apple, Microsoft, JPMorgan Chase. They're calling this Project Glasswing. The idea being, they'd patch critical vulnerabilities before the bad actors, now that they're aware of this, can go exploit them. It's a funny one. It's almost chicken and egg. Do you release the model, or do you keep it back? I think there's maybe also a bit of - someone analogized it to luxury fashion, where -

[0:06:11] SF: It's the Birkin bag of AI models, right? Scarcity creates demand.

[0:06:14] GV: Yeah, I like that.

[0:06:16] SF: I don't know, maybe it's part of some grand launch strategy marketing ploy. But even if it's not, if you tell everyone it's too powerful to make available, it's terrifying. Then you only let the biggest companies use it. It ends up driving, I think, a lot of demand, because people want the thing that they can't have a lot of times.

[0:06:36] GV: Yeah, absolutely.
We touched on this in last month's SED News, where politics comes into it for especially the two big players, Anthropic and OpenAI, and it comes in here as well. Anthropic is still feuding, if you like, with the Pentagon over refusing to let its generally available models be used for autonomous weapons and surveillance. Do they then release this to the US government? Because it would seem a bit strange that, okay, this crazy, powerful security model can be released to someone like Amazon, but it cannot be released to the US government? I mean, that seems strange. But that's if you just forget who the president of the United States is for a second. Yeah. I don't know what you thought of that, Sean.

[0:07:16] SF: Yeah, I don't know. I haven't heard as much about the political story around Anthropic the last few weeks. Maybe there are just so many things going on that it's been buried in my news feed. I think, certainly, historically, it would seem a little bit strange to say, hey, we trust these large corporations, but we don't trust the US government with this, or trust our own government with this. I don't know all the ins and outs there. I think Anthropic has been pretty good at having a certain philosophy about controls around models, even delaying launches of models because they haven't met their security bar and so forth, or there are certain guardrail expectations. They've stood behind this philosophy as a company multiple times. Just following their track record, the assumption that I would make here is their intention is still true to that vision. Even though, I think as an outsider, it does look funny, because like we said, there are all these analogies around fashion, and if we tell everybody that it's too dangerous to use, then all it does is make people want it even more. What is the ultimate goal and intention behind this? I mean, the good intention, glass-half-full perspective is like, "Hey, we created this thing before some bad actor might create it. We're going to give it to key players, so that they can patch things ahead of someone being able to exploit it."

[0:08:41] GV: Yeah, absolutely. There were rumblings that a contractor - it was all unnamed - a contractor of one of the companies that had been given access had then passed this on to, effectively, the dark web, and access to this model was now available. Anthropic, I think, absolutely refuted this and said that they had not seen any evidence whatsoever that people who were not supposed to have access were indeed accessing it. Yeah, there are always going to be motives for someone claiming that they do, in fact, have access to this. For example, simply charging someone money and then running away. There are very unverified claims running around as well. We haven't seen the sky fall yet, basically.

[0:09:24] SF: Yeah, we haven't seen the sky fall yet. I mean, that's a good measurement. It would suddenly be all these major websites going down, because someone has access to this and is exploiting them. I do think that the finding of the 27-year-old flaw in OpenBSD is pretty staggering. You just think about the number of security engineers and engineers that have looked at that over the years, only to have an AI model figure it out. It's a humbling moment for humanity. It's like Garry Kasparov losing to IBM's Deep Blue, or the work that DeepMind did to beat the world's Go champions.
Now it's like, okay, well, now we have a model that can find flaws in open source that, essentially, thousands of really, really gifted engineers have looked at and handcrafted and have been blind to.

[0:10:10] GV: That's a really good point. It's also, I guess, that with so much of this legacy software, engineers, no matter how smart they are, are just not going back and combing over all the code that was written 27 years ago. This is just -

[0:10:24] SF: Yeah. You probably assume it works, right?

[0:10:26] GV: You assume it works, and unfortunately, that also means you assume that if something was a problem, especially after 27 years, it would have been discovered, and obviously, that is clearly not the case. Yeah, good callout. We'll see how this develops, how access through this Project Glasswing is rolled out, and who else gets access. I think that'll be interesting once you move away from, say, the hyperscalers - who actually is supposedly allowed to use this model. Yeah, we'll see how that goes. Moving on to the context.ai breach, where one of the main recipients of bad news was Vercel, effectively. Effectively, what happened here was an attack chain that started when an employee at context.ai was infected with Lumma Stealer malware after downloading what they thought to be Roblox game cheats. I mean, this is why I just think that the people on the other side of the coin who wish to do malicious things, they do think of pretty ingenious ways to get people to install things. This could have been for that person's child, for example. Who knows. Anyway, this harvested credentials, including things like Google Workspace, Datadog, and that kind of thing. Then the attacker used compromised auth tokens from context.ai and managed to pivot into a Vercel employee's Google Workspace account, and then into Vercel's internal systems. Yeah, it certainly lit up on our screens at Supabase, because I think a lot of people know that the dashboard all our users use is actually on Vercel. We suddenly jumped into action and had to do a whole ton of credential rotations. If I just looked at all the steps, the whole list of things that had to be done, that was a good chunk of our front-end team's day, just combing through that. That's why it's great to have a team that can just jump in and do that. This must have affected tons and tons and tons of people. It's just -

[0:12:15] SF: Oh, yeah. I think it's the classic human hack of, hey, let's dangle something out that somebody might want, Roblox game cheats, or if you go back to the early 2000s, you had the Anna Kournikova virus. It was supposedly pictures of this tennis player that people were attracted to. Then inevitably, some subset of people are going to download, or click on the thing.

[0:12:38] GV: That's far too obvious now. You definitely couldn't get away with just photos of somebody. Yeah.

[0:12:45] SF: Yeah. I think the thing here you see a lot of times in these reports is, as a consequence, Vercel is now defaulting new environment variables to be classified as sensitive, so they're encrypted at rest automatically. I think in all these circumstances - and you worked in security for a long time - it's like, why wasn't that the default from the start?
This is often the case in these kinds of exploits. We saw this as well with Snowflake, where a contractor got access to an account that they had legitimate access to through Snowflake, but it didn't require two-factor authentication, and a bunch of other stuff. That allowed them to get access to some subset of data. It wasn't necessarily that Snowflake was explicitly vulnerable - this person actually had proper access - but there was no two-factor authentication on by default. Then the reaction from Snowflake was to make two-factor authentication the default, and that was forced on everybody. It's like, well, we could have had that in there in the first place. This comes up over and over again, and simply, I think companies consistently fail to make secure-by-default the actual default. It's a simple concept, but it's missed over and over again. I'm not sure why that is. Perhaps there's just not enough of a financial reason, until the financial reason is a news headline, or it's just something that we tend to miss.

[0:14:05] GV: Yeah. I think, certainly now, yeah, I guess, just from the pure security perspective - security, I've often said, is just flaws in how humans operate, whether it's failing to protect things, or that kind of thing. I wonder at Vercel, for example, was this on the roadmap and it was just going to come in two months? It's that kind of thing: when is the moment that it was too late to implement this? I'm sure this was not a small lift for them to implement. I can imagine it sitting on a roadmap somewhere on their side, and maybe they were just going through all the checks and balances of, okay, what's it actually going to take to do this and make sure that no customers are affected, downtime, etc. Downtime, I think, is usually the thing that gets in the way: okay, how are we going to do this in a zero-downtime manner? Then bang, you get hit with - who could have predicted that somebody three arm's lengths away would download some fake Roblox cheats, and that's what leads through these supposed layers that you thought you had. I guess, the advice there is, if you're very aware of some major thing that should be especially encrypted across your platform, basically, you should just drop everything and move on to that.

[0:15:12] SF: Yeah. I mean, in these roadmap conversations, I think that sometimes it's easy for teams to punt on some of those security features, because they're not necessarily revenue-generating features. That's what it comes down to: oh, we invest time and resources in this thing where we know it's going to drive revenue. It's very unlikely that we have the security exploit. We can deal with this later, and then the can gets kicked down the road over and over again. I don't know the story at Vercel necessarily, but having been part of product organizations and some of that decisioning, and given the story that we consistently see at these companies that do get exploited, it seems to be the case that it's always fairly easy to make the argument that, "Oh, we can deal with this later," until it becomes a thing.
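To make the secure-by-default point concrete, here's a minimal sketch in TypeScript. The `createEnvVar` helper and `encryptAtRest` stand-in are hypothetical, not Vercel's or Snowflake's actual APIs; the point is simply that protection is the default and opting out is an explicit, reviewable act:

```typescript
// Minimal sketch of "secure by default". createEnvVar and encryptAtRest are
// hypothetical stand-ins, not any vendor's real API.
interface EnvVarOptions {
  name: string;
  value: string;
  // The safe value is the default; callers must opt OUT explicitly.
  sensitive?: boolean;
}

// Stand-in for a real KMS call; base64 here just keeps the sketch runnable.
function encryptAtRest(plaintext: string): string {
  return `enc:${Buffer.from(plaintext).toString("base64")}`;
}

function createEnvVar({ name, value, sensitive = true }: EnvVarOptions) {
  return {
    name,
    sensitive,
    // Encrypted at rest unless the caller explicitly opted out.
    storedValue: sensitive ? encryptAtRest(value) : value,
  };
}

// Safe with no extra thought from the caller:
console.log(createEnvVar({ name: "DATADOG_API_KEY", value: "abc123" }));
// Opting out is visible and greppable in code review:
console.log(
  createEnvVar({ name: "PUBLIC_URL", value: "https://example.com", sensitive: false })
);
```

The same shape applies to the Snowflake example: a `requireMfa = true` default would have made the insecure path the one that needs a deliberate, visible decision, rather than the other way around.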
[0:15:56] GV: Yeah. This has happened before - some listeners will know what I'm about to say - but the kicker here is that, again, Delve was the compliance company that had issued certifications for context.ai. Delve positioned itself like Vanta. The only slightly minor detail there is that they were forging a lot of their compliance certificates, unlike Vanta. After we did that episode, I think it was last month, when we looked at Delve, I did look into the whistleblower nature of it. There's a whole website where someone's documented everything, and calls that the founders have gone on defending themselves, saying, no, no - almost in a very dismissive way, like, don't be ridiculous, this is just completely false. Then suddenly, it's just all very clear that this is real, that they are forging everything. It could be that somebody on that whistleblower side, or just somebody, a bit of a vigilante, is thinking, "Well, we're going to show Delve how bad this could get. We're just going to target companies that were using Delve." I'm not saying that that's been confirmed by any means. It just seems interesting that we've now seen two major breaches over the last, I guess, two months, both of which had Delve as the compliance people.

[0:17:11] SF: Yeah. I mean, it's a small sample size. I think I'll hold off on my tinfoil hat and call it a -

[0:17:17] GV: I think that's good to say. Yeah.

[0:17:18] SF: - conspiracy theory for a little bit. If, five months from now, every month they've reported another Delve-related data breach, then I will join your circle of conspiracy theorists.

[0:17:30] GV: Yes. Well, you never know. We'll keep tallies and see how we net out at the end of the year - Gregor's Delve conspiracy count. Moving away from security, this was more of a macro headline that, again, has touched especially the financial news: layoffs, unfortunately, again. The two standout companies were Snap and Meta. What's interesting, I guess, is more the communication around the why. I think commentators are always digging in on the whys, and the whys are usually derived, especially for these public companies, from the public statements that they put out, maybe as part of earnings calls and that kind of thing. Snap is famously unprofitable. Evan Spiegel said that they were hitting this critical moment where they really have to do something to make the company a profitable one, and doubling down on AI and their Specs product. That led to a thousand job cuts, which apparently is roughly 16% of the workforce. They basically had all this pressure from investors to start massively improving their financial performance. They do look a bit strange, quite frankly, next to a lot of their - when I say peers, companies that might have IPO'd around the same time, Meta being one of them, but we'll come back to Meta in a second. Yeah, not being profitable in this era, I guess, for the age of the company. Yeah, I can see why they were getting some heat from investors after that. It's not to say - I mean, I can very publicly say, I very much feel for anyone that's been affected by these layoffs. It's a horrible situation. But, I guess, from a logical standpoint, it might make sense that at least investors were getting a bit unhappy with Snap's performance.

[0:19:08] SF: Yeah. I mean, I think being a public company in the public market right now is a tough spot to be in, in a lot of ways. Then on top of that, from an employee perspective, some of these companies, when they announce layoffs, see stock bumps. The stock goes up after saying, "Hey, we're laying people off and we're focused on efficiency."
That creates, I think, a certain cascading effect where, when maybe the stock is not doing well, the investors are putting pressure on the company to make a change. If we're rewarding companies for downsizing, at least from a stock valuation perspective, then that's the easy decision for a company to make, because it's going to positively reflect in the stock. Then on top of that, I think with social media companies in particular, there's a lot of human capital historically deployed to review and curate content. There's just a lot of process there. I remember going to Hyderabad in India when I was at Google, and there's a huge number of people who work there to review all the YouTube videos and make sure that someone's not putting something up that's awful. There are a lot of people doing that. I think all these social media platforms have some version of that. If they can offload some of that to AI, then you can do a lot of this stuff in a more efficient way, where you're taking out some of the tasks that historically have required a lot of human energy. That is one thing. I do wonder, broadly speaking, about the future of social media. We have social media companies that are laying off humans to build AI that will generate content that used to be made by humans on their platforms. The feed is increasingly AI-generated content. It's served by AI-curated algorithms. The humans whose attention is then sold to AI-optimized ad systems - at some point, who is this for? You have an AI circle of content, to optimization, to serving ads. The human element, maybe, is the people who are passively consuming this stuff. It's all AI-generated content. What are you signing up for as a user?

[0:21:23] GV: Yeah, absolutely. I mean, we should bring Meta into this as well, being effectively the most popular social media company in the world, probably, across a couple of platforms, excluding X, I guess. Snap, just to compare and contrast - well, they said they're going to double down on AI and invest a lot more in this moonshot of the Specs product. We've talked a lot about that idea in past episodes. Meta, however, is saying they're actually doing it and being almost, I would say, more "honest," saying, "We are doing this to offset the investments that we're making in AI." I mean, that's a very polite way of saying, we're losing money over our AI infrastructure bets, all this stuff. We need to do something to shore that up and continue to be profitable, to look profitable. Yeah, so they're cutting 10% of their workforce, which is much larger in comparison - I believe it amounts to about 8,000 employees. Also, not hiring for 6,000 roles that were open. I mean, I can't even fathom that a company has 6,000 open roles. That's just something I can't get my head around. Yeah. I mean, I hate to say I prefer this one, but I do appreciate that they're at least being clear that they're doing it to offset the fact that they're losing money somewhere else, which is, I think, what everybody's been trying to get some of these companies to just admit: are you actually making any money from this capex that you're investing?

[0:22:50] SF: Yeah. I mean, I think we're in a place where the world's changing very quickly. I think a lot of companies feel the existential threat of what the future might look like.
They have to make fairly big bets and investments to survive this digital transformation that's happening, this paradigm shift around AI. You want to be on the winning side of that, which takes investment and a big bet. It's hard to do that as a public company, because you're under such scrutiny, and people are ultimately looking at how much money you're making, the profitability, the bottom line, while you're also trying to make these innovation bets. Some companies have the luxury of a really healthy revenue-generating business, like a Google, for example, where they can spin off innovation arms that are well funded, and it doesn't deteriorate, essentially, their core business. For other companies, especially smaller public companies, that's hard to do. You have your core business, and then you're also trying to change as a business. How do you fund the innovation, while also protecting and growing the core business? That's a difficult balance to strike. If you want to survive the existential threat, something has to give.

[0:24:01] GV: Yeah, exactly. Again, obviously, very sorry to anyone potentially listening who's caught up in this. Unfortunately, it looks like this was nothing really to do with anyone's skills, etc. It was purely a financial thing that, unfortunately, seems to be part of the landscape of the last five years with big tech. Just very briefly, before we move to our main topic - this dropped, I believe, today, actually, which was that Coatue, a huge investor, has got a plan to buy up land for data centers. The question is why. I think people are speculating that this is actually for Anthropic. It's interesting that rather than just invest more money in Anthropic, they've gone straight to the infrastructure itself, or the base of it, and just said, "Well, maybe we'll just buy land," and then give that, or lease that, I guess, to one of their investees. That's an interesting strategy there.

[0:24:53] SF: A lot of companies are trying to pivot their way into being AI companies. Maybe they need to pivot their way into being real estate companies and just own the land that companies need to build their data centers on.

[0:25:04] GV: Yeah. Similarly, I think it was yesterday, it was rumored that Anthropic is going to be doing one more funding round. Possibly at a 9 billion valuation. Sorry, a 900 billion valuation. Not 9, like, 900.

[0:25:18] SF: You need to add the zeros.

[0:25:20] GV: Yeah. No, clearly, it's the end of the week; my brain can't remember 9 versus 900. It's being floated that this is probably the last raise pre-IPO. I mean, I think that was probably said last time as well, but I'm sure this is the last, last, last, final, final, final - the documents-on-the-fundraising final, final, final.

[0:25:40] SF: I mean, there are a lot of companies that are still private that I can remember talking to in interview processes several years ago, where they were like, "Oh, yeah. We're 18 months from IPO," and this was five years ago. They've raised multiple rounds since then. A lot of it depends, I think, on wanting to time the public offering to what's happening in the market. Then also, there are a lot of things that you have to do to get ready to go public, too, which take time.

[0:26:03] GV: Our main topic today, we're really doing just a deep dive.
I say it's a deep dive, but it's a high-level deep dive, if that's possible, on the roughly 700 billion in AI capex we've seen since the AI boom. I guess, here we're just taking a pause and looking holistically: where are things across a lot of the big players? We do this every so often, just to take a pause and touch on a lot of the big names, and what they're doing, and why. We feel this is important because of just the speed at which things are moving. I keep saying it, but a month right now is easily what six months might have been pre-AI. Let's try and actually get a handle on what this scale even is. Hyperscaler capex for 2026 alone has reportedly been 650 to 700 billion across Amazon, Google, Meta, and Microsoft. That was a Morgan Stanley report that made this guess, if you like. In a single week, Google committed up to 40 billion to Anthropic, Amazon committed 5 billion to Anthropic, and there was a probable 100 billion in AWS spend over a decade. NVIDIA crossed the 5 trillion market cap level, which, again, is just hard to fathom at all. If you think about it, Anthropic is now simultaneously backed by Google and by Amazon, and as we've just touched on, it's probably going to be touching a 900 billion valuation. This is just the largest infrastructure investment cycle in the history of technology. It's crazy. Yeah.

[0:27:40] SF: Yeah, absolutely. I mean, the numbers are pretty staggering. I still remember the days when - and people still use this terminology - we referred to unicorn startups valued over a billion dollars. That started to get silly after more and more companies were valued over a billion dollars, but it used to be a big deal to be valued over a billion dollars. Now there are so many companies valued over a billion, it dilutes the idea of a unicorn and it becomes meaningless. We probably need to shift that. Maybe it's a 100 billion dollars, or we're going to get to a trillion dollars, in terms of valuation. There's literally been more money going into AI compute in the last year than the entire cloud build-out over an entire decade. It's a strange world where we have Google and Amazon both investing in Anthropic, but also competing with it. It's almost like they're hedging a bet somewhere. In case we don't win the model wars, we still have skin in the game.

[0:28:38] GV: Yeah. I mean, if we then look at it a little bit more strategically, there is a vertical integration play, if you want to use that slightly - what feels like an archaic term these days - vertical integration. Think of model labs becoming, actually, infrastructure tenants. Anthropic's 100 billion AWS commitment means that Claude training and inference are potentially structurally tied to Amazon's chip roadmap, so that'd be Trainium and Graviton. Meanwhile, OpenAI remains, despite some news of them starting to part ways more and more, still deeply integrated with Azure. Google is both an Anthropic investor and building competing models, as you've just touched on, Sean, like Gemini. That's on its own TPUs, which we did a bit on a couple of months ago. Then meanwhile, Meta's confirmed it will use "hundreds of thousands of AWS Graviton chips." I think this is the thing: bets do have to be made, because it is quite difficult to unwind, or just shift over, the underlying chip infra that these models are being trained on.
It's not just like, "Oh, I'm going to go run it on my other machine somewhere." It's, I think, analogous to when Apple moved all their hardware off Intel onto their own chips. That takes probably a good year, a couple of years, of planning to actually get there.

[0:30:04] SF: This stuff is too big to be fully self-contained within one company. Inevitably, you're going to get people who are competitive with each other, but also completely dependent on each other, from chips, to cloud infrastructure, to the models themselves, to the actual applications, and so forth. That's just inevitable, because it's so big. It can't be, essentially, self-contained within one company. We're well beyond that at this point.

[0:30:29] GV: If we then take an early leap over to, I guess, the human side - again, looking at this across multiple facets. We touched on this either last month, or two months back: what does the hiring landscape look like here? There was some data from TrueUp showing 67,000 open software engineering positions across 9,000 tech companies, which has roughly doubled since the mid-2023 low, and is up 30% in 2026. Coming back to these conferences we've been at, it's interesting. I did actually get quite a lot of questions from, I would say, especially younger people attending, which I love to see. It's really great to see people still studying, or maybe two years out. They were asking me, what do I think about engineering degrees, and will engineers be needed? I did just say, absolutely. I mean, it's just that the concentration of where engineering will sit will be in these companies that are simply doubling down on AI. The, I guess, net consumers of these platforms are maybe going to decrease their reliance on human engineers, but I think that's going to be far outweighed by the net increase from these huge players and all the ecosystem around the huge players needing just more and more engineering.

[0:31:46] SF: I mean, there's just a lot of stuff to build right now and there's a lot of experimentation. You need engineers to build it. I think the responsibilities of an engineer, and what day-to-day might look like for an engineer, are certainly shifting. Clearly, hiring is up 30%. I just think it's not business as usual. It's a little bit different. One thing that's interesting, too, is IBM - they're tripling their entry-level hiring of junior engineers, and Intuit is also going after juniors. One of the topics of conversation around agentic engineering, and what it means to be an engineer, is that some companies have focused on, "Hey, we're only going to hire senior engineers, because we want engineers that have some maturity in their career, so that we can rely on their judgment when it comes to evaluating what's coming out of the agentic engineering productivity tools, and so forth." I think IBM and Intuit and a few others are taking a different approach, where they're like, "Hey, these junior engineers are actually super valuable, because they're AI-native. They grew up adopting these tools and technologies faster. We want to invest in them." In some ways, I think this is nice to see, because I've been a little bit worried that if everybody's hiring senior engineers, how do you become a senior engineer in the future? What does that mean for the next tranche of engineers?
I think one disconnect, though, is a lot of the hype around AI and the things that you hear, like, there's going to be massive job loss and engineering is going to go away. We are actually seeing some impact in terms of enrollment in computer science programs. What does that mean for the next tranche of people who are going to be entering the industry? Suddenly, we don't have enough engineers to go around, if this hiring trend of a 30% increase continues to go up.

[0:33:38] GV: Absolutely. I think on role types, there's a slightly new role, called technical ambassador, that, apparently, OpenAI is hiring thousands of. It's really this bridge between what's being built and then, almost, solutioning with potentially non-technical stakeholders in these companies. Because the spending power can be so huge, but there's always this massive gap of, but what actually is this going to enable us to do? Can you show us examples? And so on.

[0:34:08] SF: I mean, applied AI is, I think, that function at Anthropic, for example. I think there are a lot of new forward-deployed engineers, which was a concept from Palantir that's now a very popular role. I think that's a sign of this transitional period that's going on, too, where companies have capital they want to deploy behind AI. It's really strategically important for them. But a lot of times, they don't actually know, or have the resources and know-how, to make that thing into something that delivers value to the company. When they do invest in a particular platform, or technology, they need people from that company to come and hand-hold them to get to a place where they can be successful.

[0:34:46] GV: Yeah. Final macro area on this. Yes, we're going to go back to security for a couple of minutes. We've talked about this several times, but I don't think you can ever talk too much about security. These tools are expanding what we'd call the attack surface faster than the actual security tools can keep up. We saw that with the Vercel side of things, where, basically, because they were just going fast with tools like context.ai, there were overbroad auth permissions there. Would that have been authorized if we weren't adopting so many AI tools and saying, "But I can't move fast unless it has access to everything," especially all the way up to leadership? I'm not talking about Vercel specifically. I don't know the ins and outs there. But I know that leadership generally, I think, is just under pressure to say yes, because if leadership says, "No, no. Your tools should stay very scoped, and they can't touch your email and they can't touch your Slack messages," well, then what's the point? I need my tools to know everything. That's what keeps me ahead of everybody else. Yeah, it's just one of these things where - I think it was Cisco's State of AI Security 2026 report saying that 83% of orgs plan to deploy agentic AI, but only 29% report being ready to secure it. Obviously, there's quite a disconnect between what people want to do and having the means to actually keep up on the security side.

[0:36:12] SF: Yeah, it's a huge challenge right now, because these silos and swim lanes that companies build up around the different parts of their company as they grow become barriers to AI, essentially, being intelligent and being able to draw interesting results across disparate data sources and things like that.
You want to give the AI system access to those things, but then you're not necessarily set up in a way to be able to do that successfully. Either you slow down and you try to figure out a way to do that where you can control it, or you just open things up, and then you take on a lot of risk where you might be exposing information that you don't want to expose. There's so much pressure for companies to be delivering value around AI, and to have press around it, and so forth, that there are probably a lot of companies bypassing, perhaps, their normal standard procedures around even vendor procurement and stuff like that. It's similar to what we talked about with engineering and the CircleCI report. There's pressure on engineering organizations to be putting out product faster, but not all the validation and verification of the AI-generated code is necessarily there right now. Either you end up not putting out more product, because you're spending the time to validate that the thing you generated quickly is actually working, or you skip that step and you're pushing out a lot of code that then potentially leads to further security problems.

[0:37:45] GV: To recap, we're basically trying to highlight here that people just think of their cloud provider as this neutral infrastructure, but actually, it's not. If you look at it, really, we've got this Google, Amazon, Anthropic triangle, with model choices and cloud choices converging. If you are building AI features - most of us are these days - the infra vendor's chip investments will probably shape the model performance. Just stay abreast of this, because it's worth looking at which cloud you sit on and which investors they're embedded with, especially on the infra side. If I can say so without being biased, Claude is still having a moment right now, and it just seems that if you're not using Opus for a lot of stuff, then you're being left behind. That's something to bear in mind.

[0:38:35] SF: One of the things that relates across some of the things we're discussing in the main topic is, we have a lot of money, some subset of that 700 billion dollars, flowing into these code generation tools, but there's still a significant gap in the downstream validation and testing. I wonder - I know there are some companies working on that - but it's always like, are we ignoring the real problem? We're so focused on the code generation and, essentially, compressing the time to POC, but there's all this work that happens after the POC stage to get that to production. Eventually, we're going to need some way to accelerate that, and also to have some confidence that it's actually correct, in order to take advantage of the speed that we're getting code generated at.

[0:39:22] GV: Absolutely. Then yeah, exactly, that bottleneck that you've just been touching on, Sean. Not to beat a dead horse, as they say, but security - that's an unglamorous part. I think the Vercel breach is a really great example of how it really has to be kept on top of. It effectively was a bit of a mission for our team, having to scramble within less than 24 hours to do what they had to do to keep things secure. I hope that we don't have one of those per week, or something to that effect. I think it is really important that anyone building with AI just has this in mind.
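To make the "keep tools scoped" point above concrete, here's a minimal sketch in TypeScript of deny-by-default, least-privilege access for an AI tool. The `grantToken` helper and the scope names are hypothetical illustrations, not any vendor's real API:

```typescript
// Minimal sketch of least-privilege scoping for an AI tool integration.
// grantToken and the scope names are hypothetical, not a real vendor API.
type Scope =
  | "calendar:read"
  | "email:read"
  | "email:send"
  | "slack:read"
  | "repo:read"
  | "repo:write";

interface TokenGrant {
  tool: string;
  scopes: Scope[];
  expiresAt: Date;
}

// Deny-by-default: the caller lists exactly what the tool may touch, and the
// grant expires, so a stolen token (the context.ai scenario) goes stale.
function grantToken(tool: string, scopes: Scope[], ttlHours = 24): TokenGrant {
  if (scopes.length === 0) {
    throw new Error("Refusing to mint a token with no declared scopes");
  }
  return {
    tool,
    scopes,
    expiresAt: new Date(Date.now() + ttlHours * 60 * 60 * 1000),
  };
}

// A meeting-notes assistant only gets calendar:read, nothing else.
const notesBot = grantToken("meeting-notes-bot", ["calendar:read"]);
// A coding agent gets repo access, but still no email or Slack.
const codeAgent = grantToken("code-review-agent", ["repo:read"], 8);
console.log(notesBot, codeAgent);
```

The design point is that "give it access to everything" stops being the path of least resistance: every scope a tool holds is enumerated, reviewable, and time-boxed, which also shrinks the blast radius when an upstream vendor is compromised.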
I guess, moving on to what we often think of as our favorite part of the show, Hacker News highlights, where we get to bring in something that's piqued our interest over the last couple of weeks. Do you want to kick us off, Sean? What did you come across?

[0:40:07] SF: I was really hoping I could find Doom running on a lawnmower sprinkler system, or something, but I went in a different direction. The first one I wanted to mention - I just thought it was interesting. I come from this background with my PhD research, and so forth. This isn't that complicated, but I just love when people find new ways of representing data and interesting trends and things like that. I'm a big fan of the book Freakonomics, which dives into a lot of these weird numbers -

[0:40:37] GV: Oh, me too. Me too. I love that.

[0:40:38] SF: - and trends. But this is US gender ratios by metro, which was posted by NSokolsky, or something. Sorry if you're listening and I butchered your name. But, essentially, it shows a breakdown of gender ratios by state, city, and metro. Some interesting things there, like Washington, D.C. is the most female-heavy city in the U.S. It's only 45% male, 55% female. Then the opposite end of the spectrum is Colorado Springs, which is the most male-heavy city. There are a lot of guys walking around in Colorado Springs, apparently. It's 55%. Then, not too shocking in a lot of ways, Silicon Valley is pretty male-dominant. It's the third highest region in the US, probably because there are a lot of engineering jobs here. Engineering typically skews very heavily to the male side. You end up with a somewhat male-dominated area of the world versus other parts of the world. Then some cities are almost exactly 50-50.

[0:41:37] GV: I find it fascinating why, say, D.C. is, I guess, 55% female and 45% male. Yeah, very fascinating. I wonder if I can find the same for European cities, or that kind of thing. I wonder if there are any massive, massive disparities there, or something like that.

[0:41:54] SF: Yeah. I don't know. It would be interesting to look at.

[0:41:56] GV: Yeah. On my side, the first one is one of these feel-good developer things, just a little blog post by somebody. The blog itself is matthewbrunel.com. This was posted by user SpecX. Could be the same person. Who knows? But this was called "Using coding assistant tools to revive projects you were never going to finish," which I think is fairly self-explanatory. I think it's just nice - pulling a quote from it, Matthew said towards the end, "In my mind, there are different buckets for personal projects. One is things I do to learn and grow. The other is things I really wish existed. This project falls into the second bucket. Using AI coding assistants to revive those projects is a form of wish fulfillment. I think now I can have the project that I wasn't able to have before. One less metaphorical book sitting on the bookshelf." A bit of a long quote, but I think we can all identify with that - things that we started. Quite frankly, it's not that we didn't want to finish them, but things get in the way, and we did obviously see that, oh, this is going to be way more time-consuming than I expected. It would be fun, but I simply don't have the time. That's pretty cool.
We can actually, even if we're not coding day to day, bring in these tools. I can think of at least five projects that probably fit that bucket. I probably should, quite frankly, hook up Claude Code and be like, hey, here's where I was trying to get to. Finish it off. Please upgrade some packages, secure the whole thing. There we go. I think that's probably something I'll be doing on a lot of the travel I've got for the next couple of weeks.

[0:43:38] SF: Yeah. I think there's almost a sub-class of that second bucket, where you have these project ideas that are like, "Oh, yeah. I would like to do that, but I just don't even have the time to start it, because it's going to take too much time." But now you can prompt your way to doing that fairly quickly. I was using Claude to build some games for my son based on ideas that he had. I certainly could do that - I had the programming skills to do that - but it would have taken a reasonable amount of time and energy to crank that out for some little game that he may or may not even play with, versus being able to take his ideas and turn them into something, and have Claude churn away at it and pump it out. Then have him explore. It was a good way for him to see how you can turn ideas into something that manifests itself as computer software, or some product, or something.

[0:44:28] GV: What was your second one, Sean?

[0:44:29] SF: Yeah, so the second one was this headline that came up this week, which was Granite 4.1, IBM's 8 billion parameter model, which matched their previous 32 billion Mixture-of-Experts model. They have three different open-source models - 3 billion, 8 billion, and 32 billion - under Apache 2, trained on 15 trillion tokens. I think it made a lot of headlines because they were able to match the performance of the 32 billion model with the 8 billion across nearly every benchmark. The way they were able to do that was they really focused on data quality over parameter scaling. They had this multi-phase training pipeline to do that. I think that's really interesting, because over the last year-plus, when it comes to models, a lot of times we talk about the limitations of the models, like we're running out of data to train the models on. Where are we going to get all this data? I think one of the things that they were able to show with this is that, hey, we can actually drastically improve the quality of the model while keeping the model size reasonable, if we really focus on data quality during the training process, during reinforcement learning, and so forth. They were also really open about some things that went wrong in training and how they fixed that, too, which I think was a little bit different and refreshing.

[0:45:51] GV: Yeah, that's fascinating. Just again, the leaps of progress are insane. Yeah, that's really cool. Yeah, I guess, thanks to user SteveHerring1 for putting that in. Yeah, on my side, the second one - at least when I found it, it hadn't gotten tons of points. I wonder if this is one of those Hacker News articles where - I'm not sure how well known this is, but you can submit an article to Hacker News and it might not do very well. You'll get one or two points, or something. Every so often, someone at Hacker News will actually reach out to you and say, "We're going to repost this, because we think this is really interesting and it should actually get more airtime." It's interesting - that happened to one of mine.
I posted something about remote-controlled telescopes. You could bring your telescope to this facility and put it somewhere. Then they would store it and run it, and you could remote into it. Yeah, it didn't go anywhere the first time I posted it. Then Hacker News said, "Hey, we're going to repost it." It went to the top, which is interesting. I think this could be one of those, where it's only got 28 points, but it was on the front page. It's called Cheating at Tetris. The website is chalkdustmagazine.com. The user was T-3. Nice short name there. The TLDR here is: imagine you get to pick which Tetris pieces your opponent must play. Could you force them to lose? The article works through this mathematically, as you'd expect. So, how do you cheat at Tetris? I think everyone roughly remembers Tetris - all these shapes fall from the top of the screen and you need to rotate them. The whole point of the game is that if you make a complete line, that line disappears. But if you don't make a complete line, it starts to stack up. Then if you hit the top of the screen, it's game over. You've got these different piece types, different shapes. You can think of them as the letters I, J, L, O, S, T, Z. These can be played indefinitely without causing game over, so picking just one piece type won't work. If you were to mix just the S and the Z blocks, since they don't actually fit neatly together, they say the best the player can do is to create separate columns of each. But because the board is 10 cells wide, fitting exactly five two-cell-wide columns, you always end up with an odd split - three of one type, two of the other - which causes an imbalance that slowly fills the board. The article goes on to identify which piece combinations can mathematically guarantee a loss, regardless of how well the player plays. I do encourage you, if you're a Tetris fan, to go and look at that. It's very hard to explain all the permutations, obviously, in a quick Hacker News highlights section here. But yeah, chalkdustmagazine.com - go there and you can check out that fun piece.
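If you want to play with that counting argument, here's a loose back-of-envelope model in TypeScript - a sketch of the imbalance idea, not the Chalkdust article's actual proof. It assumes a 10x20 board, that vertical S and Z pieces stack cleanly two rows at a time in dedicated 2-wide columns, and that the feeder alternates S and Z; with the forced 3-2 column split, the piece type with fewer columns tops out first:

```typescript
// Toy model of the S/Z counting argument. Assumptions (not the full proof):
// a 10x20 board, vertical S pieces stack in 2-wide "S columns" and vertical
// Z pieces in 2-wide "Z columns" (each piece adds 2 rows of height), and
// pieces spread evenly so height tracks the per-column average.
const BOARD_WIDTH = 10;
const BOARD_HEIGHT = 20;
const COLUMN_WIDTH = 2;
const totalColumns = BOARD_WIDTH / COLUMN_WIDTH; // 5 columns: an odd number

// Best case for the player: split the five columns between the two piece
// types, e.g. 3 S columns and 2 Z columns. The split is always uneven.
const sColumns = 3;
const zColumns = totalColumns - sColumns; // 2

let piecesPlayed = 0;
const heights = { s: 0, z: 0 }; // modeled stack height per piece type

// Alternate S and Z. With equal piece counts, the type squeezed into fewer
// columns gains height faster (2 rows spread over fewer columns).
while (heights.s < BOARD_HEIGHT && heights.z < BOARD_HEIGHT) {
  const type = piecesPlayed % 2 === 0 ? "s" : "z";
  const columns = type === "s" ? sColumns : zColumns;
  heights[type] += 2 / columns; // average height gained per piece
  piecesPlayed++;
}

console.log(`Board tops out after ~${piecesPlayed} pieces`);
console.log(
  `S stacks: ~${heights.s.toFixed(1)} rows, Z stacks: ~${heights.z.toFixed(1)} rows`
);
```

Under these assumptions, the two Z columns hit the 20-row ceiling after roughly 40 pieces while the three S columns are only around 13 rows tall - the "odd split slowly fills the board" effect in miniature. Real play is messier, which is why the article's full case analysis is worth reading.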
Yeah. I guess, looking ahead, for the first time in history, Sean and I are probably going to meet in person, which will be fun. We're going to grab coffee, hopefully this weekend in SF. Apart from that very exciting event, anything else prediction-wise over the next month, Sean?

[0:48:49] SF: I don't know. I don't have a really good prediction. I mean, I could go with the lazy one that I did last time, which is we're going to have more security breaches, which I think will actually be the case. I don't have an out-there prediction this week.

[0:49:00] GV: Yeah, unfortunately, security was hovering in my mind as well. I do sound like a broken record at this point with that one. I often just look around the room and see what I'm looking at. For example, I've got a power bank, which I'm aware does have some processing power in it somewhere, because of its USB-C capabilities. I'm going to say that somebody hacks Doom onto a power bank, or a power plug that has USB-C outputs. Someone's running Doom on a power plug, or something. That would be fun.

[0:49:30] SF: I guess, the one tying back to one of the topics we talked about is the Anthropic Mythos model. Is it going to get more widely available by the time we talk next?

[0:49:41] GV: Yeah, that's a good one. That's a good one. Yeah, maybe I'll just take a bet that they open it up to, say, 10 non-hyperscalers, but the 10 are slightly controversial - people have a lot of opinions about that choice, for example. Let's see how that goes. Thank you, everybody, for tuning in. Hope this has been helpful and interesting, getting to catch up on what's been going on in tech. So much is happening. Always useful to have us try and condense it for you and give you a quick summary. Yeah, thanks for listening, and we'll catch you next time.

[0:50:12] SF: Thanks, everyone. Cheers.

[END]