EPISODE 1904

[EPISODE]

[0:00:13] GV: Hello, and welcome to SED News. We're back after the holidays. I'm Gregor Vand.

[0:00:19] SF: And I'm Sean Falconer.

[0:00:20] GV: This is another one of our slightly different formats of SE Daily, where we take a spin through the tech headlines, look at a slightly more meaty topic in the middle, and then tend to round out with some of our favorite highlights from Hacker News. As always, we just like to catch up on what we've been up to. Obviously, we've not actually done an SED News in about eight weeks, given the holidays. Yeah, Sean, how was your holiday season?

[0:00:46] SF: It was good. I went back to Canada, where I grew up, for the first time for the holidays in 14 years. I've been maybe purposely protecting my family from what that experience is like. They're California natives, so they got that experience for their first Christmas holiday winter in Canada. But it was very nice. We had a white holiday, so that's always fun. How about you?

[0:01:11] GV: Yeah, still in Singapore, so I hadn't really accrued enough leave to take a lot, having started a new role back in October or so - that's just the realities of what happens when you start a role. It was nice. Still nice to be over here. One thing that did occur, though, was I did hop back to the UK just for a week to see some family, and I purposefully picked flights that were Starlink-enabled, or at least I thought they were going to be Starlink-enabled, and they were. We talked a bit about this in the last SED News. I have to say, it was an absolute game-changer. Yeah, I think I did send you, from the plane, a screenshot of the speeds, Sean.

[0:01:52] SF: Yeah. I saw that.

[0:01:53] GV: Yeah. It was just mind-blowing in terms of the quality of the connection and the speeds, and the fact that it's gate to gate, so you literally get on the plane, you've barely sat down in your seat, and it's already available.

[0:02:08] SF: That's great, because I find it really frustrating a lot of the time when you want to use Wi-Fi on the plane, and you have to be in the air for some reason before you can actually enable it. I don't know what the rule is. So, I was in Anaheim this past weekend for my daughter's birthday. We went to Disneyland. As I was in the airport at John Wayne in Orange County to fly back to San Francisco, I got a message from United saying that there would be free Wi-Fi courtesy of Starlink on the flight back.

[0:02:36] GV: Oh, nice. Okay.

[0:02:37] SF: Yeah. That was pretty cool.

[0:02:39] GV: Yeah, so my understanding is that if Starlink supplies Internet to an airline, apparently, the contract says they're not allowed to charge consumers for Starlink. Basically, it has to be free, which is great. Yeah, the other bit is just how simple it is - at least on Qatar Airways, you just connect to the Wi-Fi, and that's it. There are no logins, no signups, or anything. You literally just hit hey, connect to plane Wi-Fi, and off you go.

[0:03:06] SF: Least amount of friction possible.

[0:03:08] GV: Exactly. No friction. I did genuinely get a bunch of work done. I had two eight-hour flights, and on the first eight-hour flight, I could actually just sit on the laptop and do a whole ton of stuff. Absolute game changer. I'd definitely suggest anyone looks out for flights with that in the future if you plan to get some work done, basically.
Yeah, I believe some people highlighted this to me at work, but yeah, there's been a bit of an acquisition on your side, Sean. Is that right?

[0:03:34] SF: Yeah. I mean, the big news back at the beginning of December was that Confluent is being acquired by IBM. Now, there's of course a lot that has to happen for a deal of that size and magnitude to go through. But it's been shaking up my world a little bit over the last month and a half or so. It's been an exciting time, and we'll see where things go. At some point, probably this year, I'll be getting paychecks from IBM.

[0:03:56] GV: Wow. Yeah, that is a bit of a change. On my side, I work for Supabase now. I think some people know that already. Yeah, I was at the Supabase offsite. Supabase is a predominantly remote company, but they get everybody together once a year. This was about 200 people in the same place, at least half of whom hadn't ever met each other. Always a fun one. Yeah, a week of just getting to know a ton of people, and definitely worth the effort on everyone's part. When you're growing that fast and almost half the company hasn't met the other half face-to-face, it really eases a lot of communication, I think, as well.

[0:04:34] SF: Yeah, absolutely.

[0:04:34] GV: Going on to the headlines. We've got quite a lot to cover. We're going to be covering a little bit from almost pre-holiday as well, just so that we haven't missed some of the bigger things that happened, and make sure that we dive into them a little bit. We'll start with the more recent ones. Yeah, Tesla has killed Autopilot. This was their driver assistance system. It was never meant to be self-driving - that wasn't allowed to be said, exactly. Yeah, I mean, with Waymo especially coming into the fold, it's interesting that something Tesla kept pushing as their big differentiator, yeah, they just killed it. It's been in the vehicles since 2014, I believe. Yeah, that's quite an interesting thing. Just riffing off of that, ironically, very recently, I think a couple of weeks ago, someone actually had a Tesla drive itself from LA to New York, completely unassisted by a human, including all highways and charging stops. It clearly works, but there's something here where Tesla has just decided this is not where they're going to go.

[0:05:45] SF: Don't they have another self-driving mode that's not the assisted version, but a full self-driving mode?

[0:05:51] GV: Oh, I see. They're just going to be deprecating it, right?

[0:05:55] SF: Yeah. I think what they're doing is, they had, essentially, the assisted mode, and then they also had a full self-driving mode. I believe they're essentially, in software terms, deprecating the assisted mode and going 100% into full self-driving mode. Which I think probably makes sense. I'm not an expert in self-driving cars, but I think, presumably, they see that's where the future is going, and they want to put most of their resources behind that and have the people who are driving Teslas start to move towards enabling their vehicles for that mode.

[0:06:29] GV: Nice. Okay. Yeah, that makes more sense. Yeah, there were another couple of things on the mobility front, just that Waymo's being investigated at the moment for some infractions, I think, of Waymo cars driving past school buses on the wrong side, or something like that. Again, I don't think this has any real bearing on whether self-driving vehicles are going to stick around. I think the answer is yes.
I think something at this stage would have to go incredibly wrong for that to not hold up. If anything, more cities want to have these vehicles on the roads. I believe, in Singapore, they're going to be trialing it pretty soon, and I really look forward to hopefully having self-driving cars here.

[0:07:08] SF: Yeah. I mean, it's pretty amazing when you just pause for a moment and think about the fact that we have a robot car driving around and picking people up and dropping them off and stuff. It's like flying. When you fly, it becomes this normal thing. But it's pretty amazing, the fact that you can go in this metal vehicle and fly through the air and stuff. And a lot of it, actually, is autopilot at this point, too. Of course, there's a pilot there to do a lot of the takeover necessary and all this stuff. But it's like, these technologies - for my kids, when they're adults, self-driving cars are probably not going to be anything special. It will just be the normal thing that they grew up around.

[0:07:50] GV: Yeah. No, you're totally right. Yeah, it's strange, actually, that we're able to talk so nonchalantly about self-driving cars. Yeah, you're right about the flying. There was a quote that I read recently about flying, from not that long ago - probably 70 years ago, which is really not that long ago. Apparently, a pilot would say, a good landing is one where the people are able to get out of the plane alive, and a great landing is one where you can actually reuse the plane. Given that we're now at the stage where we're able to reuse planes on thousands of flights per day, that's just insane.

Moving on. Apple, they're actually going to be using Gemini. Reportedly, a Gemini-powered Siri will land in February. This is interesting, because as I think most people have caught on, Apple's AI foray has not maybe gone to plan, or maybe it's going exactly to plan, which was just to sit back and do nothing for a while. Although Siri was supposed to be more than that, I think, to some degree, and clearly, in many people's eyes, it has failed, because what Siri promised was actually more like what things like ChatGPT ended up becoming. And so, people could see the stark difference in quality. Yeah, what's your take on them actually reaching for Gemini at this point?

[0:09:10] SF: I think it's interesting. I think Apple certainly has never had a strong presence when it comes to AI. Back when assistants first came out - when you had Google Assistant, you had Alexa, you had Siri - Apple's, I think, was by far the least useful and most frustrating one to use. I think it makes sense for them to perhaps not try to own the model part of it and team up with another company. Apple's good at user experience and good at hardware design. Maybe a better move for them is to focus on the form factor around AI. Whereas a company like Google is really good at performance, they're good at AI, but maybe not as good, historically, at hardware design. Can you bring these two giants together to create something interesting?

[0:10:02] GV: Yeah, absolutely. I think this is a good move for them. Really, it's just, could Apple let their ego not get in the way and actually partner with somebody else? I believe they let go, or shuffled out, a couple of people on their side - the senior AI people, I believe, including the person that was behind Siri in the first place. I think he has gone on to retirement, or something else.
Yeah, there's clearly a big shift in their strategy, which is just to partner now and get moving with the level of AI that most consumers would expect at this point.

[0:10:39] SF: Yeah, it's interesting, though, what you were saying in terms of - whether it was Google Assistant, or Alexa, or Siri, they were this promise of AI that you would interact with all the time and have a conversation with. I think part of it was maybe the technology just wasn't there, and they ended up being glorified timers - give me the weather, set a reminder, really, really basic stuff. Maybe part of it, too, was because they were audio-based, or maybe the killer app for audio just never broke through. I think ChatGPT and that interface is eventually what the original vision and goal of those products was - it's just that ChatGPT had a couple of things probably going for it. One was, of course, a much more powerful AI model. Then, they didn't try to go audio first. They actually went with a typing interface. Maybe that form factor is something that just, for whatever reason, seemed to work better for people having conversations and so forth, versus actually having an audio-based conversation.

[0:11:44] GV: Yeah, it's a really good point. Because I think especially with films like Her, which was over a decade ago at this point, where you've got Joaquin Phoenix being this person who has this voice assistant AI thing that is basically like a human - I think that was maybe everybody's framing of, oh, this is what the future will be when it comes to AI. Apple likes to try and wow people. That's maybe where they were going with this. Yeah, you're completely right. Then, people had this bad perception of "chatbots," because we had been told that this customer service thing was an AI chatbot, but actually it was just a bunch of if statements. It's a double thing. The ChatGPT team clearly thought, well, yeah, voice is not the thing. Actually, if we can show just how far above anything people have seen so far we are on the chat front, then maybe that's the way forward. Just tracking back, it was John Giannandrea, apparently, who was Apple's AI chief, who's just headed out. Yeah, so let's see. Apparently, in February, we should be able to see some of these updates coming into Siri. That was reported by Bloomberg and picked up also by TechCrunch. Check out those for any more details.

Moving on. This was something that did touch the news before the holidays, but it's an interesting one, and it leads a little bit into more of the main topic today, which is just around the hiring environment and what a developer might expect in 2026 - what to be thinking about when it comes to landing a job, or moving jobs, in 2026. Yeah, this was the Wall Street Journal reporting that OpenAI is ending the vesting cliff for employees. For those not fully up to speed on roughly how equity works when you join a company, there are various forms of equity - restricted stock units, or what are called ESOPs as well. Generally speaking, if you join a company and they offer you equity, there is what's called a vesting cliff. That is a longer period at the start of your tenure where that stock is not really allowed to be yours until after a certain point in time. Usually, that's one year. It's saying, hey, if you manage to stick it out past one year, you now have the right to apply to get this stock in your name.
There are lots of other complications there, but that's a basic way of putting it. What OpenAI is saying is you're not going to have to wait one year. You're going to be able to join the company, and I think it's almost monthly, or at worst quarterly, that you're going to have these chunks of equity vest, which is just a hugely different way of doing things compared to virtually any other tech company out there. Yeah, what did you think of this, Sean? I mean, you're right amongst the companies in the valley, and this felt like quite a shake-up to how they all operate.

[0:14:44] SF: I would assume that OpenAI's approach to this is that they feel like it's going to give them some kind of competitive advantage in attracting top-tier talent. There's such a competition around hiring the top-tier AI talent right now, just like we've covered before with the big alleged bonuses that Meta was giving out for various AI leaders and stuff like that. This is the latest move that OpenAI is making to try to hire the best talent and make it really attractive. If you can start vesting from day one, I guess it reduces some of the fear that you might have with joining a company, where maybe you're like, "Well, what if I really don't like it? Then I'm stuck there for a year, or I walk away with nothing for the time invested." I get it. Maybe that helps them close more candidates.

It does certainly change, I think, the nature of how people potentially think about their jobs, right? When you join a startup, I think part of the fun of joining a startup is trying to be part of something bigger than yourself. You're joining for the mission. Obviously, everybody wants to be rewarded for the time that they're putting in and see some financial payout. It's just that, certainly at an early stage, the chances of that payout actually happening are pretty slim. You're buying into, hey, I really believe in the mission. Even if this fails, I really want to try to make this happen. Then there's also, of course, the fact that you've learned a lot during that process. This changes the nature a little bit, I think. It's almost like you're not getting a bunch of people who are necessarily there for the mission, but you're getting mercenaries who are there because they care primarily about the financial outcome. I just wonder how that changes the psychology, or does it create certain biases even towards the type of employee that you're going to attract?

[0:16:43] GV: Yeah, for sure. I'm on the fence here. Apparently, OpenAI's Applications Chief, Fidji Simo, had said that this was to encourage new employees to take risks without fear of being let go before accessing the equity, which is what you also touched on, Sean. Yeah. I mean, that makes sense. I guess, as someone who has built a company in the past, I think I would just feel very uneasy about bringing people into the company, and within one month, they suddenly have shares of the company, and they could, in theory, then walk away if they want, and they walk off with that equity. Very different scales here, though. With the speed at which OpenAI has to develop world-beating products, I guess a year is just maybe not realistic anymore. Someone maybe can come in and make a huge impact in, say, six months. Actually, that's good enough. If they can't hold on to them, so be it.
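To make the cliff-versus-no-cliff difference concrete, here is a minimal sketch comparing a standard one-year cliff against vesting monthly from day one. The grant size and the four-year schedule are illustrative assumptions only, not OpenAI's (or any company's) actual terms:

```python
# Illustrative sketch only: hypothetical grant size and a standard
# four-year schedule, not any company's actual terms.

def vested_fraction(months_employed: int, cliff_months: int, total_months: int = 48) -> float:
    """Fraction of a grant vested after `months_employed` months, given a cliff."""
    if months_employed < cliff_months:
        return 0.0  # before the cliff, nothing is vested
    return min(months_employed, total_months) / total_months

grant_value = 400_000  # hypothetical total grant, in dollars

for months in (6, 11, 12, 24):
    with_cliff = vested_fraction(months, cliff_months=12) * grant_value
    no_cliff = vested_fraction(months, cliff_months=0) * grant_value
    print(f"{months:>2} months in: one-year cliff = ${with_cliff:>9,.0f}, "
          f"monthly from day one = ${no_cliff:>9,.0f}")
```

Under these assumed numbers, leaving at month 11 means walking away with nothing under the cliff, versus nearly a quarter of the grant already vested with monthly vesting - which is exactly the fear-of-walking-away-with-nothing point discussed above.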
Yeah, it's a very interesting way to differentiate. Rather than dangling bonuses like Meta was doing - and, well, that didn't play out quite so well; it's been a bit of a mess over there in terms of AI hiring and strategy - this is a different way to do it. But that said, it's not like this costs them zero. OpenAI is projected to spend 6 billion dollars on stock-based compensation this year. That's nearly 50% of their entire projected revenue. I mean, it still costs them a lot of money to run this.

[0:18:09] SF: Yeah. I mean, they're basically spending 50 cents for every dollar that they earn just to keep the people building the product. We're going to talk a little bit about the code red scenario and the pressure they're getting from the hyperscalers, in particular Google. In this hyper-competitive world of AI, people are just going full steam ahead, pressing the gas all the time. You're competing against some of the most well-resourced companies in the world, and they're also trying to own the same space. It just seems to be a place where, for some period of time, people are going to have to burn a lot of money to try to own that space and do as much of a land grab as possible. It's like the early days of the ride-sharing wars between Uber and Lyft - both companies were spending a lot of money and were massively in the red to try to land grab across the world, in all these different places, and really roll out globally. It probably takes a long time for them to rebalance the earnings there. But if you can dominate that space, then of course there's going to be a return on that investment eventually.

[0:19:16] GV: Absolutely. Yeah, I look forward to seeing how this plays out. I had a friend that worked at Meta a while back. They also had a slightly more generous stock scheme than most. I think it was something like a six-month cliff, and then it was monthly, something like that. Interestingly, that person didn't stick around that long at Meta, and I think did quite okay out of the shares that vested. Then again, they probably contributed a lot. Yeah. I mean, do you value ICs who come in and do a huge amount, but then maybe get restless more quickly? Or do you value people who are in it for "the mission" and plan to stick around, or whom you're trying to get to stick around for a longer period of time? I think that remains to be seen.

As you called out, Sean, even though you can have vested stock, the likelihood that you're actually able to benefit from it is pretty low in the grand scheme of things. Again, for anyone not familiar, you have to have what's called a liquidity event, which basically just means some way that these vested shares can be bought by someone - maybe one of the investors, maybe the company themselves buys them back from you, or the most commonly understood one, an IPO, which these days is not impossible, but it's more unlikely than it used to be. Some companies are very unusual, but very friendly in this way, where they allow some portion of your vested stock to be bought back during any funding round, for example. When they go from Series C to D, they actually set aside some money, and they say, "Hey, if you've got some vested stock, then let's just say 25% of whatever's vested, we'll allow you to cash that in." That's an incredibly friendly approach and does not happen a lot.

[0:21:10] SF: Yeah. I mean, Databricks is doing that.
Databricks seems to be, at least, on a path where they're never going to go public, so they need some way of creating, essentially, a liquidity event for their own employees. Then, of course, there are secondary markets, too.

[0:21:23] GV: Yeah, true. Though I've also seen some of these friendly policies actually explicitly say you cannot use secondary markets. It's like, hey, here's a friendly policy, but you can't actually use a secondary market as well. Yeah, the high level here is, for anyone thinking of joining a company - and we're going to get onto this as maybe slightly more of the main topic - anyone joining a company where equity is part of the offer: think very hard about what you believe the prospects of this company are. Do you trust the management to make good on what they say they'll achieve and on how they plan to make liquidity happen? Because often, you can be negotiated down on salary quite big time if you're made to believe that the equity is more valuable than it really is.

Yeah. Shopify famously now allow all employees - they're a public company, so it's a little bit different, but they famously allow employees every year to, apparently, use a literal slider to choose between equity and cash. You can just choose how much of your compensation each year is which. Part of the reason they did that was because they had a crazy spike in the share price. They found that, over a couple of years, for people who joined, unfortunately, the price the equity was offered at, what's called the strike price, was just crazy high. There was almost no point in receiving equity. I think that's very smart, where they just say, look, actually, if you look at the numbers and don't think the equity right now makes any sense, don't take any. That's fine.

Going back to something you touched on, Sean, and what's going to be our next, final news topic - actually, no. Sorry, penultimate news topic. What happened before the holidays was the OpenAI code red. This was where OpenAI basically acknowledged that Google is actually beating them on the approach to producing models. This boils down to TPU versus GPU. Google uses and has developed what are called TPUs, and we'll get into that in a second. Meanwhile, most other producers use GPUs. NVIDIA is the most famous producer of those. I think the turning point was Gemini 3. Is that right?

[0:23:38] SF: Yeah. I think it's called that. Yeah.

[0:23:41] GV: This is maybe also linked to this vesting piece with OpenAI. They went, "Oh, shoot. We have to find a way to get people in the door now and not have them worry about this one-year thing. We just need them producing now." Yeah, let's just talk a little bit about this whole GPU versus TPU thing. It's obviously been covered quite a bit before the holidays, but I think it's still pertinent to understand where these two sides are coming from.

[0:24:04] SF: Yeah. I mean, I think that this is the latest in some of the unfair advantages, I would say, that Google has over a company like OpenAI when it comes to model development and also distributing that model to a number of people. One, they have a lot of data that companies like OpenAI don't have access to, because they have a lot of products where people are generating that data, or, because of Google Search, they also have deals where they're able to index data that is typically behind paywalls and things like that.
Other companies don't have those deals, which Google carved out over 20 years of being the main search company in the world. That's one big advantage. They also have thousands of products in the B2C space that reach billions of customers. When they come out with a new model, they have so many different surface areas where they can touch users. It's a much wider reach than a company like OpenAI has currently.

Then, the GPU-TPU thing is another advantage that they have. Now, most people are building and running models on GPUs. I think if you've been paying attention to the market at all, or to what's going on, you probably have some sense of the importance of GPUs. But they were originally designed for gaming and graphics, and they're really great at parallel processing. Because they were designed for graphics and gaming, they also have to do other things, like render textures, physics, and lighting. In the world of AI, they have this architectural baggage that doesn't really help when it comes to running AI processing. They end up spending energy on tasks that are independent of the pure AI math that they actually need. Whereas the TPU, which is a Tensor Processing Unit, is a chip built specifically for doing AI math. What Google did was strip away all that baggage and design it to do one thing only and do it really well, which is massive matrix multiplication - essentially the core thing that you need to do for deep learning.

They originally did this back in 2013, because they had realized that if every Android user just used voice search for three minutes a day back then, Google would have to double their data centers as a result. It wasn't like they envisioned this world, or anything like that; they just had this problem. They used the fact that they have a lot of really smart engineers and a lot of resources, and they were just like, "Hey, I know, we'll solve this problem. We'll design our own chip, where we can run this much more efficiently and reduce the cost, essentially, of scaling these data centers." Because one of the challenges with AI right now is that, whether you're training models or running inference, the people running that have to buy those GPUs from NVIDIA, and NVIDIA has a 75% gross margin on their chips. Whereas, if Google's building their own TPUs, they avoid that markup internally, while everybody else in the market is essentially paying NVIDIA for the chips. That's a pretty massive advantage that they have. They've also restructured their company very much because of the existential threat that AI posed to the company a couple of years ago. Now the whole company is steering in that direction of owning the AI space. I think the combination of all those things led to this code red situation at OpenAI.

[0:27:18] GV: For sure. I guess it's interesting that maybe OpenAI just underestimated the power of this approach. Maybe for good reason, because Gemini 1 and 2 were impressive, but - at least with Gemini 1, the large token window was its biggest advantage. People would use it for tasks where that was really the crux of what they needed to get done, before other technologies came along to help chunk things up. Gemini 3 is actually where people say, "You know what? This is actually producing way better results across a bunch of factors."
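As a rough illustration of the "AI math" described above: a single dense layer's forward pass is essentially one big matrix multiplication, and deep learning models are stacks of these, which is why hardware built around nothing but matrix multiplication pays off. This is a minimal NumPy sketch with illustrative shapes, not the dimensions of any particular model or chip:

```python
# Minimal sketch: the core operation behind most deep learning compute is a
# large matrix multiplication. Shapes below are illustrative only.

import numpy as np

batch, d_in, d_out = 32, 4096, 4096          # illustrative sizes
x = np.random.randn(batch, d_in)             # incoming activations
W = np.random.randn(d_in, d_out)             # learned weights
b = np.zeros(d_out)                          # learned bias

y = x @ W + b                                # one dense layer = one big matmul
print(y.shape)                               # (32, 4096)

# Roughly 2 * batch * d_in * d_out floating-point operations for this one
# layer; training and inference repeat this across many layers and tokens,
# which is the kind of workload a TPU's matrix units are specialized for.
print(2 * batch * d_in * d_out)              # ~1.07 billion FLOPs
```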
I mean, one of them was actually more on the image front; all these infographics floating around before the holidays were all Gemini 3, basically. Again, Tobi Lütke, the Shopify CEO, posted a pretty great one, saying, this is Gemini 3, and I'm using it because I want to show it off. He took one of his talks, I think, that he'd given to the company, and he really wanted to capture that in an infographic. Yeah, I think Gemini 3 did a great job of that. Very interesting.

[0:28:28] SF: Yeah. I mean, the graphics stuff has come a long way. I encourage you to play around with Nano Banana if you haven't.

[0:28:35] GV: Exactly, that was it. Nano Banana. Yeah, yeah.

[0:28:38] SF: Yeah, yeah. It's amazing. Because, at least when I played around with some of the image generation stuff from the models before, you could get there, but it took a bunch of prompting, and they tended to spell things wrong a lot. It took a bunch of work to get what you need. Then, after the latest model came out from Google, it's just one shot, and you have something pretty amazing. It's a really obvious jump in terms of the step function of capabilities.

[0:29:04] GV: For sure. The only downside was, in my opinion, LinkedIn just became absolute garbage with all these Nano Banana images. Impressive, but it seemed like nobody could post a thing without one being part of the post, and yeah, I had to shut off LinkedIn for a while. Yeah, very impressive. There hasn't really been a lot reported since the code red, and then it all died down a bit over the holidays. Yeah, I think this is one to keep on watch. I'm sure we'll be revisiting where these two are at on this front, probably later in the year.

Then, finally, just a final headline news point. This was a company called Manus being acquired by Meta. Now, I had heard of Manus, since I live in Singapore - we're going to get on to why that's significant in a second. I don't think a lot of the world had really heard of Manus. I'm curious, had you heard of Manus before the acquisition, Sean?

[0:29:55] SF: No, I hadn't. I saw that you had surfaced this, and I did a little bit of digging, but I hadn't heard of the company previously.

[0:30:02] GV: That was my impression - that a lot of the world, and especially the US market, actually hadn't heard of them. When this was announced, I mean, 2 billion. Nothing for Meta; that's maybe a rounding error at this point. It's still not a bad exit for a company that was maybe struggling to find exactly its product-market fit, in my opinion. What does Manus do? It's very much consumer-facing AI, but they focused more on production of an end product. Especially things like presentation decks, or a full website that can just get spun up and hosted, that kind of thing, as opposed to a chatbot helping you understand something. It's really, tell me what you want produced for work purposes, basically - that was their angle. That does require more of an agentic approach, with various processes going on to put that thing together. Very expensive, though. I think that was part of the problem. A, they were running out of money. B, consumers - I went to a conference last year, and I did mention this in one of the SED News episodes, where someone actually stood up and asked very publicly of the CTO of Manus, "I don't understand. I spend so much money on Manus, and it just runs out every week. I've got to keep applying more credits. Do you plan to address this?" There's a bit of tension there, clearly.
Users say, okay, the outputs are maybe helpful, but I really don't understand your billing model, and I think I've spent too much money with you guys. They seemed to be fundraising - those were the murmurs over here. Just to point out, what is the Singapore connection? People refer to them as a Chinese startup, which is true. They started in China, but they did move to Singapore in 2024. The big reason for that was really de-risking and trying to move their image away from being part of the Chinese ecosystem - they wanted to be part of the world ecosystem. That seems to have played out in this Meta acquisition, because I think it would have been unlikely that Meta went to try and acquire them if they were still based in China.

[0:32:09] SF: Yeah. I mean, that is a challenge, for sure. I think I read that all ties to China have either been severed or are being severed. It's interesting, you said that they were running out of money and they were fundraising. I guess that'd be the case because they were spending a lot on token costs. I had read that they were doing 100 million dollars in revenue. There certainly were customers and people paying for it.

[0:32:30] GV: Exactly. People were paying for it. I'm curious to know how much this was actually costing them to run. Do they have as good relationships with, say, NVIDIA, or the data centers, as OpenAI does, to run these things at a cost that is bearable? I'm not entirely sure.

[0:32:50] SF: The thing that I struggle a little bit with is, how does Manus in particular fit into whatever Meta's overall AI strategy is? I struggle to see the alignment, and even where Meta is going with AI is just not very clear to me. Maybe there's some grand unified plan, but it's lost on me.

[0:33:09] GV: Yes. There were a few jokes flying around about this at the time. One of the jokes was about Alexandr Wang, who's their Chief AI Officer. It was a message from Mark Zuckerberg to Alex, like, "Hey, can you get us Manus? Can you buy Manus for us?" - as in, a subscription. Then Wang replies, "Done." "How much?" "2 billion." It's like, oh, he just bought the company instead of buying a subscription. Yeah, I think that's a lot of people's thought here: where does this fit in, and why? The only real logical explanation at the moment is just the lack of consumer-facing AI that Meta seems to have produced. Okay, they are behind the Llama models, but those require developers to really be the ones taking them to a consumer in whichever way they want to. They did try to integrate Meta AI into WhatsApp, and for a few weeks, people kept - you could say, like, "@AI". Someone would ask a question in a group chat, for example, and some clever person would say, "@AI, answer this question." But it gets very boring very quickly. The whole point of a group chat is to have conversation and see what funny answer someone gives. It's not to have some know-it-all AI in the group chat correcting everybody. That didn't work, at least in my opinion.

[0:34:31] SF: Yeah.
I mean, I guess the big question is, if Meta is to stay in their wheelhouse of social networks connecting people - and you can debate the merits of the value of that connection, but it has always historically been about people connecting in some way - what is the role of AI in that world? It's historically been about people being able to connect over, hey, you and I, we love Pokemon, let's talk about that, or whatever. Where does AI fit into that, beyond being some form of recommendation system, or a way to surface other types of connections?

[0:35:08] GV: Yeah. One maybe more macro thing here: I was listening to, I think it was the Tim Ferriss podcast, and he had Bill Gurley on, who's a very well-known investor. He got into things like Uber early. He's been visiting China a few times, which I think is very smart, because you really have to go there and understand what's going on. Yeah, he did highlight the fact that, actually, the approach of the Chinese government these days is to say to startups, do not get too big. We need you all to compete with each other. There should be no tall poppies, basically. I think that is what happened here - Manus saw that and was like, "Well, there's no way we can break out if we stay in China. So, we're going to move to Singapore." I've seen this with quite a few startups, at least in the last couple of years. I'm curious to see if some kind of regulatory thing comes up in China that stops this from being possible - if you form the company in China, then that's it, you can't move it somewhere else. I don't know. We definitely saw a lot of these moves around 2024. I'm curious to see where this nets out now, because this is a bit of a high-profile loss, I would say, for the Chinese startup ecosystem.

As we move on, our main topic today may be a little bit shorter than usual - we had a lot to cover off in the news. It's really just looking ahead to the job market this year. I think last year was a lot of wait and see, because people were quite unsure and anxious, perhaps, about this phrase, AI taking your job, and that kind of thing. We did see quite a few layoffs through 2025. TechCrunch put together a very comprehensive list of who had been laying off and why. I guess the standout for me, though, on the layoffs: you can put together a report of layoffs, but actually, I don't feel the numbers were that crazy. Each month, we were talking maybe tens of thousands of layoffs, but that doesn't feel crazy. Actually, a lot of those laying off were either companies that just weren't quite hitting product-market fit, or even Hewlett-Packard, who said, "We're going to cut 4,000 to 6,000 jobs by 2028." I mean, I think something like a Hewlett-Packard surely has room for thinning down, especially with AI being able to do things that probably somebody had to sit and do manually, or maybe even enjoyed doing manually.

Maybe as we look more at the developer standpoint - I mean, Sean, you did the episode with Stack Overflow that came out towards the end of last year, on the Stack Overflow survey. That's a little bit interesting, because they get the stats on who's moving where, what kind of technology is being adopted, and how the job market looks for someone at that snapshot in time. Yeah, was there anything else that came out there that might help us see where 2026 might land?
[0:38:06] SF: One thing I wanted to say before we jump into the Stack Overflow survey is that I think if you look at the overall metrics for tech employment, they're actually growing. Perhaps the skills or the nature of the jobs are changing a little bit. The roles that tend to contribute to strategic growth, whether that's AI and ML skills, or data skills, or something like that, those are definitely in demand. There are some roles, I think, that are deemed non-strategic that are perhaps being cut. I think the other thing is that not all the layoffs are necessarily tied to displacement by AI. Some of the layoffs are due to realigning resources around certain efficiency initiatives. Even if you look at Google from a couple of years ago, they had the first fairly large-scale layoffs, but, at least from my friends who have been there, a lot of those people have actually been hired back into new roles. A lot of it was a reshuffling of the deck as they needed to react to the market change. There's definitely some of that going on. I think the headlines are sometimes a little bit sensational when it comes to this.

Regarding the Stack Overflow survey, I think one of the things that was interesting from there - and I know it's potentially a macro issue around how we think about the impact of AI on developers, and what it might mean to be a developer in the future - is that if you look at the history of using some of these AI tools for programming, we started out with Copilot, which was just like super-powered code completion. The general industry reaction - and also, if you go back and look at some of the Stack Overflow surveys from a couple of years ago - was that they were good for junior devs, but not necessarily super helpful for senior engineers. Senior engineers were like, "Well, I don't really need this. I know the language inside and out, or whatever, and I can just program this faster myself." Then I think in the last year, there's been a change in that sentiment, and it's also reflected in the survey: with some of these coding agents, there seems to be a shift in who gets the most value out of them. To use them effectively, you need to know how things fit together architecturally. We're seeing that really strong engineers are becoming even stronger engineers by leveraging these essentially really helpful agent interns that they can now depend on.

I think there's some fear that if you're junior, you don't get as much value out of these tools. If you're always insulated from the details because of the AI, how do you ever get to a place where you're senior? It's great that we're leveling up the senior engineers, but if we never build the future seniors from the next generation of coders, what happens down the road? I think part of the answer might be that we need to think about how this shifts the educational focus for engineering. There are things that we've historically valued in engineers in the workplace, or even from an education standpoint, that maybe now are less valuable. Really deep knowledge of a particular language is maybe not as valuable now as a deeper knowledge of how things should be architected, for example, and maybe there needs to be more emphasis on that in computer science education. You need to get used to working at a higher level of abstraction immediately.
That could be something that needs to change. But even if the answer is education, there's still going to be potentially this risk of a generation of coders who are young in their careers and maybe not able to take advantage. I guess the counterargument I've heard to that is, well, the people who are younger are also more open to using the tools and are able to level up their skills on the tools faster, so they could pick up the work and start contributing immediately. I don't know the answer, but these are just the things I'm seeing in the survey and in some of the conversations going on right now in the market.

[0:41:53] GV: Yeah. I mean, I think my take on this is that almost nothing should change, really, at that fundamental education level. Spending four years on CS is probably still worth it - and, full disclaimer, I did not do a CS degree. I was self-taught. I probably could have benefited from a CS degree, but it just didn't happen, and I ended up having to learn some of the fundamentals the slightly harder way. But those fundamentals have always stood me really well, I think, even though now, especially today, I don't code daily as a job. Nor, I think, do either of us exactly - we both probably dabble and code to prove things out, or just for pure enjoyment as well. There are many, many more people at Supabase far more talented than I am at coding, and I'm more useful to the company as more of that high-level person, able to look at how something is architected, or at what the risks are - basically de-risking projects. I have to understand coding as well and be able to converse with engineers about it.

Equally, I actually speak to a lot of the engineers, and it really is no more than 50-50 when it comes to who's actually using any AI tools. I would say it's those working on the really low-level stuff who are not using them, and I think that's fair. In terms of using them more for a sanity check, or especially for security checks, I've seen some really interesting progress from us using AI-based security tooling that can pick out some pretty nuanced security flaws. But equally, having something generate a bunch of low-level code that is probably going to sit in the code base for years and years - the likelihood that that's best generated first by AI is low. The place where AI still does very well is SQL. Anyone who's basically putting together complex database statements - that seems to be where it does very well.

Yeah. I mean, back to more of the job side of things. I think, again, a CS degree is not as useless as I think people are making out. I do think open source is a big driver, or should be a driver, because that's actually where you can get real hands-on experience without having already joined a company. Many big libraries and open-source projects are basically rejecting pull requests if they have any inkling that they're majority AI-generated. For good reason, I think - they're just getting swamped with what look like AI pull requests. This is a way to filter for actual engineers applying their brains to a problem, as opposed to just thinking that they can out-prompt somebody else. I think the super high level here is that it is not as doomsday as maybe the news makes out, and there is a lot of opportunity for engineers.

[0:44:48] SF: Yeah. I mean, I think it's still very early days.
I think we're still figuring out a lot of stuff, and things are moving incredibly fast as well. Things might seem a certain way right now, but it could be that in six months, things change dramatically. Ultimately, it comes down to, I think, people who have solid fundamentals and a breadth of knowledge tending to have a place in tech, in the work world. There's always value in people who can wear many hats, flex across a bunch of different roles, and understand a bunch of the core fundamentals of how technology works.

[0:45:20] GV: Yeah. We're going to leave it there today. We may touch on this a little bit more later in the year as well, just as we see how hiring is going. I think this is just an ongoing topic: how engineers are faring with AI. I mean, we're still only really at the start of year three of coding with AI proper. We've only seen two years of this so far, and this is still going to be a very pivotal year, I think, for how this plays out.

As always, we like to wrap up with a fun thing from looking at Hacker News. I think this is an interesting one this week, where, Sean, you and I both landed on the same suggestion. Do you want to introduce it?

[0:45:59] SF: Yeah, sure. It's Doom has been ported to an earbud - doomsbuds.com. When I saw that, it was like, this is the ridiculous stuff that we always like to talk about and highlight on these episodes. Just people doing fun stuff in engineering, whether it makes sense or not. These are the super over-engineered pet projects that people have. It's just fun to talk about, and it's fun to see how people apply their creativity and innovation to different projects. This one fits that bill for sure - being able to run, essentially, Doom through an earbud. I was surprised also that you grabbed it, too, because it's the kind of thing that you and I both seem to gravitate towards.

[0:46:39] GV: Absolutely. Yeah. I had been swayed a little bit by another one, a couple of weeks ago, which was another one of these disposable vape things. But then, we've already covered that, and this was, I think, that idea going even deeper. This definitely was like, wow. They say it only works on what's called the PineBuds Pro, which is a pair of earbuds that have open-source firmware, basically. Again, I just looked at the specs of these earbuds, and I was like, okay, well - I mean, these days they've got dual-core 300-megahertz ARM processors, 4 megabytes of flash memory, three microphones. Yeah. I mean, earbuds are pretty advanced these days. Doom is this recurring theme, right? Of where can you run Doom? We actually had an episode out, I think at the end of last year, which was running Doom in TypeScript types. Not even just in TypeScript, but TypeScript types. If that piques your interest, go check that episode out. That was with one of our other presenters. Yeah, I love this Doom on an earbud. I'm not sure if it's actually still up at the moment, but you were able to jump into Twitch or something, get in a queue, and then control Doom being hosted on this earbud. Yeah, it's doombuds.com if you're interested. We thank Aaron Dash S, the user on Hacker News, for posting that.

Yeah, unfortunately, that's all we've got time for today. Do join us again next month. We'll be hitting all the news highlights and probably a bit more of Hacker News than we did today. Yeah, hope to see you there.

[0:48:15] SF: Thanks, everyone. Cheers.

[END]