EPISODE 1861 [0:00:12] GV: Hello, and welcome to SED news. I'm Gregor Vand. [0:00:15] SF: And I'm Sean Falconer. [0:00:17] GV: And welcome to the August edition. This is, for just a quick reminder, a slightly different format of SEDaily podcast where we take a spin through the last month's worth of main headlines. We talk about a kind of bigger, chunkier topic in the middle. We then look at Hacker News, just some fun things that have been going on there. And then we make a few predictions as to what we might be seeing in tech across the following month. Talking of predictions, I believe, last time, Sean, we made one prediction, which was that July was going to be a quiet month. And I think we've seen that not to be the case. [0:00:56] SF: Yeah. I mean, that's, in retrospect, quite a bold prediction to make. [0:00:59] GV: Yes, it was. [0:01:00] SF: That anything right now in tech is going to be like a quiet month. Maybe I haven't run the math on it. Maybe relative to, I don't know, June, it was quiet. But I think relative to probably any other time in technology, there was a ton of stuff going on. [0:01:13] GV: Yes. Yeah. Absolutely. [0:01:15] SF: Which is good for us. Because if nothing happened, we wouldn't have anything to talk about. [0:01:19] GV: Right, exactly. And the thing is, I think this is where it is summer time in the Northern Hemisphere. A lot of people are kind of taking their weeks away. But yet, there's still so much to catch up on, it looks like, when people get back. Speaking of, how was your July, Sean? [0:01:33] SF: It was pretty good. I mean, I went - I think I mentioned that I was going on a personal trip, family, to Hawaii. Then I spent the July 4th weekend in the United States in a hospital because I was hospitalized for pneumonia. Which put a little bit of a damper on things, but I'm fully recovered now and back at full strength. [0:01:48] GV: Good. Yeah. Back for SED news. That's what we'd like to hear. Yeah, fantastic. 
[0:01:53] SF: How about you? [0:01:53] GV: Yeah. I mean, July over here in Singapore. Just speaking of seasons, we don't really have seasons. It does make this sort of slightly odd thing where people are talking about summer holidays. But nothing really changes here. It's just sort of another month of being hot, etc. But no, I've been just heads down on a bunch of startup stuff. And more importantly, I decided to start rewatching Silicon Valley, the TV show. And it's also my wife's first time watching it. It's kind of her window into this world. I think what strikes me is that the first season of this was made back in 2014. And okay, the tech has all changed, but the storylines are incredibly pertinent, I would say, even about 10 years later. Nothing seems to change on that front. [0:02:38] SF: Yeah. Well, I mean, I wonder, if you were redoing Silicon Valley today, it probably would not be a compression algorithm that - [0:02:45] GV: Exactly. [0:02:46] SF: Yeah, it's going to be some sort of AI model or something like that that is the basis of the company. [0:02:50] GV: Yeah. But most of the actual sort of storylines, in theory, I can see far too many parallels as well, which is kind of interesting and fun. Yeah, let's go into the headlines. Probably the big one was around Meta Superintelligence Labs. This kind of is, I guess, moving a little bit on from one of the things we touched on last time on the headlines around the Scale AI acquisition by Meta. Sorry, it wasn't an acquisition. It was a large investment that looked like an acquisition effectively. And we weren't entirely sure what that was about. However, it's become kind of clear that this is a big mission from Mark Zuckerberg to create Meta Superintelligence Labs. And Alexandr Wang, who was the Scale AI CEO, is now the Chief AI Officer and leads that initiative. 
And meanwhile, they've also recruited Nat Friedman, who is the former GitHub CEO, and he apparently is partnering with Wang to lead AI products and applied research. And then there's Shengjia Zhao, I hope I pronounced that correctly, who is a former OpenAI researcher, and he's just been named the Chief Scientist of MSL. Yeah, there's quite a lot to unpack there, Sean. [0:04:07] SF: Yeah, I mean, I think this kind of goes back to some of the things that we've touched on in a couple of episodes of this, I don't know, arms race around talent for AI right now. And clearly, Meta is making some big moves here. It's interesting with Scale AI, they're bringing in the CEO, they made the investment. And I think one of the kind of funny things, when you look back at some of the things that, of course, Mark Zuckerberg said over the years, he's talked a lot about how Meta really invests in sort of the raw talented people. And it's not about people's experience. And he goes back to his own experience, essentially. He founded the company when he was 19. And it's like, "Well, if I could do that with zero experience, it's because I was just talented." Clearly, talent is - we don't want to over-index essentially on experience. But now they're paying massive amounts of money to acquire all of this AI talent where it really is - I mean, obviously, these people are talented, but they also have core experience that Meta wants to take advantage of. There's only a handful of people in the world that have been involved as early researchers at OpenAI and have done some of the things that this pool of talent they've hired to come in and run this lab has done. 
[0:05:18] GV: Yeah, it feels like - if we sort of, again, slightly unpack the Scale AI investment or pseudo-acquisition, if you want to call it that, it did look like - I would say especially after we recorded last month, it did very much look like they were doing this, A, to sort of cut off the data supply to other - their competitors, OpenAI and Google, for example. As well as, clearly, Mark Zuckerberg has decided that talent is the other war to be fought here, and sort of that's their way of signaling to everyone else how great they are at all this. And people, I feel, talk less about Llama sort of in the consumer sense - my mom wouldn't be using Llama. She uses Claude. If we look at sort of - because we're going to get to this in the main segment around sort of AI search, where does Meta go from here? Clearly, this sort of looks like a way to signal to the world. Yeah, I guess what I'm getting at is it feels not like a vanity project per se, but it's a little bit hard to see the substance right now. But of course, I'm sure Mark Zuckerberg has it all figured out. [0:06:21] SF: Yeah. Well, I think a lot of it comes down to - if you look at how Meta monetizes, it's all based on capturing eyeballs, right? You need eyeballs so that you can monetize on ads. And that's been their - how they built up this war chest of this $100 billion revenue-generating business. And Google is much the same, just through search. And if those eyeballs go away because they're leveraging AI tools or other types of tools, then that devalues your company. You have to make these sort of protective moves. Clearly, it seems like the future of all these businesses is going to have some core AI component, and it becomes a race to figure out what's our strategy there. Who's going to own this? Some people are going to succeed. Some people are going to be losers in this as well. And of course, these businesses want to be on the winning side. [0:07:12] GV: Yeah. 
Moving on, the next sort of main headline, these are things that sort of touched the mainstream media in software and tech as much as they might touch Hacker News, for example, but the Windsurf acquisition, I'll call it debacle. Interestingly, my prediction last month was that something to do with the Scale AI pseudo-acquisition would go wrong. Well, it doesn't look like that went wrong, but Windsurf definitely sort of went wrong. Where, back in May, it was announced that OpenAI was going to acquire them for three billion. And then something happened. And suddenly, Google announced actually that they are hiring Windsurf's CEO and co-founder, Varun Mohan, along with co-founder Douglas Chen and some of the top researchers. And then, very shortly after that, Cognition, who builds the product Devin, which I'm sure a lot of people have heard of, they then said, "Oh, we're excited to share that Cognition has signed a definitive agreement to acquire Windsurf, the agentic IDE." So, i.e., they're sort of vacuuming up all the tech and the rest of the team. Yes, it says the acquisition includes Windsurf's IP, product, trademark, and brand, and strong business. This is just bizarre, or I don't know. What did you make of this one? [0:08:23] SF: Yeah, I mean, it's kind of crazy. I don't know why things fell through with OpenAI. I mean, these deals can fall through for a myriad of reasons. But then Google came along, and instead of making an acquisition, they basically just paid for the talent and the leadership team. I'm sure they threw them a lot of money. Maybe they ran a calculation. It's like, "Okay, well, we're interested in these six people. How much does that kind of cost?" Versus buying the business or something like that. And then that leaves, unfortunately, Windsurf in a pretty tough situation. [0:08:54] GV: Yeah, absolutely. [0:08:55] SF: And I don't know - the terms of the deal with Cognition, is that public? [0:09:00] GV: I don't think so. 
[0:09:02] SF: Who knows what the structure is? [0:09:03] GV: I saw that they put out a very nice video. Of course, it's almost a topic for another day, the level of production on videos that have come out in the last few months. But yeah, I don't think the exact terms were disclosed. [0:09:14] SF: Yeah. Who knows what that looks like? But what I read was that Cognition has kind of said that Windsurf has done a really good job on like go-to-market. Cognition and Devin have really focused on the engineering side. Now on the Windsurf side, a lot of their sort of engineering leadership has left. It makes a lot of sense to kind of bring the companies together. That makes sense, of course, from a public perspective. But I'm sure a lot of it had to do with the acquisition and acquiring some of the brand, some of the things that are stated there. But who knows what the value of that is? But, I mean, that is a crazy situation, though, to go through. And it's probably unfortunate for the employees that were left behind at Windsurf. I think that would be tough. I think, as somebody who has been an entrepreneur at one point, it's a weird sort of situation, abandoning your company. I don't know the ins and outs of everything there. I don't want to fault them too much. But it is a bit of a strange - on the surface level, it feels like a selfish move to kind of abandon your company, because someone's throwing some money at you, and then kind of leave behind all the people that you convinced to come and actually work on this vision that you've created. [0:10:22] GV: Yeah, exactly. And then this is it. I think I'm sure in time there'll be either books or articles written about sort of what actually went on here. But yeah, I share that view as well. Whatever has happened, there's no way that this has been sort of a nice experience for a lot of the team, probably at Windsurf. It's just probably sounded like an incredibly distracting three months of, "Oh, we're being acquired by OpenAI. 
Oh, now we're not. Oh, now our CEO is leaving. Oh, and by the way, you're now going to join this other company." Doesn't sound fun either way. Yeah. Okay, moving on from the Windsurf acquisition, we have seen reports that Lyft are indeed trying to get into the autonomous vehicle space. This is interesting and a sort of a headline on the basis that they have fairly recently said that they weren't going to get into this space. Or at least the CEO was on another podcast fairly recently, and was pressed on this point of sort of how they were competing with - that podcast never said Uber, they always just said "the other guys." Sort of, if you're trying to compete with the other guys, how are you doing it? And how do you plan to do it? And his answer was much more around, "Oh, brand. And we're the best service. We've got the best on-time rates and the best pickup percentages. And we've got this amazing partnership with DoorDash, and that's going really well." And sort of when pressed on autonomous, he was very noncommittal and he didn't say that they were going to launch their own sort of Waymo or anything like that. But now it's been reported that they're going to be at least trialing, I believe, autonomous shuttles. Sort of like small kind of bus-like vehicles, but in partnership with someone else. Yeah, this is kind of interesting. [0:12:07] SF: Yeah. I believe both Lyft and Uber at one point had their own autonomous vehicle labs. And actually, at Lyft, pre-IPO, a friend of mine who's the CTO of a company I used to work for was one of the engineering leaders in that lab. He came over from Microsoft to basically lead a lot of those efforts and work on autonomous vehicles at Lyft. But I think that some of it was some posturing going on through the process of going IPO to kind of probably help drive up stock value and things like that. And then they eventually abandoned the effort. 
I do think that the strategy they're taking now, in terms of let's not sink all this R&D effort into trying to build our own thing to compete with all the other self-driving car companies that are out there, not to mention like Google with Waymo and stuff like that, and doing it through partnerships instead, makes a lot of sense from a strategy standpoint. Because it's just like, is that your core competency? The amount of money you'd have to throw at the problem while you're still just trying to create a sort of taxi service that you're competing with Uber on to this day, probably better to focus your resources on that. [0:13:12] GV: Yeah, absolutely. And I think certainly there's a differentiator here, where this is about shuttles and not about individual sort of rides, it looks like. Because I think Lyft can probably see that even partnering with someone would look probably quite difficult at this stage. Waymo have already kind of - they do have great distribution right now. They've got their own app, but it does look like maybe they're going to have to partner with maybe Uber, for example, to kind of get their distribution better, for example. But certainly, yeah, Lyft is taking a slightly different approach, which is, "Okay, we've got the distribution, but we're not going to then go head-to-head on the exact mode of transport," and shuttles is what they're trying. Yeah, interesting. There was a report, I believe, in Time about ChatGPT's impact on our brains. What was this about, Sean? [0:13:59] SF: Yeah, this is pretty interesting. My wife actually sent me this, which was this study done at MIT. The research hasn't been peer reviewed yet, but it's been submitted. And I think it's the first time that particular research group has released something sort of pre peer review, but they thought it was an important study to get out there. At least that's what they shared. But they took 54 different subjects, they divided them into three different groups. 
It's like 18 per group. And then each group had to write SAT essays. One group could leverage ChatGPT, another group could use Google for search, and then another group couldn't use anything other than what's in their brain. And they used an EEG to measure and record brain activity while people were writing these essays. Unsurprisingly, with ChatGPT, leveraging that, you have the lowest brain activity to do this. And essentially the ChatGPT cohort got lazier and lazier over time, where eventually they weren't even trying to massage the output of the essay. They were just copying and pasting it in wholesale. The thing that they really highlight in the article is the potential risk especially for young children and a developing brain. Because it's one thing if you've - I guess, maybe the closest analogy is like math. If I've already learned sort of how to do math - addition, subtraction, multiplication, division, these types of things. And I know those sorts of things from first principles; I learned them in school. And then eventually I start using a calculator. Well, I can use that to kind of supplement, but I still have that first-principle knowledge. But if I start with a calculator and I never learned those things, then I'm never going to learn how to do that basic functionality. And with an over-reliance on potentially using something like ChatGPT or one of these other interfaces, maybe I never learn sort of the basic writing skills. And I also think it makes a lot of sense even for someone who's an adult. If you become over-reliant on these things, it's a little bit like if, instead of being physically active, you lay around all day, eventually your muscles atrophy - I think it's the same thing with your brain. 
If you're turning off your brain, you're never pushing yourself to think of anything, you're just always basically prompting something to generate output for you, then it stands to reason that your brain kind of gets lazier over time too and that has some sort of negative impact on you. [0:16:21] GV: Yeah, that's very interesting. I mean, I can totally identify with that in the sense of if I start a day and the day starts with LinkedIn, my brain starts to, I think, atrophy just even in those few minutes, whereas if I start with a book or something else. It is very important in terms of what your brain actually gets fed in the first place. And then exactly in what's being reported here, we discussed this on one of the other episodes purely around the coding side, you know, what happens if you didn't learn the first principles, the fundamentals, before you introduce the tools that can then output what you ask it to do? But this feels far more important in the sense that writing is still a skill that most humans should learn, and be able to express themselves through writing and obviously communicate with people through writing. And this does feel quite - not to sound alarmist, but a little bit scary, actually, because we know that adults are not great at the moment at sort of policing the usage of LLMs, and at what age? It's almost like parents, I think, sort of say, "Oh, have you given your child a phone yet?" And then maybe we'll start hearing, "Oh, do you allow your child to use an LLM yet?" for example. [0:17:31] SF: Yeah, it's a tough balance because there are skills that are useful, probably in the long term, for them to develop with interacting with some of these tools. And I think, also, there are learning opportunities with these as well because you can explore any subject in a very sort of psychologically safe zone where I can ask it anything. I don't have a fear of it like judging me. And I can learn that way. 
But if I'm really just putting in my homework and that thing can spit out the output, and then I'm copying and pasting it, I'm not really learning in that way. Basically, it's such a shortcut that it's not forcing my brain to kind of think through that. The study showed that people generally had a lot harder time recalling what they wrote when they used those systems as well, because you're just not putting the same sort of level of thought into it. It's probably not crystallizing the same memory pathways and so forth. And of course, doing this without any tooling had the best recall because you're forcing yourself to do it. [0:18:26] GV: And just moving on to one other headline before we go to our main topic, this was reported in the Financial Times, but it also got very high up on Hacker News as well with a lot of commentary from the community. This is that VPN use has surged in the UK as new online safety rules have kicked in. And this is really around that the UK government has implemented some new sort of safety framework that then would require a lot of websites, obviously, for example, adult websites, but also many other types of websites, to make some quite sort of heavy age checks on the users. And I was trying to put the pieces together here of, like, where does a VPN come into all this? And I think, well, it must be that the framework that's been implemented is trying to do some matching on, say, IP plus what does that person say their age is, and so on. Basically, privacy is just kind of going out the window. It's very interesting that a lot of teenagers - this applies to social media as well, things like TikTok, etc. And I think a lot of teenagers have understood very quickly that a VPN will help them at least circumvent a lot of what this is trying to do. 
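The circumvention being described here is mechanically simple: if a site keys its age checks off the country that the connecting IP address resolves to, then routing through a VPN exit node outside the UK means the check never fires. A toy sketch of such a gate, assuming a GeoIP-style lookup (the addresses, table, and function name below are invented for illustration, not from any real implementation):

```python
# Toy sketch of an IP-based age gate of the kind being discussed.
# The lookup table is a hardcoded stand-in; a real site would query a
# GeoIP database to map the client IP to a country code.

GEOIP = {
    "81.2.69.160": "GB",   # example UK residential address
    "185.220.0.5": "NL",   # example VPN exit node in the Netherlands
}

def requires_age_check(client_ip: str) -> bool:
    """Apply UK-style age verification only to traffic that looks like it
    originates in the UK."""
    country = GEOIP.get(client_ip, "UNKNOWN")
    return country == "GB"

# Same user, same site: routing through a non-UK VPN changes the answer.
print(requires_age_check("81.2.69.160"))   # direct UK connection -> True
print(requires_age_check("185.220.0.5"))   # via VPN exit node -> False
```

Which is why, as the reporting notes, the whole scheme takes about five minutes to get around.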
And the article, as well as a lot of commentary, is just talking about how this is what happens when policy is passed with a sheer lack of understanding of technology, because to circumvent it took all of five minutes for someone. It's an unfortunate position where the UK government thinks it's trying to do something to protect people. But actually, the reality is it's probably doing more harm than good as implemented. [0:19:59] SF: Yeah, I mean, the policy's probably something like websites need to - if you're in a certain class of website, you have to check to see the region of the world that the person is connecting from, and then do the age verification. But that's an easy enough thing to circumvent. I think, I worry, even going back to the last thing we were talking about on the sort of ChatGPT. I could imagine there could be policies that could potentially come into school systems. It's like, "Hey, we want to teach our kids how to use these AI systems," and then they get incorporated into school without necessarily fully understanding what the impact could be. That could lead to these kind of downstream sort of cascading effects where you're actually negatively impacting people. And I think in general, we see this pattern sometimes when it comes to, like, the policymakers: just a lack of sort of understanding the technical details. And then maybe there's good intentions at heart. But without really understanding the technical details, the policies are impossible to enforce, or they actually have a negative consequence. [0:20:59] GV: Yeah. I mean, the one that sticks out to me that's still in force is all the cookie policy stuff that came in, and especially in Europe, GDPR, which relates to this. And the implementation of that has just been horrible in the sense that we've all - probably daily, we all go to a site and the first thing we see is a cookie banner. Which ones do you want to pick? And then of course, the default is still just to press yes to all. 
But there's a button that says, "Confirm my choices," which is kind of a way of saying no. It's just such a mess and has just made the internet even less fun. Again, well-intentioned. But this is, again, what happens when policy - policy doesn't consider UX. Policy just considers policy. And then there's like, now, rest of internet, go implement what we've decided. Which is - I remember back in the day, when GDPR came in, and we were working for a very large multinational, and that was just a horrible, horrible project for everybody involved, having to implement GDPR-compliant everything. Okay. Let's move on to the main topic for today, which is - we're kind of just terming this like AI search. And that is quite a broad term, and you'll maybe see why in a second. We're kind of looking at, okay, we've got the main players like OpenAI, or ChatGPT, Claude, Perplexity. But Google has Gemini, and Google is obviously still the world's most popular search engine and kind of the window into the rest of the internet for still a majority of people. But where are these all crossing over? Because it wasn't that long ago that you could use, say, Claude, or you would be using Claude, and you would kind of ask a question which would, in theory, need to go and pull some information from somewhere else. And Claude would simply say, "I can't access websites. I don't know about that." And maybe some people have even forgotten that that was a thing. Because now you can go to Claude and ask something more kind of real time, "How is Google approaching this thing?" And it will then kind of say, "Oh, I'm going to go do a bunch of searching." And then, "Oh, I found these 10 links, and let me just quickly parse those links and analyze those." And now, here's the real-time answer. That's a sort of quick history on where we're coming from and where we're going to. But the implementation across the different services is quite different. 
We're going to kind of look at that first. And then we're kind of going to look at where are things landing. Because there's also been a report fairly recently in the Wall Street Journal, and it's just been reported generally in earnings, that Google's earnings from search are still very healthy. And people kept predicting that search was going to get killed very fast. We're not seeing that yet. We're going to sort of look at that as well. Yeah. Maybe, Sean, shall we start with just sort of looking at the different platforms and kind of how they have approached this, bringing real-time data, if we want to call it that, into their platforms? I don't know. Let's start with GPT and Claude, because Perplexity is an interesting one. It's a little bit different in terms of where they started and where they are now. Let's start with OpenAI and ChatGPT and Anthropic's Claude. [0:24:09] SF: Yeah, I think with ChatGPT, they were, I believe, either the first or one of the first in the kind of consumer-facing applications to bring in these external search services where I ask for something that's - real time would be generous, but something that's happened fairly recently, because it has to show up on the web at some point in order for the - but basically, they're doing a web search behind the scenes and then pulling in that additional contextual data so that they can properly contextualize the prompt behind the scenes and generate a response that makes sense. It's like, "Tell me about the weather in San Francisco." And it can go and do that web search because it recognizes that it probably doesn't have access to that. And it's tremendously valuable because you used to get that sort of like, "I've only been trained on data up to such and such a day, and I can't comment on these things." Or you'd have to go and copy and paste that from somewhere and bring it in. 
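The flow Sean describes - a web search behind the scenes whose results get folded into the prompt as extra context before the model answers - can be sketched roughly like this. The function names and the fake search backend are invented placeholders for illustration, not any vendor's actual API:

```python
# Minimal sketch of search-augmented generation. `web_search` and
# `llm_complete` are hypothetical stand-ins, not a real vendor API.

def web_search(query: str, k: int = 3) -> list[dict]:
    """Pretend search backend: returns title/snippet pairs for a query.
    A real system would hit a search index here."""
    return [{"title": f"Result {i} for {query!r}",
             "snippet": f"Snippet {i} about {query}."} for i in range(1, k + 1)]

def llm_complete(prompt: str) -> str:
    """Stand-in for the model call; a real system would invoke an LLM."""
    return f"Answer grounded in {prompt.count('[source')} sources."

def answer_with_search(question: str) -> str:
    # 1. The system decides the model probably lacks this knowledge
    #    (e.g. anything time-sensitive) and fires a search.
    results = web_search(question)
    # 2. Retrieved snippets are folded into the prompt as numbered,
    #    citable context.
    context = "\n".join(
        f"[source {i}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(results, 1))
    prompt = (f"Using only the sources below, answer the question "
              f"and cite sources.\n{context}\n\nQuestion: {question}")
    # 3. The final response is generated against that fresh context.
    return llm_complete(prompt)

print(answer_with_search("weather in San Francisco today"))
```

The same structure also explains the citations these products show: the numbered sources injected into the prompt are exactly what the model can point back to in its answer.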
And I think one of the things that we're seeing, with OpenAI and, I think, with Gemini and Gemini's direct interface, or even with Claude and all the different models, is them trying to bring more and more functionality directly into the application. It's just like, over the years, Google has done, even in conventional search, more and more to sort of keep you on the search page rather than delivering you externally. And I think as things - people get so used to having conversations and sort of just being able to ask for stuff, I think about what that looks like sort of long term of, if you have ChatGPT and suddenly websites, e-commerce sites, can all just be tool invocations of some sort, then do I ever need to go to an e-commerce site? Can I just go to kind of this chat interface and just say like, "Hey, I'm interested in buying some shoes. And this is what it looks like. And here's the amount of money I have available to spend," and blah-blah-blah. And then it can go and actually, like behind the scenes, converse with that site, pull in the details, render it. And I never actually have to leave the chat interface. Almost like the way I think about it is like how WeChat has become this like super app in China, where it's almost an operating system itself where it has apps within the app and so forth. And you can do everything in terms of your personal financing and everything all within WeChat. [0:26:24] GV: That's a great example because it became clear that super apps, especially, yes, in China, were leading, I'm going to say, their internet because it is a very, very different way of operating the internet inside China. Meta, at least, were trying to do the whole super app thing. Trying to blend Facebook, and Instagram, and WhatsApp. And I think the vision there was to virtually blend them. And of course, people, basically users, pushed back hard on seeing bits of their data cross-platform. And so at least for now, things are still relatively segmented. 
But yeah, I think that's a great way to look at it. This morning - I don't use GPT as much, but I thought I would just go in and just kind of see what it does like today. One interesting thing was going in. And when you start typing in the prompt box, you can choose, "Is this like research, or an image, or whatever?" But one of them is actually just web search. I press web search. And then, more interestingly, kind of a Googley type thing happens where it sort of shows you the last four trending topics. And I was like, "Oh, okay, this is very searchy." And one of them was Cat Deely. And I was like, "Oh, Cat Deely is a TV presenter. Sure, I'll press Cat Deely. Why is she trending?" And then I get this big like summary around her career. And she's trending because she's unfortunately broken up with her husband. And I was like, "Okay, well, to be fair, this is a much, I think, better experience than had I gone and searched Cat Deely on Google." However, the reason why I'm not normally doing that there is because of the browser. And the browser - I use Chrome. And the first window into Chrome is the URL bar. And that's also search. Yeah, that was kind of interesting. On the Claude side, I think, yeah, I just see it where I ask questions. And it's never clear when Claude decides it needs to go off and retrieve information. Because sometimes it will just kind of - if the model has the data, it will just come back with, "Shakespeare's last work was this," and whatever. But if it's something that clearly needs additional context, it sort of tells you very fast, "I'm going off and doing this." And then there's the whole research modes, which both OpenAI and Anthropic have implemented in their products, where it kind of goes off for much longer periods of time, pulling lots and lots of data from lots of sources. 
And it tells you where these sources are, which is something that, to my understanding, a simple search right now in Google or any other pure search engine just doesn't have the ability to do. [0:28:57] SF: Yeah. And I think the other move that some of these companies are doing as well is, at least there's rumors that OpenAI is going to launch a web browser. Perplexity, I believe, already has a web browser. It's kind of like - and it kind of comes back to like what I talked about earlier with Meta is like, how do you own people's eyeballs? If you own a browser that people like to use, it's much easier to make your website the default experience, just like Google's done with Chrome. Or Apple was able to do in some capacity with Safari and so forth. But how do you keep people basically engaged with your software? And I kind of wonder, if it does become something where most of our interactions are through these kind of chat interfaces, even going back to the e-commerce one, in the future, what will it even mean to have a website? Do websites and sort of the rendering of HTML, CSS, and these kind of interactive experiences that we built up, do those go away? And do these things all become some sort of more of a tool-based interface that's kind of hidden from the users, and the way into those are through these kind of chat experiences? [0:30:01] GV: Yeah, exactly. Perplexity is an interesting case where, I believe, from kind of their inception, or at least inception of a consumer product that they sort of put out publicly, they were much more, "Hey, we are the AI search engine." It was like, "Come to us and do your searches here, and you get all this added context and citations of where we got the information from." And I thought it was really nice. I'm not quite sure why I didn't use it maybe as much as I thought I would. Again, it's probably just the eyeball argument. They gave away pro subscriptions very early. I mean, I believe I still got one that's completely free. 
What's interesting is that they've come out with Perplexity Comet, which is their browser offering. Just to give a quick description, if you go to the Comet website, it's all very Appley, I would say. Very thin sans-serif fonts, which I know are very much in vogue right now - all these AI products seem to be using kind of the same font - and it says Comet is powerful, it's professional, it's personal. It looks like a very nice experience. But again, what's it going to take to get people to fully move over to a browser that's controlled by one of these players? And then a company that I mentioned briefly last time, Manus - they're ultimately a Chinese company that have done what quite a lot of Chinese companies are doing right now, which is moving most of their team and infra to Singapore because they want to be seen as kind of neutral. But they're the same thing. They kind of started with a browser, then they went away from the browser thing. Now they're back with another browser. The browser is now this eyeball solution, I guess, is how they're looking at it. [0:31:41] SF: It's probably gonna take a while before this can become the one-stop shop for all user interactions, but it does seem to be the direction that a lot of companies are trying to push for. I know Google - everyone's excited about their earnings report, and they've had a big bump in their stock and so forth. But I wonder what Google's play is there, just because they have such a long history of monetizing blue links. And I wonder how they're thinking strategically about how you shift that business model to potentially a new business model that allows you to monetize in a different way. [0:32:19] GV: Yeah, I don't think we've seen it yet, but I think it's fairly understood that at least OpenAI are going to be finding ways to monetize through ChatGPT. I believe they hired the head of ads from Meta, or at least this person.
She had at least led the ad side of the business at Meta for a long, long time. Maybe it wasn't a direct hiring, but that's her background. Why do you hire someone with an ad background? Well, it doesn't take too much to figure that one out. And yeah, Google at the moment, according to the Wall Street Journal and their earnings, are still doing fine when it comes to traffic. And Google itself, just pure Google search, now does a lot of this summary at the top of the page, which is interesting because that stops people clicking through to links. Whereas it's been reported in TechCrunch, according to new data from market intelligence provider Similarweb, that AI platforms in June of this year generated over 1.13 billion referrals to the top thousand websites globally. And that's a figure that's up 357% from about this time last year. The top places they seem to be sending users to are Reddit, Facebook, GitHub, Microsoft, Canva, Instagram, LinkedIn, and Bing. But I was more interested in the second set of numbers, which was referrals by category. By category, we're seeing a lot of referrals to things like Zillow, Home Depot, and Kayak. Those caught my eye, because as soon as these platforms - ChatGPT, Claude, etc. - are making mass referrals to sites like Home Depot or Kayak, that really starts to put a dent in Google's market power here. And what does that mean? It means that ad spend on Google potentially is going to go down. Because Kayak could say, "Hey, we're not getting the same traffic from you guys anymore. We're not going to pay the same kind of money." [0:34:16] SF: Yeah. And those are a lot of high-value search terms that they make a lot of money off of. Very competitive. But I do think you have to give Google some credit. They have a long history of being able to make the right defensive moves. They did it with mobile.
They did it with the browser. They've been able to skirt these challengers in the past. The big question is, can they do it with all these AI competitors and what's happening now? One thing I was thinking about: do you think we will remember this time as kind of a heyday of LLM-based interfaces? At some point, the ChatGPTs of the world are just going to become polluted with ads, and it's going to be the same terrible experience that we have now. Because when Google first started, there were a lot fewer ads in the blue links. And then even if you think about the Netflixes of the world and streaming services, we had a heyday of streaming. And now essentially everybody is trying to protect their core IP, and you end up having to subscribe to like six different streaming services. We're basically back to cable packages, just with a lot more stuff that's on demand. We've passed the heyday of streaming. We're probably past the heyday of search. Are we right now living in the heyday of LLM-based interfaces, and everything's going to get worse from here? [0:35:31] GV: Not to sound fatalist, but yeah, I think you're probably onto something with that one. And I think the technical term in tech is enshittification. There's that essay someone wrote about the enshittification of the internet, which basically means that every product that was once good eventually becomes terrible because it's had to be packed with monetization features, and so on. It's a great essay, if anyone's not read it, just to understand why maybe a lot of us feel that the internet is not what we would like it to be, for many reasons. And yeah, I think you're right, Sean. I think we could be seeing this as sort of the heyday. OpenAI, or ChatGPT at least, was the fastest-growing consumer product in history, or something to that effect.
But that was back before it could access outside links. And as we just mentioned, it looks like they're probably going to start bringing in some kind of indirect monetization, i.e. companies paying them to have their links pushed more than others in the responses. And that just kind of kills the whole thing. Because at the moment, it feels sort of - I don't want to say neutral, but it feels as neutral as a model can be. I appreciate it's trained on data, and there's been a lot of argument around where that data comes from and who tuned the model. Is it just a bunch of white males tuning the model, basically? And what does that do? But at least we think there are no monetary incentives right now for - [0:36:59] SF: Yeah. I mean, you could argue that when they surface an answer, and it's sourced from somewhere, they're doing that with the best intentions of creating the best answer possible. But as soon as you start building a business model around monetized answers, and you can influence answers based on how much money you pay, then suddenly you destroy the good intentions that were there originally. [0:37:20] GV: Yeah, maybe let's wrap that one up there. I think we've done not too bad a job of trying to cover what is quite a meaty topic, in the sense that it's quite disparate in terms of what all the products are trying to do right now and where they might go with this. But I think the TLDR here is: Google is still doing not too bad on its search revenue - but for how long? Meanwhile, all the other LLM products are moving to this kind of, "We want your eyeballs. Basically, stop using Chrome. Stop using the URL bar. Stop searching in there. How do we get you to use our product for virtually everything? How do we become the OS of your internet experience?" Yeah, watch this space. I'm sure we'll come back to this in a few months and there'll be some major developments, no doubt.
Let's move on to the highlight of the show, Hacker News highlights. I might kick off with - I've got a fun one, and then a not-so-serious one. This is probably the more fun one per se. This was posted by user Asyncbanana. I don't know if that is also the author of the article, but the article is about making Postgres slower. And it was very much said at the beginning of the article that this is purely somebody who is not in a job right now, asking, why not just see what happens if I try and make Postgres slower? And I kind of liked this, because this person did put bounds on what he could do to make it slower. It basically all had to be in the postgresql.conf file. So you couldn't just jam it - you couldn't just send a bajillion rows of data at it and be like, "Oh, look, it's slower." No, no. It had to be, how do we configure Postgres to be 42,000 times slower? It's all sorts of funny things, like tuning how often the write-ahead log needs to write, or basically turning off autovacuum - all sorts of bizarre things you would never do in production, obviously. But I think it was also fun because, if you see this as reverse engineering, once you see just how bad something gets based on tuning it in a negative way, you can probably start to work out how to tune it better for your use case in a positive way. For anyone working with Postgres at that level, where you're actually working with the conf file - yeah, that was fun, and interesting, and just well-written as well. [0:39:35] SF: I found that really fascinating. I think we have a tendency, when it comes to the Hacker News section, to cover these kind of weird projects where some engineer goes really off the deep end. [0:39:48] GV: Yes, we do. I definitely like to pick those ones out. [0:39:51] SF: Yeah, yeah. I mean, it's fun.
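[For flavor, here is a hypothetical sketch of the kind of postgresql.conf anti-tuning the article plays with. These are real Postgres settings, but the specific values below are our own illustration of the idea, not the article's actual configuration - and obviously never something to use in production:

```
# Hypothetical anti-tuning sketch: settings deliberately chosen to hurt performance.
shared_buffers = 128kB       # the minimum allowed; starves the buffer cache
work_mem = 64kB              # minimum; forces sorts and hashes to spill to disk
autovacuum = off             # lets dead tuples and table bloat pile up
checkpoint_timeout = 30s     # minimum; near-constant checkpointing
wal_buffers = 32kB           # tiny WAL buffer; very frequent WAL writes
```

Reading each knob in reverse - what its sane value does and why - is exactly the "reverse engineering" benefit described above.]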
It's actually very similar in some respects to one of the ones that I pulled, which is "How I Hacked My Washing Machine". There was a bunch of university students who reverse-engineered a smart washing machine to get Discord notifications when laundry cycles finished. [0:40:06] GV: Oh, amazing. Okay. [0:40:07] SF: They did it for fun. They were able to sniff the traffic using an OpenWrt router, and then they brute-forced the XOR key that was being used, using open source tools, to be able to decrypt the API messages. And then they built a bot that polls the machines, decrypts the responses, and sends updates to Discord. A completely ridiculous amount of work to put into that. But maybe it's all for noble purposes or whatever. I think those things are really fun, when people take their technical skills and have fun with them. That's usually what inspires people to study this stuff in the first place anyway. [0:40:42] GV: Yeah, exactly. I hadn't appreciated that it did involve real hacking, effectively. Because when people say they hacked their washing machine, it's often like, "Oh, we found there was actually a sort of API inside the washing machine, and we could just use that." But this was proper hacking. That's very fun. And yeah, you're totally right. This is what, in my opinion, people should be doing if they've got the skills and the time to do it. Just have fun with this stuff. And usually it ends up leading somewhere - you can talk about this in interviews as well. No one's going to be unimpressed that you did this. [0:41:10] SF: Yeah, exactly. [0:41:11] GV: Yeah. My second one - this was posted by Vihar Kurama. I hope I got that right. I don't think this was the author posting it, but it's to do with the product Plane, and it was a blog post written about the Plane product called "We Built an Air-Gapped Jira Alternative for Regulated Industries".
And this is interesting just from a security perspective. In order to get government contracts, Plane decided to commit to - they say they could suddenly land six-figure contracts if they could meet this spec, which was that it had to be entirely air-gapped. Their software could not make any outside requests, any pings to anything. To give a really basic example: even if software is self-hosted, the vendor usually wants to know about your licensing, so there'll be pings made out to check that your license is still valid, and so on and so forth. And to take every piece of the way you would normally design self-hosted SaaS, and have to remove that and design something that doesn't do this, is actually a pretty big challenge. But they claim they've done it. And so you can now get a fully containerized, self-hosted version of Plane. I think it's only being supplied under these fairly large contracts - you can't just get this version to run yourself. But I think we're going to see more of this. It sounds very innocuous - "Oh, it's just like Jira." It's like, "Yeah, but if it's the Jira equivalent for government, well, that's some pretty secret projects going on inside these Kanban boards." It's a very interesting use case. [0:42:49] SF: Yeah. I mean, I think once you get big enough, or a part of your business is based around selling into the government, some of these things probably just become things that you have to figure out. And I think what was really interesting, and what they talked about, is that they didn't just slap "self-hosted" on this and call it air-gapped. They literally have zero outbound connections. There aren't even license checks or updates as part of this. It's completely 100% air-gapped. [0:43:15] GV: Yeah. They provide a nice little table in the article explaining what you would normally do in this situation and what they had to do to make it air-gapped.
Go check that out. [0:43:23] SF: Yeah. The last one I pulled was this little study that was posted by [inaudible 0:43:29]. Hopefully, I pronounced the name right. I mean, this is the challenge with mentioning people's names from Hacker News, it's like - [0:43:36] GV: Just trying to get people their five seconds of fame. [0:43:38] SF: Yeah, exactly. [0:43:39] GV: You hear your username badly pronounced by one of us on this podcast to get your fame. Yeah. [0:43:43] SF: Yeah. It was just a little study they did. I thought it was fun. They looked at the impact of variable names on Copilot completion. You write a line of code, you give the variable a certain name - how good is Copilot at completing the next line of code? They experimented with four different styles of variable names: very descriptive names using standard styles like snake case or camel case, a minimal name, and then a fully obfuscated name. And, not too surprisingly, that same order was also the order of performance. With descriptive names, Copilot does best; with obfuscated names, it does the worst. Kind of similar to a person, I'm sure. But they also looked at performance versus token cost. Descriptive names cost you 41% more tokens, but you get almost 9% better performance. And presumably, for getting better performance - the person using the tools operating more efficiently - that probably offsets whatever increase in cost from a token perspective. [0:44:47] GV: Oh, very interesting. Yeah, I mean, naming is not something I maybe ever really considered could impact this. And this is just this person doing the study themselves, I guess. [0:44:57] SF: Yeah, I think it was something they were interested in, and they just ran the study themselves. You can go through a little bit of detail in their blog post about how they set it up and how they did the measurements. [0:45:08] GV: Awesome. Yeah.
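[As a rough, hypothetical illustration of the token-cost side of that trade-off - the example function and the approximation of roughly four characters per token are our own assumptions, not taken from the study:

```python
# Hypothetical example: the same function with descriptive vs. obfuscated
# names. Token counts are approximated with the common rule of thumb of
# roughly 4 characters per token; real tokenizers differ, but the direction
# of the trade-off is the same.
descriptive = (
    "def calculate_monthly_interest(principal_amount, annual_rate):\n"
    "    return principal_amount * annual_rate / 12\n"
)
obfuscated = (
    "def f(a, b):\n"
    "    return a * b / 12\n"
)

def rough_token_count(source: str) -> int:
    # Crude stand-in for a tokenizer: about one token per 4 characters.
    return max(1, len(source) // 4)

# The descriptive version costs noticeably more tokens than the obfuscated one.
print(rough_token_count(descriptive) > rough_token_count(obfuscated))  # True
```

The study's point is that the extra tokens buy better completions, which plausibly pays for itself in developer time.]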
That's Hacker News highlights for another month. Sorry if we didn't feature your article - we only really have time for four, and of course it's like 4,000 per day, but we do our best. Okay, looking ahead - just a completely fun part of the show where we make some kind of prediction for what might happen in the next month, and we get to see how far off the mark we were on the next episode. Sean, what's yours? [0:45:33] SF: Yeah, I'm going to go with - there was a lot of noise made in the early part of June, which we also talked about, with Databricks and Snowflake buying these Postgres database companies, Crunchy Data and Neon. My prediction is going to be that Salesforce, which is also trying to position itself as a data and AI platform company, is going to buy a Postgres database company in August. [0:45:57] GV: Nice. Okay, we'll definitely check back in on that one. Mine - I'm going to go with something related to what we were talking about in the main section, based on the fact that things just seem to move far faster than anyone expects. I'm going to say that during August we're going to see the first actual monetization through one of these platforms. Is it ChatGPT selling - do they actually stand up there and say, "Hey, you can now pay to do something that influences whether your website appears in ChatGPT"? It might not be them; it might be Anthropic. But I don't know, it always seems like OpenAI are the first people to move on this kind of stuff. [0:46:37] SF: Yeah, interesting. I think if that doesn't happen in August, it's probably going to happen at some point in the next six months. [0:46:43] GV: August is just far too fast for everyone. Let's see if OpenAI do their worst on that one. Great. Well, as usual, Sean, fun to catch up. Hopefully, we've been helpful to the audience, just catching everyone up on July and potentially what's to come as well. [0:46:58] SF: Yeah, thanks. It's always great to catch up with you, Gregor.
And the highlight of my month is to go through all these headlines and chat about these things. [0:47:05] GV: Yeah, likewise. We will see everyone next month on another SED News. Thanks. [END]