EPISODE 1885

[INTRODUCTION]

[0:00:00] Announcer: Python 3.14 is here and continues Python's evolution towards greater performance, scalability, and usability. The new release formally supports the free-threaded no-GIL mode, introduces template string literals, and implements deferred evaluation of type annotations. It also includes new debugging and profiling tools, along with many other features. Łukasz Langa is the CPython Developer in Residence at the Python Software Foundation, and he joined Sean Falconer to discuss the 3.14 release, the future of free-threading, type system improvements, Python's growing role in AI, and how the language continues to evolve while maintaining its commitment to backward compatibility. This episode is hosted by Sean Falconer. Check the show notes for more information on Sean's work and where to find him.

[INTERVIEW]

[0:01:04] SF: Łukasz, welcome back to the show.

[0:01:07] ŁL: Happy to be here.

[0:01:08] SF: Yeah, absolutely. So, we spoke roughly a year ago about the Python 3.13 release. I guess what's new in your world since we last spoke?

[0:01:18] ŁL: Well, not much changed in terms of my employment. A lot has changed in Python. And also, as a release manager of Python, just a few days back, I released the last Python version of my own. I've been the release manager for Python 3.8, which reached end of life last year, and 3.9, which has just reached end of life at the end of October. So, it's a bit of a milestone, I guess. You look back at the last seven-plus years doing this. I'm still involved in the release team, since I'm doing installers for Windows and helping with the Mac installers as well. So, I'm not entirely going to be gone from doing releases, but it is an end to a part of your life when you signed up to be a release manager for two versions that are now officially not supported anymore. That's a personal change, I guess. And yeah, we've since had the big PyCon in Pittsburgh, where we still worked very hard on Python 3.14. But we've since forgotten about it; it's already the old version for us core developers, because now we are very busy working on Python 3.15, which is the one that is going to be released next year. 3.14 went out in early October. Yeah, it's in the hands of users right now. So, I guess that's what's been keeping me busy for the last year.

[0:02:54] SF: Okay. And then, in terms of all these changes that are happening in the world of Python, I'm just curious. Since Python has seen such activity, in particular with people building on top of AI models and using it to build AI applications and things like that, are some of the catalysts for the changes coming from there? Or are these just, "Hey, we have a road map. We know we need to make certain improvements and updates," and it has no bearing on what's going on in the world of AI?

[0:03:21] ŁL: There is no secret road map to where Python is going. It is largely informed by what is happening around us. Obviously, AI has a decent influence over how we think about Python. Because now, not only is Python crucial in developing AI solutions, it is essentially used in any company that claims to be AI-related somehow. It is literally unheard of not to have Python involved in that somehow. But also, Python is the language that AI writes. Very often, when you don't specify which language you would like for your solution, and you ask Claude Code or ChatGPT to help you with a problem, it's going to default to Python. Those things are on our mind, obviously.
However, just like my ex-employer used to say, code wins arguments. So very often, the way we function is that people come with suggested solutions to problems that they're having themselves. And whether this is a contributor who is an individual, or a group of people maybe inspired by problems that their large employer has, or whatever, we treat all of those things seriously. And those inform what the next version of Python is going to look like. If you do want to have input into how that works, you can, simply by contributing, simply by being part of it.

[0:04:55] SF: Python 3.14's here.

[0:04:57] ŁL: Yes.

SF: I guess if you had to describe the theme of the release, what would it be?

[0:05:03] ŁL: Funny enough, there are just many changes. Some of them are really big, some of them are tiny, and yet maybe cute enough to mention. But the thing that I personally think is the most crucial long-term change in Python right now is that, in Python 3.13, we shipped this experimental feature that allowed people to disable the Global Interpreter Lock. The Global Interpreter Lock is the thing that ensures that everything is going to stay correct and not crash inside your interpreter's C code when it comes to the internal data structures of the interpreter and so on. But it is also a big scaling bottleneck. So in 3.13, thanks to Sam Gross and the big PEP 703, we shipped experimental support for making the GIL optional. In 3.14, this mode of operation is now supported. It's still not the default. You have a specific special version of the interpreter called Python 3.14t. You can see this in your site-packages; there's going to be a 3.14t directory, and .so files, or dylibs on macOS. So those are a separate ABI. But we are now saying out loud that this is no longer an experiment. We intend to support this feature, which allows the scientific community, which is extremely interested in this, as well as others, to slowly, or hopefully not so slowly, adopt this and make it the default a release or two from now. A release from now is maybe super optimistic. Probably not. Two or three releases from now would be great. The difference between running the traditional version and the new version is not very easy to notice unless you have an application that already uses threads. In some cases, you might need to introduce some threads to applications that didn't have them before. But in other cases, you will be able to see wins right away. Funnily enough, in our asyncio test suite, we have some thread pool executor tests, and some other tests actually assume that there's going to be some threading going on. So we've seen performance improvements with free-threading on asyncio before we made any changes to asyncio itself. Now, obviously, there's a lot of tuning inside the interpreter. So 3.14 is actually fantastic for free-threading. It reached close to the single-threaded performance of the version with the single Global Interpreter Lock, but it scales. If you have multiple cores, and even your phone does, everything now has multiple cores, your application will be able to utilize this much more fully without you having to resort to multiprocessing by hand.
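To make that concrete, here is a minimal sketch of the kind of workload free-threading helps with: CPU-bound work fanned out to a thread pool. On the free-threaded build (python3.14t) the workers can run on separate cores in parallel; on the default build the GIL makes them take turns. The prime-counting helper is purely illustrative.

```python
# Minimal sketch: CPU-bound work fanned out to threads.
# On the free-threaded build (python3.14t) these calls can run in
# parallel on separate cores; on the default GIL build they take turns.
import sys
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit: int) -> int:
    """Deliberately naive CPU-bound work."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # sys._is_gil_enabled() reports whether the GIL is active on builds
    # that support free-threading; fall back gracefully on older versions.
    gil_check = getattr(sys, "_is_gil_enabled", lambda: True)
    print("GIL enabled:", gil_check())
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(count_primes, [200_000] * 4)))
```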
This obviously is the sort of theme for me, that we are supporting this now. But I don't think it's going to be the theme for most users yet, since we now pretty clearly target library maintainers and early adopters to essentially prepare the ground, prepare the community, the third-party packages, and everything for eventual usage of free-threading, which I strongly believe is the future of Python. It's something we should have done a long time ago, but better now than never. And it's shipped now. It delivered on its promise. So, I highly recommend you test it if you have an application that might actually benefit from it.

[0:08:55] SF: And then how does this relate to the support for multiple interpreters in the standard library?

[0:09:01] ŁL: Those are specifically two separate efforts. And to a small extent, unfortunately, they were competing, since the entire promise of free-threading essentially says: you can start threads now in Python, which means they can compute in parallel, but they share data. They share everything. And this is good and bad. It's fantastic because you don't have to serialize and deserialize data that you want to share. But it is also terrible because you now share everything, including things that you might not realize you're now mutating from two separate ends of your program at the same time. And now you're introducing race conditions. You're introducing subtle bugs that might sometimes be tricky to debug. Free-threading is a sharp tool, and you might cut yourself with it. Whereas sub-interpreters inside the same process function with the philosophy of sharing the same process, which means they will still share the same operating system limits on how many processes you can spawn and how many file descriptors they can open, and so on. But the idea is that they will not be sharing any data unless explicitly allowed. Immutable data might be able to be reused across interpreters with little or no serialization, with a little orchestration. But regular lists and regular dictionaries will just not work. You cannot just use them from two interpreters at the same time. They are controlled by one. And you now have to have some smart and efficient way to exchange data between them. We don't really have much in terms of library support for making that simpler than it would be with multiprocessing, where you already pickle and unpickle data because you're literally passing it to a different process. Sub-interpreters really are about isolation, and this is the core feature. Whereas free-threading is sort of about the opposite, about being able to scale your application in a better way than multiprocessing, because you don't have to serialize/deserialize anything, because you are sharing everything. They are kind of polar opposites, and I'm pretty sure they will be used for different use cases. Let me give you an example. Free-threading is something that is going to be useful when you have some huge data set and you want to split computing the result for this huge data set among many smaller pieces. Before, we always just, okay, spawned multiple processes and sort of passed the data somehow. But now, essentially, the process that passes all this data to every worker is the bottleneck, right? Because that process is just one, and it orchestrates all the work. And then when the results are computed, you have to pack the results again. And again, the orchestrator process is the bottleneck.
With free-threading, that stops being an issue because you don't have to do this packing-unpacking process at all. That's the perfect scenario for sharing data. And for isolation, for example, imagine that you have some digital audio workstation. You have this ability to instantiate virtual instruments and virtual effects. They can be written in C++ and other programming languages, and that includes Python. All of them are, in fact, running in the process space of this digital audio workstation. If you instantiate this .so, it's going to start some code, and it's going to be running. That's great. But what if you write it in Python and start seven instances of the same effect or instrument? Before sub-interpreters, before this isolation, that would not be possible. Literally, you would not be able to do this, because you would crash the process, since the second effect or instrument that uses Python would write on top of the same in-memory data structures that the first interpreter already used, and that would be terrible. Now, with sub-interpreters, you can isolate them. So those plugins can live entirely independently, not even know about each other. They can be entirely isolated from each other. Different use cases. At the development stage we are at right now, they are both able to scale, but for different reasons, and I think for different use cases.

[0:13:40] SF: And with the sub-interpreter, what is the advantage there versus spinning up completely separate processes?

[0:13:46] ŁL: The digital audio workstation is one example, because that environment will literally just not allow you to spawn multiple processes, since it is controlled such that processes don't run away. If you have any sort of containerization, for example, for Mac App Store applications, you can disable this ability for processes to just randomly spawn sub-processes. And for locked-down environments like, for example, the iPhone, if you run an iPhone application, it literally cannot start a sub-process. This is not an option. It is not possible. For this use case, sub-interpreters will allow you to have an embedded Python interpreter within your, I don't know, editor, or note-taking application, or any other app where you will allow users to script their iPhone application, and that's going to work. It's going to be fine. Whereas just shelling out to a Python sub-process would really be impossible. That's not allowed by Apple, since, again, those processes could run away and just drain your battery when the main application dies. There are plenty of related environments with virtualization and containerization that also want processes to stay single processes and not just proliferate with multiprocessing. Plus, as I said, we are still at the beginning of the road for internal reuse of our data structures, but we're already starting on that. If you stick to a subset of immutable data structures in Python, with the new concurrent.interpreters standard library module, you can already see how you can pass data between interpreters if you want to. Some of that is already pretty efficient. We're going to build on top of that, and then the advantage over multiprocessing is going to be pretty evident: you can safely share immutable data without copying. Whereas with multiprocessing, unless you're using shared memory, which is yet another can of worms, that's impossible. You need to serialize/deserialize everything.

[0:15:56] SF: Right. Yeah.
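For a feel of the isolation described above, here is a small, hedged sketch using the new concurrent.interpreters module. It assumes the PEP 734-style API (create(), exec(), close()); exact details may differ slightly.

```python
# Hedged sketch of the new concurrent.interpreters module (PEP 734-style API).
from concurrent import interpreters

counter = 42  # lives only in the main interpreter

interp = interpreters.create()

# Code passed as a string runs inside the sub-interpreter, fully isolated:
# it has its own modules and globals and cannot see or mutate `counter`.
interp.exec("""
import sys
print("hello from an isolated interpreter", sys.version_info[:2])
print("counter" in globals())  # False: nothing is shared implicitly
""")

interp.close()
```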
And I wanted to talk about this new feature around template string literals. I guess, what is a template string literal? And what problem is it solving that you can't already do with f-strings?

[0:16:09] ŁL: Cool. Another case where 3.14 is like the T release. We had Python 3.14t, meaning free-threading. But here, t-strings are template strings. Template strings are not strings, actually. They are a very convenient notation for creating template objects. They look like f-strings, because you are using braces for interpolation. But what that interpolation is, is an argument to the template object. This sounds a little weird and confusing, but just think about the use case for it, which, for example, in the JavaScript world is being able to put HTML in your JavaScript code with backticks, right? You can just put HTML templates there. And with some brace magic, you can compose components together into something more complex. Now with t-strings, this is possible in Python. With t-strings, you get to think like you're still in a string. But what you're getting is, in fact, an object that a library can later introspect and build on top of. You can have HTML processing that will actually put more HTML there for you, or it will compose components together, or it will do user data checks, security checks, whether some string validates or not, whether it's secure or not, and so on. This can be used for SQL querying and all other sorts of use cases where you want to bring a different notation language into Python, into the same script, and you want to give it some structure that later on your code can build on. You could do this before with regular strings. The problem with that would be that every time you wanted to make a tree out of your random raw string, you had to parse it. So this would be very inefficient, because every time you pass it back to the user, that template is just a regular string. They're going to make changes to it. They're going to wrap something in another string. And your code needs to be the parser again. Whereas here, it's Python at compile time that builds the objects, and the notation is already actually constructing Python objects within the bytecode. So it is very efficient, but it looks to the user as if you're just working with cute strings that are just HTML, or SQL, or whatever else. It is, I think, a pretty pointed feature, in that it is not going to rule the world of everything around you. You're not going to be formatting strings with it. That's not the use case, actually. The reason why we have it is for templating. For anything where you were using strings before, and those strings were somewhat lacking, because it was too easy to get it wrong, or the parsing was just making runtime slower. Now with t-strings, there is a way for us to maintain a very user-friendly notation, but to allow for efficient computation, with the objects being ready for library use. The PEP that explains the feature is actually excellent. It had to convince people that this sort of feature is, in fact, desirable. So it does a pretty great job of explaining why we want this and how the nitty-gritty details of it work.
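As an illustration of what a library on the other side of a t-string might do, here is a hedged sketch assuming the shapes described in PEP 750: a t"..." literal produces a string.templatelib.Template whose parts are literal strings and Interpolation objects carrying the evaluated values.

```python
# Hedged sketch of consuming a template string (PEP 750-style API).
from html import escape
from string.templatelib import Template, Interpolation

def render_html(template: Template) -> str:
    """Rebuild the text, escaping every interpolated value."""
    parts: list[str] = []
    for part in template:
        if isinstance(part, Interpolation):
            parts.append(escape(str(part.value)))  # user data gets escaped
        else:
            parts.append(part)                     # literal text passes through
    return "".join(parts)

user_input = "<script>alert('pwned')</script>"
page = t"<p>Hello, {user_input}!</p>"  # a Template object, not a str
print(render_html(page))  # the <script> tag comes out HTML-escaped
```

The same pattern applies to SQL: the library sees exactly which parts are literal query text and which are interpolated user values, so it can parameterize them safely.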
I do expect that most people using t-strings will not have to think about those intricacies, because the library that they're going to be using, whether this is going to be some future version of Django or some database interaction library, is just going to say, "Oh, we have all those objects here. But if you want to use a raw query, use a t-string and put it here as an argument to this method." And people are just going to do it. They're not going to think about what is happening on the other side of the library, that the library is now thankful to get nicely structured objects instead of a silly string that it needs to parse itself. One example where that parsing needed to happen before, and it was sort of always annoying, was when, in SQL libraries, you sometimes pass some arguments to a SQL query from the user. You don't want to pass them literally as part of a string, because what if it's Bobby Tables? It's some SQL injection, right? So you pass the query first, and then the arguments one, two, three, four as separate arguments to the method. That sort of discipline had to be maintained, because otherwise, "Uh-oh," you did something wrong. Now with t-strings, you can just pass a t-string and put those arguments where they belong in the query. In that way, it is more readable to you, because you know what's going on when you're reading this code. It is sort of preferable to look at it that way. But it is still safe in the same way, because the library can check those arguments. It sees which part of the query is a passed argument, an interpolated value. It is just an incremental improvement on what we have been doing all along, but it can compose. You can put templates inside templates inside templates and build an entire tree out of that. That's where the HTML example comes from. I believe that in the end, what we're going to get is some very pointed use, just like with assignment expressions. It's not going to take over the world, not like f-strings, but it's still a very, very welcome addition to Python.

[0:22:04] SF: Are there any performance implications?

[0:22:06] ŁL: A little bit inside the compiling stage. When you're first starting your application and it is creating .pyc files from your source code, then we're doing additional work, because we're actually translating your string into those objects that we're then passing to functions, right? So we're not actually passing strings, we're passing template objects. But that is the single piece of parsing that we need to do. And your code at runtime later does not have to do this. In fact, the performance implication is positive for almost all uses. It should be faster to do it that way than the manual string parsing that we did before. Not least because the template string parsing is part of Python now. It's in C. It's highly optimized. Whereas a lot of the parsing that you would do on your own strings in a Python application would just be done in Python. Just dropping to that lower level of computing already saves you quite a few CPU cycles. If anything, I suspect that people will welcome this as a performance improvement and not the other way around.

[0:23:15] SF: Mm-hmm. I also wanted to talk about the way type annotations are handled, and making this evaluation deferred. Can you explain a little bit about what all that means and how it's different from what was being done in Python 3.13?
[0:23:31] ŁL: Yeah, this is a little personal, since the PEPs that were actually approved and are now implemented are fixing a PEP that I wrote, which turned out not to be a good enough solution for the general case. Briefly, the problem. When we first introduced type annotations to the language, they had the pretty obvious constraint that they were still living among all the other objects inside your Python script. They were the same kind of object as any value that you were using in your Python application. That meant that if you had a module-level function that took an argument that was some class, I don't know, Animal, right? And you wanted to annotate that, "Oh, this function takes an argument of the type Animal," you had to have this name ready at the moment that function was defined. But what if this class Animal was defined below your function? Well, that's not great, because that class does not exist yet. For this reason, we had to support forward references. And those forward references were pretty annoying to write down, because you had to use strings. You had to just say, I cannot use this nice object because the name Animal does not exist on line 30, because the class is only defined on line 70. I have to say quote, Animal, quote. That's not great, right? String quote, string quote. The solution that I initially came up with in PEP 563 was: how about we do it differently and just make all annotations strings? So you don't have to actually write the quotes. But in the end, they will be quoted, kind of, in terms of meaning, right? Even if you just say Animal in your source code, in the end, this will just become a string in the dunder annotations of that function. This solved the problem, and it allowed for a bunch of other nice features. For example, that was in the times of Python 3.7. Whereas in Python 3.9, we allowed built-in collections, like list, or iterable, or tuple, to be generic, which means you could just say lowercase list, square brackets, of string, right? You don't have to from typing import uppercase List. You could just say, "Oh, there's a built-in list. I'm just going to say this type argument is lowercase list of string," and everybody's happy. However, that didn't work with Python 3.7. But if you just said from future import annotations, you could use this already, because it was valid Python syntax, and it was a string anyway. Python did not object to that. Actually, list in Python 3.7 couldn't be indexed. It didn't care. And Mypy understood it. Everybody was happy. And then in 3.10, we allowed typing unions to be created using the pipe syntax. So instead of saying uppercase Optional of int, you could just say int pipe None. Int or None, right? That is a much nicer notation for optionals or any other sort of union. If you wanted to say it's either int or string, you could say int pipe str. Int or string. That's a nice notation. And Python 3.7 supported it, in air quotes, because with from future import annotations, you could just turn everything into a string, and that was great. However, turning everything into strings actually limits what you can do with those strings later, because you cannot so easily find out what that string meant in the moment it was defined. What I mean by this is that if you are looking at an annotation of a function that takes an argument that is an Animal object, that is an Animal from where? Where does Animal come from? What does that word mean?
What module does this class come from? If you look at it from a different module later on, when the application is running, you didn't know. All you knew is that there's a string, and it says Animal. And it turns out that there are some pretty heavy users of runtime type annotation introspection, like Pydantic, and they were pretty upset, not least because a popular pattern, it turns out, in Pydantic applications is to create classes inside a function. Literally, those classes don't exist later on at module level, where you could get to them in any sensible way. If you just have a bunch of strings that say, like, this is some request, this is some order, this is some other type of class, how are you going to refer to that class there? It's not importable. It was a local inside of some function. Samuel Colvin was pretty upset about this not actually being possible anymore with from future import annotations, along with a group of other people who depend on this functionality. They raised the objection. Like, "Hey, this approved PEP 563 is actually breaking a use case that we depend on." So we stopped short of actually making this future behavior the default. And for a while, we were just looking for a new solution. And Larry Hastings, with PEP 649, actually came up with it, which is to still defer evaluation. All this forward reference stuff can happen. But to do it implicitly: instead of turning things into strings, you're implicitly turning them into lambdas. They're essentially deferred evaluation functions that, the first time you ask about them, will evaluate. But they will evaluate in the context that they were defined in. Even if you had a bunch of classes that were locals, we still hold on to that frame, we still hold on to those locals. Even half an hour later, when that function call is long gone, you can still ask for dunder annotations from that particular piece of Pydantic code, and it will tell you, "Oh, that's this Animal." It doesn't exist anywhere else in your application. It was just in locals, but you can still refer to this particular real class. And with this functionality, this entire problem is, in fact, solved in the most correct way possible. It has some edge cases that were a little nicer with PEP 563. But I think the compromise, the end result, is well worth it, because now the flexibility of annotations is way better. Obviously, the devil was in the details. And even though PEP 649 is quite old at this point, it was created four and a half years ago, it took a while for the implementation to actually get to the level that was required to be part of Python. And Jelle Zijlstra actually tackled the problem. And when he did, he discovered that there are actually a lot of little and larger edge cases to be discussed and decided on. Instead of just making changes to the already approved PEP 649, he wrote another one, PEP 749, which specifies all those things that needed to be nailed down additionally for the implementation. And with that, for Python 3.14, this is now the new default. You don't have to import anything from future. You get this behavior by default. Because the wonderful thing about it is that it is essentially, almost "wink-wink", entirely backwards compatible with the previous default behavior. We still have the from future import annotations if you do rely on things being strings.
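A minimal sketch of what the new deferred behavior looks like at runtime in 3.14, using the annotationlib module added alongside it; the exact output is illustrative.

```python
# Hedged sketch: deferred annotation evaluation (PEP 649/749) in action.
import annotationlib

def feed(a: Animal) -> None:  # forward reference, no quotes needed
    ...

class Animal:  # defined *after* the function that refers to it
    pass

# The annotation is evaluated lazily, in the scope where `feed` was
# defined, so by the time we ask, Animal exists and we get the real class.
print(annotationlib.get_annotations(feed))
# {'a': <class '__main__.Animal'>, 'return': None}

# You can also ask for the string form without evaluating anything.
print(annotationlib.get_annotations(feed, format=annotationlib.Format.STRING))
# {'a': 'Animal', 'return': 'None'}
```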
But unless you use that future import, you're going to be using the new automatic behavior of turning everything into lambdas, which is indeed good enough for forward references and for a bunch of other cases, and which allows runtime use much more than my PEP ever did.

[0:31:55] SF: And then, for this, what is the impact on existing code that was using things like the future imports?

[0:32:04] ŁL: That is a good question. The future import is still there. It is not entirely clear what to do with it. One of the things that PEP 749 needed to define was: what are we going to do with this future import? Future imports were never feature flags. They were just a way of enabling a feature that is going to become the default in some future version. Hence the name, right? And now you have a future import that will never become the default in a future version of Python. What to do with that? And PEP 749 essentially punted on this deprecation. It says, "Well, there are still legitimate cases to have this." Plus, it already works, so there's very little cost for us to keep it. There are some discussions on our official forum about this. But currently, we don't intend to remove the future import, and we don't intend to change it in any way. We only signal that, "Look, in Python 3.14, there's a better default. If you just don't do anything, the situation is already better." The problem with us is that we are living, literally, pun not intended, in the future, because the core developers just released Python 3.14. We are already working on Python 3.15. It's easy to lose track of the fact that the most popular versions are still somewhere around 3.11 or 3.12. Before people, overwhelmingly, actually reach 3.14, where this new feature arrived, it's going to be a couple of years still. And before people can safely say that the libraries they maintain no longer need to be compatible with Python 3.13, well, that's an even longer proposition. Remember, I just released the last ever version of Python 3.9 and declared Python 3.9 end of life. And this was, in fact, a moment that a lot of people were waiting for. But why? It's a super old version. Well, because if I declared it officially dead, then library maintainers could use this as a reason to drop support for Python 3.9 in their libraries. It's not officially supported, so we also don't support it anymore. If you want to still use Python 3.9, use an old version of our library. The newer versions will require Python 3.10. We are at 3.10 right now as the oldest version supported by the majority of library maintainers. Before we get to the point where the oldest supported version for libraries is going to be 3.14, that's still close to five years away. Which is why, until at least Python 3.18 or 3.19, we are very unlikely to be able to remove the future import. If your code already has it, or if you see it in the wild in your dependencies, don't worry about it. It will not just stop working next year or in two years. It's still going to be around for a long time. But eventually, yes, we will get rid of it. But we will be very careful not to do it abruptly.

[0:35:24] SF: When you introduce something new that sort of replaces an old way of doing something, how long before you essentially take the old way of doing things out of support?

[0:35:36] ŁL: Deprecation is, in fact, a pretty important topic on our minds right now. Historically, the only thing that we said was that there is a PEP that specifies this. If I remember correctly, that's 387.
And PEP 387 is our backward compatibility policy, and it has kind of the broad strokes of: don't break people, you have to warn them, you have to give them enough time, and whatnot. But this PEP was written in times when our release schedule was really, "We release when we're ready." We were roughly releasing every 18 months, but sometimes it was closer to two years. So what it said you need to do is: you don't remove the feature. You first create a pending deprecation warning, which means this is going to be deprecated in the next version. This pending deprecation warning is released today. In 18 months, there's going to be a deprecation warning. And then 18 months from there, which is at least three years, but probably more like four or five, at that point, you will be removing the feature. That's what PEP 387 originally told us. Like, "Hey, at least wait two releases. You need to create a pending deprecation warning, and then the deprecation warning, and then remove." It was pretty clear when we switched to the annual release cycle that this accelerated the backwards compatibility policy as collateral of our increased release cadence. And that was not really intended. We still wanted to have our support windows as long as they were before. Since then, the PEP was updated to say that, "You know what? You can actually extend this deprecation timeline to as long as five years, because there's really not much in terms of maintenance cost most of the time." If there is significant maintenance cost to maintaining both behaviors, then we might just have to be more aggressive, saying, "You know what? This old thing goes away. If you still need it, use an old version of Python." But it's very unlikely. We very strongly avoid this. So, it's closer to five years. But it turned out that sometimes, even with five years, what you're gaining from cleaning things up is just this feeling of things being clean, but you didn't really fix any problems for any real users. Nobody's thanking you for there being fewer methods on some object. It doesn't really help anybody. Now, very recently, the PEP was also updated to introduce this notion of soft deprecations, where we can now just say that we deprecate a particular behavior only in documentation. It's not even a warning that you're going to see at runtime. We're just deprecating it, saying there is a better way of doing it. Use the better way. But there's very little advantage to breaking users' code. So we'll, in those cases, just leave it there forever. And maybe this is actually what's going to happen with the future import of annotations, because it is not a significant cost for our maintenance. PEP 749 explains what I told you before, that at some point after 3.13 goes end of life, we will deprecate from future import annotations. But even then, it is not clear whether we will actually be removing it very quickly, or we will just leave it. Similarly, the Global Interpreter Lock. The build that allows you to disable it sort of suggests that, in some future version of Python, this will be the default. And that's true. But whether we will actually remove the Global Interpreter Lock entirely is not actually a decision that we've made. Probably it makes sense to just keep it around forever. Because, fundamentally, it is not a huge maintenance cost at this point.
And it gives you this safety net of falling back to the old solution when you are importing a C extension that is old and doesn't know how to behave in this new world of free-threading. Yeah, backwards compatibility is definitely a consideration that we're treating increasingly seriously. After the Python 2 to 3 transition, we knew that we needed to. But now, even with smaller breakages, we really think hard to avoid them.

[0:40:22] SF: I know there are a lot of other things that are discussed in the 3.14 release, but what is one of the other features that you think is worth bringing up here that you're interested in, or that you think other people who are involved in the Python community should know about?

[0:40:36] ŁL: Well, there are things that are actually cool, and things that I am super subjective about because I worked on them, right? Let me start with the thing that is actually super cool, and I had very little to do with, which is the safe external debugger interface for CPython. You can read the very long PEP, and it's very impressive. But long story short, what you can do with Python 3.14 is you can PDB into a running process that is remote to you. It can be in a container, and you can still PDB into it. It can be on a machine somewhere over the network. And as long as your network configuration, your firewalls and stuff, allows it, you can PDB into that networked machine and debug a running process. And you don't have to restart it anymore. You can just debug things that are remote to you and already running. If they hung on something, you can now learn why. You can kind of PDB into an application and debug it as if you had started it on your own local box, which sounds like, "Okay. Cool feature. But is it so foundational?" I think it is pretty foundational, because this ability to just be able to find reasons for strange behavior is excellent. We already worked very hard, and Pablo kind of spearheaded this, on instrumentation being able to just look at foreign processes. If you had, I don't know, strange behaviors, some memory leaks, or CPU being hogged by a thing, you could run a profiler on your process and see where the CPU cycles go, where the memory goes. You could check those things. But very often, it was not enough, because the only thing that you could see is, "Okay, there are many calls here." Or, "Oh, okay. This thing declared - well, actually used this much memory." But you don't know why. And now, actually having a debugger, which means you can issue commands, you can execute code on that remote Python process, that is amazing. That is, I think, the kind of killer feature: regardless of whether you're into free-threading or not, regardless of whether you need SQL templating or whatnot, regardless of whether you're happy that forward references in your type annotations now work automatically, being able to debug a process that is already running, I think, is pretty cool. That I have to mention first. But then, something that I actually did work on, with Yury and Pablo, is additional visibility into asyncio applications. All those things that I told you about profilers before, that we could already look into an application that is running and see what Python and C functions are running right now in that application at the same time, that is pretty cool.
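As a rough illustration of the workflow just described, here is a hedged sketch of the new remote debugging interface. The command and function names follow what the 3.14 release notes describe for PEP 768; treat the exact flags and signatures as assumptions.

```python
# Hedged sketch of the remote debugging interface (PEP 768 / Python 3.14).
# From a shell, attaching an interactive debugger to a live process looks
# roughly like:
#
#     python3.14 -m pdb -p 12345        # 12345 is the target's PID
#
# Programmatically, sys.remote_exec() asks another running CPython 3.14
# process to execute a script at its next safe opportunity, no restart.
import sys
import tempfile

def snapshot_threads(pid: int) -> None:
    """Ask the target process to dump its own thread stacks to stderr."""
    script = (
        "import faulthandler\n"
        "faulthandler.dump_traceback()\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
    sys.remote_exec(pid, f.name)  # the target runs the script in-process

# Usage (assuming 12345 is a Python 3.14 process we are allowed to debug):
# snapshot_threads(12345)
```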
But what you couldn't really see is, what is asyncio doing at the time? Because, long story short, the way asynchronous programming works is you have some event loop, and the event loop runs a particular piece of your coroutine only one at a time, right? The entire point of it is that it maximizes the usage of a single thread. Instead of wasting your entire thread on waiting on the network, you can run other things at the same time. That's great. But you only run one at the same time. If you just run a profiler on that sort of application, what you're going to see is, at any given point, there's only one thing running, and it's running on top of the event loop. You don't see the causal chain of what is awaiting on what. How many tasks are there in this application? What is actually the cause of those requests taking a second? What is the thing that is breaking the application? Now with Python 3.14, you can see a tree of tasks, and you can see who awaits on whom. Again, on a remote process, you can just direct asyncio ps at that thing, and it'll tell you what this process is busy with. And that, I think, again, is not really a feature that allows you to import a flashy thing for your code, but it's something that is going to make living with your current applications much easier, because it's also a way in which you can just see what is going on much better. Especially since, with async programming, just print debugging is not so easy either, since there are many things happening at the same time. Yeah, the ps tree that I worked on, I think that is cool as well. And the third thing - I'm sorry, I'm sort of beating my own drum here. But I have to say, I think it's pretty cool that now we have syntax highlighting in the default REPL. So if you just start Python and you start typing in commands, having them in colors is just a cutesy feature that you will not think twice about. You're just going to be like, "Oh, I guess that's cool." That's exactly the reaction that I expect. But believe me when I say, when you then revert back to 3.13 for any reason, and you see that there is no syntax highlighting, it almost feels like it's broken. It feels like you're going back in time to some past that you no longer want to go back to. The experience just feels worse. So, it's one of those usability features where the colors just provide you with an additional dimension in which you can look at the code that you're typing in, and it just provides some additional polish that I think makes the language feel better. It does for me. Those are the three features that I think we definitely should be covering, but there are many, many more.

[0:46:32] SF: Yeah. Yeah. Awesome. Well, I think we covered a lot. For those that are listening, check out the release notes to learn more. There's a ton of stuff available in this release. And Łukasz, thanks so much for coming back on the show.

[0:46:44] ŁL: Absolutely.

[0:46:45] SF: Cheers.

[END]