EPISODE 1609

[INTRODUCTION]

[0:00:00] ANNOUNCER: Carlos Sanchez is a Principal Scientist at Adobe, where he works on Adobe Experience Manager. AEM is a content management system analogous to WordPress and provides a platform for site creation and content delivery. In addition to his work at Adobe, Carlos has a long history of contributing to open-source projects, including Apache Maven. He joins the show today to talk about his work at Adobe, open source, and more.

This episode of Software Engineering Daily is hosted by Jordi Mon Companys. Check the show notes for more information on Jordi's work and where to find him.

[INTERVIEW]

[0:00:47] JMC: Hi, Carlos. Welcome to Software Engineering Daily.

[0:00:49] CS: Well, thank you for having me.

[0:00:51] JMC: Pleasure to have you. We are here to talk about your work at Adobe. You were recruited a few years ago already? How long have you been working on this project at Adobe?

[0:00:59] CS: Yeah, over 4 years now. Yeah.

[0:01:01] JMC: Okay. Of which you gave a talk at Open Source Summit Europe 2023 in Bilbao, Spain, where you and I met. Tell us about the product before we jump into what you are still doing there.

[0:01:15] CS: Yeah. Well, thank you for having me, of course. Adobe Experience Manager is the product I'm working on at Adobe. It's a content management system at its core. It has a lot of open-source background. A lot of open-source components. A lot of people are contributing back to open source. That's how I got to know my teammates, back from my background at the Apache Software Foundation.

It's a very enterprisy content management system. It's very widely used. There are a lot of developers extending it. It's maybe not very well known, I guess, compared to how widely used it is. It's very surprising. Even when I joined Adobe, I wasn't aware of the exposure that the product had. That's the core of Adobe Experience Manager.
[0:02:09] JMC: In a way, WordPress is the king of, sort of like, B2C. Individuals, small companies. And maybe AEM, Adobe Experience Manager, is more the enterprise king, right? I mean, I don't have any data to support that claim. But the way I see it is that, in a way, Adobe serves a different market. And it probably supports the idea that that market is not as sexy as individuals or small startups. And therefore, it doesn't get so much attention.

Before we get into the situation the project was in when you joined, what open-source projects were you contributing to in which you met, sort of like mingled with, developers that were working on AEM before? What are the specific open-source projects that AEM relies upon strongly?

[0:02:55] CS: Yeah, I think my main contributions back in the day were to Apache Maven when it was starting. Maven 1, Maven 2. It was a long, long time ago. And that's when I was most involved with the Apache Software Foundation. And I met a bunch of other folks at different events, conferences, mailing lists and so on.

And the projects that were used regarding content management, there was back in the day Jackrabbit and the standardization of the Java APIs for content management. All that was happening around 2005 or so. And from those projects, that kind of evolved.

And the projects we use today from the ASF are [inaudible 0:03:41] and OSGi. Projects related to OSGi component management in Java. Probably a bunch of others. Of course, Maven. It's almost ubiquitous everywhere in Java projects today. And that's when I started, back in 2004 or five, or three, in this open-source world, when I was still in university. And since then, I've been contributing to a bunch of other projects. Before joining Adobe, I was doing a lot of work on Jenkins. And that's also where I got more exposed to Kubernetes and Docker container technologies.
[0:04:20] JMC: It says a lot about your experience with Java, the fact that you consider Maven and Java different things, or one part of the other. Because in my view – and I have very little experience in Java or in the Java ecosystem – for me, they are inseparable. Maven is a piece of technology of Java. And in a way, it has become that, right? But it was something different and separate. And it has become part of it only because it's so relevant. But yeah, it's not part of the core technology, right?

[0:04:52] CS: Yeah. It's not. And I guess because I saw this baby getting born. And there was a big battle back in the day. This Ant versus Maven, declarative versus imperative. And this whole model. And there was a huge fight between each other, on how people hated Maven, or liked Ant, or hated Ant, liked Maven and so on.

[0:05:16] JMC: Tell us about your experience. What do you reckon was relevant to the Adobe people that hired you? What skills made you attractive for the goals that you will later explain that Adobe's leadership wanted to apply to AEM? What is it that you've learned through all this time contributing to the Apache Foundation, but also getting involved in Jenkins, as you said, or Hudson? That was its original name, right?

[0:05:42] CS: Yeah.

[0:05:43] JMC: And containers, and Docker in particular. Yeah, what are the skill sets that make you important for Adobe right now?

[0:05:51] CS: Yeah, I think it was containerization, Docker. I started in Docker fairly early. We were building a startup doing DevOps tooling back in 2009, 10, before it was sexy. We were working a lot with Docker. And then that moved on to Kubernetes. Very early on in the Kubernetes project, when it was really, really painful to use.

I think if you were to look at it again, it was very clear that containerization was going to be very successful. And once you went through the, "Okay, I'm running containers in one machine. How do I run containers in more than one machine?"
Things like Kubernetes and Mesos back then were the obvious next step.

And that's when I started playing with Jenkins and Kubernetes. Coming from, "Okay, I have this hammer. How can I use it?" I was creating a Kubernetes plugin for Jenkins so you could run your Jenkins builds. There were plugins for running them in VMs, cloud, Docker and so on. And I started the plugin that allows you to run the builds in Kubernetes. That's how I started in this world. And this was fairly early on in the Kubernetes life. That got me very interested. And I guess that gave me a bunch of time to learn, and make mistakes and learn from them.

[0:07:18] JMC: You are truly a full-stack developer. Because you're very experienced in the most widely used CI system. Maybe build system? CI. I would say Jenkins is more a CI system, a scheduler of sorts, than a build system like Bazel, or Pants, or whatever. Then you extended it to be able to run jobs elsewhere than on your machine, right? That's why, I mean, you can use it with Kubernetes, right? To run those jobs remotely. And then you're very familiar with Maven, which is in a way the Java package manager? Is that a good way to call what Maven is?

[0:07:53] CS: Yeah, package manager. Build tool. Yeah, my background started in Java mostly. That's where, I guess, my work experience started. And then it kind of evolved into more operational matters.

[0:08:08] JMC: Okay. You also manage the ops side. Because what I just described is mostly the dev side. The way in which one builds software, tests it, whether on one's own machine or remotely, and then packages it. And that's where, I think, in my view of the world, dev ends. It's like, "Here you are. There's your package. Operations people, take it to the world and so forth." You've also become acquainted with that side. Let's call it the deployment side of things.

[0:08:35] CS: I guess you become interested. And maybe this comes from working at startups early on, where you have to do everything.
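The Kubernetes plugin for Jenkins that Carlos describes lets each build run inside a pod that is created on demand and discarded afterwards. A minimal scripted-pipeline sketch of that idea (the image name and build command are illustrative assumptions, not from the conversation):

```groovy
// Hypothetical Jenkinsfile using the Jenkins Kubernetes plugin.
// podTemplate provisions a pod in the cluster; POD_LABEL is provided by the plugin.
podTemplate(containers: [
    containerTemplate(
        name: 'maven',
        image: 'maven:3.9-eclipse-temurin-17',  // illustrative build image
        command: 'sleep',
        args: 'infinity')
]) {
    node(POD_LABEL) {               // the build is scheduled onto the pod
        container('maven') {
            stage('Build') {
                checkout scm
                sh 'mvn -B verify'  // runs inside the container, not on the controller
            }
        }
    }
}
```

The pod exists only for the duration of the build, which is what makes this model scale out so naturally compared to static VM agents.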
I was doing UI. I was doing JavaScript on the front-end. I was doing everything that needed to be done, right? But yeah, I kind of evolved into a more DevOps role where, yes, I'm building something, but I also need to run it. And I want to see how to run it. I need to make sure that this is running fine. I need to deal with the kernel, the Linux kernel. I need to deal with containers and so on.

[0:09:07] JMC: Would you say that, apart from your own interest, your own training in computer science, or not, or engineering, or whatever, would you say that this has only been possible because of open source? Your expertise in these fields?

[0:09:22] CS: Yeah, I would say so. Yeah, definitely. Open source has been a huge plus for me. And from getting involved and getting into an international audience or group of people, that was all due to open source and participating in the ASF, and the Eclipse Foundation, and other open-source communities, up until now with the tools that we use on a day-to-day basis. Yeah.

[0:09:48] JMC: How did you know that containerization was going to be such a huge thing? How did you know that Jenkins – you haven't said so. But how did you get involved in such eventually successful products, or projects rather? How did you have that sense of opportunity? Do you recall being there for a reason? Or was it just absolute sheer luck? Or was it a balance of both things?

[0:10:13] CS: I guess it's easier to see in hindsight. But I made bad decisions before. Yeah, I guess in the startup world, where I was part of several startups in California, the first bet was big on Maven and continuous integration with another project at the ASF called Continuum. That was too early. The CI tool is not even – I think it's still there, but nobody uses it. Or just a few people in the world use it.

From then on, DevOps. Yes, we saw DevOps was a great idea, but it was too early again. It was 2009, 10. Nobody cared about DevOps.
People would think that we were a bit crazy trying to push to bridge both worlds. And at some point I got lucky, I guess.

But yeah, containerization, I think it was – for me it was a good idea from the very beginning. You go and you say, "Okay. This is good. This is going to help." Because we've been through VMs before. We've been through cloud before, where I heard people saying cloud is never going to take off. Same as you would hear people saying the iPhone is never going to take off. Because people want keyboards, right?

And you saw that and you think, "No. This is good." And containerization was obviously good because it allowed you to do more things simpler, faster. And then I think the step from containerization to a cluster, the type of thing Kubernetes does, was obvious. The only concern is which technology is going to win. You have Mesos. You have Kubernetes. Then you have Docker Swarm. It's not a matter of if the technology is going to succeed. It's a matter of which project is going to do it. Yeah.

[0:12:12] JMC: Did you know that Kubernetes was going to be – or were you just lucky to be involved in it? Because you said it was really painful. And it still is, right?

[0:12:20] CS: Yes. Yes. Yeah, it was really, really painful. I mean, when I started working with it, it was basically, "Oh, something is broken. I'll just delete everything and start from scratch again. Because I don't know how to fix it." And it also depends on the skill sets.

Early on, Kubernetes – I mean, Kubernetes itself, and even today, is more of, maybe, ops. Especially early on, it was more of an ops project where you needed to know about memory, kernels, cgroups, this and that. That was more challenging for somebody like me.

But when I joined CloudBees – that was the company I was working at before joining Adobe – we were building a cloud-based enterprise solution built on top of containers.
And the first choice, when I joined, people had decided this was going to be Mesos. And then after that, we built the Kubernetes version. And I think that's the one that stuck. Right?

But yeah, early on, people were betting on Mesos a lot because it was the more mature technology. Mesos had been around for years. It had a very good, sizable deployment base, cluster sizes and so on. I don't see that as a mistake, but more of a learning curve where you say, "Okay, there's a technology that is mature today, which is called Mesos. There's a technology that is very promising, but it's not quite ready today, which is Kubernetes." I think it's more of a business decision. Or, where do you want to start?

[0:13:51] JMC: Was Kubernetes at that stage built on Go only? The Go programming language?

[0:13:57] CS: I think it was. Yeah.

[0:13:59] JMC: And was that not a problem for you? Or actually, was it an opportunity to learn about Go? Or did you not require to know it completely?

[0:14:06] CS: No. I didn't need to. I was not contributing to it. I was just trying to use it and build things on top of it. I remember that the deployment scripts were shell scripts and Salt. This was like a Puppet thing, but different. And so, there was all this – because back in the day, for people that are old enough, they will remember, there were things like Puppet, Salt. And there was another one that I forgot. But basically, how to deploy or install packages across a fleet of machines. And that's a technology that has kind of gone away today.

[0:14:44] JMC: Is that Chef? Were you thinking about Chef?

[0:14:45] CS: Chef. Yes. Exactly. Chef. Yeah.

[0:14:48] JMC: Well, Chef is still used by Facebook and others. But yeah, yeah. I think, in a way, they're dwindling. I don't know. I wish the best to those projects and those communities. Don't get me wrong.

[0:14:59] CS: Maybe there's a curve of – maybe they're now in the plateau where people are using it. It's fine. It's great.
And there's no big fuss about it.

[0:15:07] JMC: Correct. Yeah. Like many other technologies out there. Like OpenStack – I always get reminded every now and then that it's strongly used. It's not growing. But most of the user base is really happy with it. If you find a solution to your problem with OpenStack, just go ahead, and you surely will do the same with Chef, and Puppet, and others. Yeah.

Tell us about AEM then. When you got hired there, what problem, what situation did you join? What was the main goal of the project, or goals of the project, and your charter?

[0:15:42] CS: Yeah. When I joined Adobe, the project was building a cloud service out of AEM. AEM had been widely used for many years, on-premise, managed services. People would run it in their data centers across multiple nodes. It was already distributed in the sense that you could have multiple machines delivering this content. And the challenge was, how do we run this for customers? How do we run this on the cloud for them so they don't have to worry about operational stability, performance, things like that? How can we make it like a SaaS product that they don't have to worry about, so they can just simplify their usage?

[0:16:30] JMC: Was the cloud provider for that project already chosen? Or was it part of your charter to pick –

[0:16:36] CS: Yes.

[0:16:36] JMC: Okay. Okay.

[0:16:38] CS: Yeah. The project was already started and the technology chosen was Kubernetes. We run Kubernetes. We have now probably close to 40 Kubernetes clusters in the world that we run AEM on.

There are some interesting challenges here. Because being a content management system, you want the content to be close to your customers. So it would run across the whole world. We have multiple regions. We add regions as soon as they are available. Sometimes working in combination with a cloud provider – they tell us they are preparing this region.
And we tell them we need all this capacity in that region as soon as it's ready to go.

Same thing with Arm nodes, running on Arm CPUs. That's one of the latest projects we've been working on. Yeah, the scale and how to – I think that's one of the benefits of Kubernetes, how easy it is to do a lift and shift to the cloud, or to Kubernetes, containers. Taking something that is already running, putting it in containers, and just running it in the cloud.

[0:17:45] JMC: How did that go then? Because I'm sure that when you joined, it was not running on Kubernetes. What changes did you start bringing to fruition?

[0:17:56] CS: When I joined the project, it was already going and Kubernetes was being used. It was not live yet. It was one year after I joined that the project was live and GA. The challenge is how to – yeah, growing the number of customers. How to deal with multiple clusters across the whole world is also a good challenge. Because with Kubernetes, I think there are two trends. There are people trying to do very big clusters. Like people claiming 5,000 nodes, or 7,000 nodes, or something like that. And on the other hand, I think it's becoming more popular to say, "Okay, we don't want to deal with the scale issues in one cluster. How can I easily run tens of clusters, or 100 clusters, or something like that?"

And this allows you to not have to spend so much time on getting ready to scale. And it also limits the surface of problems, the blast radius. If one cluster goes down, what happens? And then a lot of people are focusing on how can I manage multiple clusters easily, instead of trying to do a very big one.

[0:19:12] JMC: How does that work? How do you manage a myriad of clusters as opposed to a huge one? What are the intrinsic problems of that, of a huge fleet?

[0:19:21] CS: Yeah. We deal with – we try to treat them as cattle. Not a pets type of thing. But still, that's not trivial.
We still have some dependencies that we would like to get rid of on specific clusters. But the whole premise is, when we think we've reached the limit of one cluster in one region, we create a new one. We start onboarding customer environments on that one. That one is kind of static – well, it's not static in that sense, because we use autoscaling. We still deploy operators and other things. But we can limit the growth on specific clusters when we think one is close to its peak.

And then we kind of follow a templating pattern where it's very easy for us to add new clusters. We just take the templates and apply them to the new clusters. And we have like a queue of clusters that we can onboard customers onto.

[0:20:25] JMC: What do you mean by a template? I mean, is this sort of like a GitOps declarative definition of what a cluster looks like? Is it in a YAML format? In a data format? Is it stored in Git, and you use something to look that up, and spin up the cluster, and avoid any type of drift from its origin, from the declared state? How does that work? Can you explain the template feature?

[0:20:48] CS: Yeah. We have a set of operators that we need to run on each cluster. When we get a new cluster, we create those operators. Then this is all GitOps. The definitions are in Git. We just have to create a new namespace in the new cluster. Those get automatically provisioned from Git. And then the cluster is ready for onboarding of customer environments.

One specific thing that we do at Adobe, we have a team, which is kind of a platform team, that creates the clusters for us. That has its pros and its cons. Hopefully, more pros than cons. They build the clusters for us. They have a lot of expertise in Kubernetes. They go and get the VMs from the provider. And they build the whole cluster for us. And they also run it. And they're on call for the cluster layer. And we take that cluster and then we start installing operators and other things on top.
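One common way to implement the GitOps templating pattern described above is with an Argo CD ApplicationSet. This is a sketch under stated assumptions: the interview says "GitOps" without naming a tool, so Argo CD, the repository URL, and the paths here are all illustrative, not Adobe's actual setup.

```yaml
# Hypothetical sketch: an Argo CD ApplicationSet stamps the same operator
# definitions from Git onto every cluster registered with the control plane,
# which matches the "apply the template to new clusters" idea.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-operators
spec:
  generators:
    - clusters: {}                  # one Application per registered cluster
  template:
    metadata:
      name: 'operators-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/cluster-template.git  # placeholder
        targetRevision: main
        path: operators
      destination:
        server: '{{server}}'
        namespace: operators
      syncPolicy:
        automated:
          selfHeal: true            # drift from the declared state is reverted
        syncOptions:
          - CreateNamespace=true
```

Registering a new cluster is then enough to have it provisioned from Git, and `selfHeal` gives the drift protection the question asks about.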
[0:21:50] JMC: What did the release process look like when you landed there, even before, for the pre-cloud AEM? And what does it look like now? What are the main differences in release cycle? Maybe team topologies. Did that platform team exist before? Or was it chartered differently? What did it look like before? And what is the release today?

[0:22:14] CS: Yeah. AEM before was typical, I guess, downloadable software where there were one or two releases per year. I don't know exactly. But it was one or two. And then what we switched to on the SaaS, or on the cloud service part, is that some parts of the system are continuously being deployed. There's no action of deploying something. You just commit things to Git and they get deployed. And this is all the GitOps model.

Some other parts are still on a monthly cadence or so, just because there are dependencies on the customer side. Where, obviously, we don't want to change APIs or things that customers need to be aware of. Those still have more of a schedule. But most of the things that run on the cluster are just being deployed multiple times per day.

[0:23:16] JMC: Wow. That's so modern. I love it. But yeah, you've talked about the extensibility, right? Or how your clients are extending it. Is that the bit that you have to be careful with? Like not to deploy breaking changes to those extensions that your clients have put in place?

[0:23:34] CS: Yeah. There are a lot of developers that build on AEM and have expertise. This is something I wasn't aware of before I joined. There are a lot of people writing extensions. We have to keep this layer of compatibility with our layer of APIs and make sure that we are not breaking something in these extensions.

The other particular case is that we're running these customers' code in our systems. We are taking these extensions, packaging them up as part of AEM and then running this. This also has challenges, as sometimes this code is running in the same JVM.
So it's not clear what the API is in some cases. Because you may have access to a bunch of classes in Java.

Maybe a simple change somewhere is having a side effect that is breaking somebody. There are some tests happening for each customer before upgrading them on the Java application part. These tests are run and we verify – is this a problem with this specific customer? And this is before getting the release out. We check, is this a problem with this specific customer? Is this a widespread problem? Is it in the core? Or what's happening?

We're also taking some matters further. Trying to do more progressive delivery. I was advocating for progressive delivery for a long time. And now we are working on using Argo Rollouts. Another open-source project that is very popular. Trying to make it even safer for us to deploy changes.

[0:25:17] JMC: But wait. You described it before as GitOps releases, in a way. You didn't say so. I'm paraphrasing you. But there are multiple updates, and rollouts, and deployments per day. This to me – and especially if you use canary, or blue-green, or just subsets of users that test out a specific feature, or a specific improvement, or whatever. And then depending on the results of the experiment, then roll it out or not. That sounds to me like progressive delivery. What did you have in mind when you said you wanted to propose it? I mean, it seems to me from what you say that you were already doing it.

[0:25:56] CS: Yeah. No. We definitely were rolling out changes in different, what we call, groups. Internal customers first, and typical scenarios where you don't want to do a big bang across everybody. We want to do it even more granularly.

Even more specifically, of the features that Argo Rollouts provides, the interesting one that I'm most fond of is automatic rollbacks. Where you deploy something, it will automatically check, and you can define – I don't know. You could say, "Oh, the number of 500 errors has gone up. Just roll it back."
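The automatic rollback Carlos describes can be expressed in Argo Rollouts roughly like this. A sketch only: the service name, image, Prometheus address, query, and threshold are invented for illustration. A canary step runs an analysis, and a failed measurement aborts the rollout and shifts traffic back to the stable version.

```yaml
# Hypothetical sketch of an Argo Rollouts canary with automated rollback
# on an elevated HTTP 500 rate. All names and thresholds are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: aem-service                 # placeholder name
spec:
  replicas: 5
  selector:
    matchLabels: {app: aem-service}
  template:
    metadata:
      labels: {app: aem-service}
    spec:
      containers:
        - name: app
          image: registry.example.com/aem-service:latest  # placeholder image
  strategy:
    canary:
      steps:
        - setWeight: 10             # send 10% of traffic to the new version
        - analysis:
            templates:
              - templateName: http-500-rate
        - setWeight: 50
        - pause: {duration: 10m}
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: http-500-rate
spec:
  metrics:
    - name: error-rate
      interval: 1m
      failureLimit: 1               # one bad measurement aborts and rolls back
      successCondition: result[0] < 0.01
      provider:
        prometheus:
          address: http://prometheus:9090
          query: sum(rate(http_requests_total{code=~"5.."}[5m]))
```

The `successCondition` is the machine-readable version of "define what success is" that comes up next in the conversation.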
[0:26:38] JMC: You define success. And if the criteria for success are not met, boom. It goes back to the previous state.

[0:26:46] CS: Exactly. Yeah.

[0:26:47] JMC: Yeah, that's brilliant.

[0:26:48] CS: And it's all automated. You don't have to do anything manual.

[0:26:51] JMC: I guess you need to work with product a lot there to define what success is. Especially for new features. Right? If it's performance and other things, I think for engineers it's mostly clear. It's like, "We want to –" lag, delays here. Get rid of them. But if it's a new feature, then what success looks like is something that needs to be arbitrarily agreed with product, right?

[0:27:17] CS: Yeah. We have multiple teams working on the product. We have metrics. We have alerts. We have the dashboards. And we have people on call across the whole org. It's all already kind of defined, what is a problem, how can we respond quickly to it. This is taking it a step further in order to, one, automate it in a way that, okay, there's a problem, let's automatically roll it back. Or, two, let's start doing things like traffic switching. Or let's start sending some traffic to the new version, not to the old version. There's a bunch of possibilities there.

[0:27:57] JMC: How did the – I'm going to call it team topology, right? It's just a reference to a fantastic book that came out, I think, three years ago, that everyone, I think, should read if you're interested in modern software delivery and the structure of teams that powers that. How did that change since you joined Adobe four years ago? Has it changed? And what new teams have come into play? You've mentioned the platform team. Maybe it was not new. But what has changed in the last few years in terms only of team structure, and coordination, and why the relationships between teams exist?

[0:28:30] CS: Yeah. In a big corporation, there are always going to be dependencies across teams. And the platform team is doing more and more, I think.
In the last year, two years probably, platform teams have also become very popular. And there's this boom on the platform team side. And I agree with a lot of the tenets there, where not everybody in the organization needs to know how to deploy something, needs to know how to run something in production.

How long it takes – and this is a key metric – how long it takes for a new person that joins the company to be able to deploy something to production? That for me is key.

[0:29:15] JMC: Do you remember how long it took you to deploy something to production when you joined?

[0:29:20] CS: Yeah. I don't know exactly. Because we were not GA back then. But I think, for me, it took a while to grasp how big the organization was and how big the product was and so on. I'm happy to say that we have people recently joining where they are deploying things to production in – I don't know. Less than a month. Obviously, that could be improved. But probably in the range of two weeks. It depends – there's a lot of onboarding that needs to be done.

[0:29:50] JMC: Yeah. Exactly. I mean, I'm not sure I want to reduce that time too much. Because I think there's a minimum onboarding time to just get – regardless of your seniority. If you've got 20 years of experience, you probably get familiar with a codebase, and the coding guidelines, and the production guidelines, and other things fairly easily. But I wouldn't want anyone to be releasing anything to production week one or week two, to be honest.

[0:30:16] CS: Yeah. I guess it depends on whether you have the guardrails in place or not. And if you're pushing something, it's going to get a pull request review. And then somebody – I mean, if some tests are running and everything looks fine, then go ahead. There's no problem with it.

[0:30:33] JMC: Have those guardrails changed over time? Is it now simpler to deploy a new feature or an update to AEM than it was when you joined?
[0:30:43] CS: Yeah, it is. As part of this kind of progressive delivery, fairly soon I think we realized that we cannot make these GitOps changes everywhere at once, across all the clusters, all the namespaces. Tens of thousands of namespaces.

In our GitOps tooling, we set up a way where you could target some specific environments. Not just stage and production, but also say, I want this to roll out to 1% of our namespaces, or two, or whatever number. Now we have this capability. It's a matter of making it easier for people to do it. That's one of the things where we didn't get there yet. But yeah, the capability is there.

[0:31:32] JMC: When you joined, what was one of your, say, immediate – what did you think was going to become a problem and eventually did not become a problem? It's like when you joined, it's like, "Oh, I know that these – let's call them cloud migration projects – usually face this problem." But then it turned out that this didn't happen there. What was initially a problem that didn't turn out to be? Did you run into anything like this? Was it smoother than you planned for?

[0:32:02] CS: Yes. I guess I was a bit skeptical, or a bit worried, about whether people, or customers, would see moving to the cloud as a problem. That's a typical case, right? Moving to the cloud is sometimes seen as, I don't want to do it. Or, why would I do it? And so on. But that went pretty well. There was a huge response there.

The scale of clusters, or how quickly we were growing – that was also worrisome at some points. Well, we need to grow, grow, grow. Are we going to hit problems? And obviously, we did hit some problems. But overall it was not such a big deal. And there were things that I thought would be easier than they were. Because this is a product built mostly in Java. Or actually, the product that was on-prem was Java. [inaudible 0:33:02] and all these components.
There are a lot of people that have the Java knowledge, but they don't have the Kubernetes or more operational knowledge. And with the promise that Java can run anywhere, you would expect things to go smoother. But then you have to deal with memory constraints, and CPU problems, and CPU throttling and so on. Things that Java has improved in the last years.

But also, five years ago, the defaults in Java and the JVM were still not good for containerized Java workloads. That was also a challenge, figuring out why is this failing? Why is my process getting killed? Why am I running out of memory? Why is my process not responding? All that stuff.

[0:33:56] JMC: Do you see Arm architectures as the best ones out there for AEM specifically? Or in general, for Java applications running in the cloud?

[0:34:05] CS: Yeah. Arm has a lot of advantages. One is the cost. And the second is the performance. You have better performance for less cost. If you have to rearchitect your application to run on Arm, then yeah, okay. You have to balance how much you're going to get out of it against how much you need to put in to get there.

For Java, this is easier than anywhere else, I think. Because you just change your Docker image. You pull a Docker image that is built for Arm and hopefully your containers are just going to run fine. The JVM, the Java Virtual Machine, abstracts all that for you. That's easy. It's just a matter of being able to rebuild everything for Arm.

And not just Arm. Because one of the problems with Arm today is availability. For the services that we are switching to Arm, we are not just switching them to Arm. We are building them as multi-architecture container images. We need them to run on Arm and Intel for the time being, because there are regions that don't have Arm availability. There are some regions that have just a bit of it.

[0:35:22] JMC: And how do you go about trusting the provider of the image?
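The JVM-defaults problem Carlos describes is easy to observe from inside a container. Older JVMs sized the heap and thread pools from the host machine rather than the cgroup limits, which is one way processes ended up OOM-killed; modern JDKs enable `-XX:+UseContainerSupport` by default and `-XX:MaxRAMPercentage` tunes how much of the container memory the heap may use. A small probe, as a sketch, for checking what the runtime actually sees:

```java
// Prints the resource limits the JVM believes it has. Run it inside a
// container (e.g. "docker run --memory=512m --cpus=2 ...") to verify that
// the cgroup limits, not the host's resources, are being picked up.
public class ContainerLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);
        int cpus = rt.availableProcessors();
        System.out.println("Max heap (MB):  " + maxHeapMb);
        System.out.println("Available CPUs: " + cpus);
    }
}
```

If the printed numbers track the host rather than the container limits, the workload is running with the pre-container-support behavior described above.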
Do you just go to – I presume you don't do this. But what is the sort of vetting process? One says, "Okay, I need to deploy my Java application in a container that runs on Arm. What do I do? Just go to Docker Hub and see if I can find such a thing?" Or how do you go about those things?

[0:35:45] CS: That's one way. I mean, there are official images on Docker Hub that are built by Docker, Inc. That's one possibility. We also have internal images at Adobe that this platform team builds for everybody. And for popular languages, you have base images. Yeah, you have both options there.

[0:36:10] JMC: Okay. Oh, the platform team also builds images. Fantastic. Wow.

[0:36:13] CS: Yeah. They do a lot of things.

[0:36:14] JMC: Okay. So what is the –

[0:36:16] CS: And they build them well.

[0:36:18] JMC: What is the state of the project then? And what is the next step for AEM running in the cloud? You've already hinted at a few things, but I wanted to conclude this interview by giving a bit of a sneak peek of where things are going internally over there.

[0:36:35] CS: Yeah. From the technology point of view, more progressive delivery. We are trying to improve this and automate more. Always automate more and more. Dealing with the scale. How can we run in, instead of 40 clusters, 400 clusters? Things like that are what we, or I, am working on most of the time.

Obviously, cost implications. That's a big one. And the new services. Taking the new things the cloud provider is giving us. How can those fit? Or what do the new features allow us to do? To provide some new features to customers without having to write them ourselves. Yeah. Overall, there are also some crazy ideas of, okay, what if we take this? I don't know. What if we run this on the edge? What if we run this thing and allow it to scale down to zero? All sorts of new ideas that can fit.

[0:37:40] JMC: I'm wondering.
And I'm not asking about AEM's or any of Adobe's portfolio, roadmap or future announcements. But I'm wondering, are you getting requests to support future Gen AI features in AEM? I guess I'll turn the question around and conclude the interview with this one. In the hypothetical case that AEM implements Gen AI features in the near future, would that require important changes to what you just described? To the architecture? To the way you deploy? To the way you provision? Do you foresee any strong changes? Or is what you just described resilient and flexible enough to at least accommodate the beginning of that, say, Gen AI next generation for AEM?

[0:38:31] CS: I think there are two parts. One is how can customers use Gen AI with our products? Obviously, Gen AI is everywhere now. Everybody's talking about it. And then there's how Gen AI is helping internally: how do we use Gen AI in development to, I don't know, move faster? Provide new features faster and better?

I mean, you have examples out there like GitHub Copilot, which has been around for a while now. I think there are these two dimensions to it. And probably in any technology shop, you're going to see both of those, right? Customers demanding some Gen AI feature. And on the other hand, even if my product is not a candidate to use Gen AI, how can I, as an engineer, use Gen AI to move faster? Not break so many things? And provide the value –

[0:39:34] JMC: And so far, have you been able to use Copilot or any other Gen AI product for software development or delivery successfully? Because these vendors claim that junior developers will be able to onboard projects and potentially release to production earlier than they used to before these things existed. But what about an experienced developer like you? Have you been able to get any value from these?
You don't need to mention any particular product or any particular real use case. Have you been playing around with it? And do you see any value in it so far, as it is now?

[0:40:12] CS: Yeah. Definitely. Definitely. There's definitely been value there for over a year or two now already. I think one is the use case of not going to Stack Overflow and back, right? It saves you the round trip of having to go and search for something, where you can just see it in your editor or something like that. And on the other hand, it's definitely helpful with all –

[0:40:40] JMC: By the way, I should say, because it's relevant to this conversation, there is an interview in this same podcast by my colleague, Sean Falconer I believe, with the Stack Overflow team, who have reacted to this and are planning to release products that embed the experience of looking up questions and answers on Stack Overflow. But instead of doing so on their own website, it's embedded in the IDE, where you can retrieve it through a chat functionality, a code completion-like thing. I think it's in a very experimental phase, if it's even been released. But anyone interested in OverflowAI, which I think is the name of the portfolio they plan to release, please go ahead and listen to that interview.

But as you say, one of the goals they have in mind is that, "Hey, developers do see a strong context switch in going to Stack Overflow." Because it's not in your IDE, and that's where you are focused. Concentrated. They see the value in taking Stack Overflow's knowledge base into the, let's say, deep work experience. Sorry. I interrupted you.

[0:41:47] CS: No. No. That's great. I mean, I think AI is this fundamental shift in how people are – engineers are going to work. And Stack Overflow is a clear example of how they need to adapt to it.
To what you said, I think years ago there was already a project that, whenever you got an exception in Java, would go to Stack Overflow and paste a link to the most relevant result into the stack trace. That was already, "Hey, let me help you so you don't have to search. Here, this is the stack trace. This is the link with the most likely answer." That was like an early AI sort of thing. And this was years ago. And it was great. I mean, it was funny, but it was also good.

For me, the other point where it's very useful is boilerplate. It's amazing how much boilerplate you have to write on a day-to-day basis, and AI is there helping you. I mean, that's one of the main uses: repeatable content generation. It's widespread everywhere.

Now the question, I guess, that we need to ask ourselves is why do we need this boilerplate in the first place, right? If I'm just going to have AI fill this in for me, what's the point?

[0:43:09] JMC: That is true. I think that's the reason why frameworks exist, right? In a way. To cover that to a certain extent. Another thing that senior developers like you say is valuable from Gen AI is that it allows them to explore. Sort of like using Gen AI as a sounding board before they actually even prototype anything. Rapid prototyping, but even pre-prototyping. But yeah, it's still in its early stages. Let's see how that goes. Anyway, this conversation was not supposed to be about Gen AI. We could talk about –

[0:43:39] CS: Everything is going to be about Gen AI.

[0:43:40] JMC: Exactly. I wanted to focus on your experience taking AEM to the cloud. And by the way, you gave a talk at Linux Foundation's Open Source Summit Europe 2023 in Bilbao, Spain, which I will link in the show notes for anyone who wants to deep dive into the process. We just touched the surface of it here.

And yeah, Carlos, thanks for being with us. And looking forward to speaking to you again.

[0:44:05] CS: Thank you very much for having me.

[END]