EPISODE 1764 [INTRODUCTION] [0:00:00] ANNOUNCER: Boston Dynamics is a robotics company known for creating advanced robots with highly dynamic movement and agility, designed to navigate complex environments. Their robots, such as the quadruped Spot and the humanoid Atlas, have applications in industries ranging from logistics to public safety. They also garner widespread attention with their impressive videos showcasing robots performing complex tasks with precision. Matthew Malchano is Boston Dynamics' Vice President of Software. For more than 20 years, Matt has been a technical contributor and leader on robotics projects such as Spot, BigDog, LS3, and Sand Flea. He has led efforts in areas including software, product, and robotics autonomy, perception, and control. Matt joins the podcast with Sean Falconer to talk about his wide-ranging work at Boston Dynamics. This episode is hosted by Sean Falconer. Check the show notes for more information on Sean's work and where to find him. [INTERVIEW] [0:01:08] SF: Matt, welcome to the show. [0:01:10] MM: Hey, Sean. Thanks very much. I'm happy to be here. [0:01:13] SF: Yeah. Thanks for being here. I'm pretty excited about this. We don't have that many shows that touch on robotics. A couple. But I feel like for a lot of people who end up studying engineering or even becoming software engineers, where they start is some interest in robotics. I'm kind of curious, where did your fascination with robotics begin? [0:01:34] MM: Well, when I was young, actually. Essentially, in elementary school, I just became very excited about robots. And I remember telling my parents, "Oh, I'm going to grow up and be either a detective or a robotics person." And I happened to luck out. I went the standard computer science track all throughout my education. And then I decided, when I was going into my master's, that I wanted to shift gears and learn about an applied area. Learn how to apply software to robotics. And I shifted gears. Did work in that area. And then graduated and was lucky enough to get a job at Boston Dynamics right when we started building robots at the beginning of sort of the BigDog era, and got to live out that childhood dream of mine. [0:02:18] SF: That's interesting. How do you think about building software for robotics, something that is going to exist essentially in a physical capacity and maybe even interact with people physically? How is it different than developing software that we think of as only living in sort of a digital sense? [0:02:36] MM: Yeah. It's interesting. In many ways, it's the same. We build all the same kinds of software that I think people who work all across the software industry do. We're making web pages and continuous integration systems. We're building up servers and clients. We're building APIs. All that's a part of building product robots. One of the things that separates, I think, robotics from just general software is that you run on a computer that might not be there some of the time. And so, you spend a lot of time dealing with the availability of hardware, the costs of scaling that hardware. A robot is a thing that can break, particularly when it's in its research era. And so, you'll get a chance to run your algorithm and then you'll go, "Oh. Whoops. The robot's gone for the next day. I've got to go back to the drawing board to work on that."
And so, you spend a lot of time trying to figure out ways to be very efficient about how you run software on the robot to maximize what you learn every time you do an experiment. And a lot of the work is an experiment, in a way that all software work is an experiment. But software is blessed by being very repeatable, typically. If you build up a system where you control all the inputs to the system, then you should get the same answer out every time you use it. It's deterministic. Robotics isn't necessarily that way. You go run on the robot. And the second time, you have slightly different noise on all the sensors, slightly different behavior out of the robot. Maybe someone pushes it. And all these things are things that the system has to respond to but you aren't guaranteed to see every time. And so, it's got this extra level of non-determinism in it. It's got this extra level of logistical complication. But at the end of the day, a lot of the software feels very similar. There are some robotics-specific software areas that people work in. You're going to go off and learn about doing real-time software if you work in robotics. Or you're going to work on firmware if you're working close to the hardware. But all of those are also sort of very natural, I think, for a lot of software engineers. [0:04:26] SF: Yeah. Absolutely. I mean, it's interesting. In terms of testing, even though Boston Dynamics probably has access to more robots than most companies, you still have a limited number of resources. Presumably, everyone's sort of competing for access to the same limited set of robots to be able to test something once they get beyond the simulation phase to testing it on physical hardware. [0:04:49] MM: Yeah. Absolutely. It depends on what level of maturity your project is in. The Spot project, for example, runs 100 robots internally. It has a tremendous amount of QA hours that get done every week on those robots. That gives us the ability to be really sure that that system is going to work when we ship a release. And it gives us a lot of coverage just day-to-day. You get super-fast feedback about whether the changes you've made are making the robot better or worse in the environment that we're running it in. But in the early days of that project, we only had a few robots that we were able to use to build the product robot out of. And then you have to sit there and work very carefully to figure out who's going to run what. And is the software buggy? Or is it the hardware that's broken? Lots of very difficult questions like that. [0:05:34] SF: Yeah. Absolutely. You work for, as we mentioned, Boston Dynamics. Easily one of the most famous, if not the most famous, advanced robotics companies in the world. And I'm sure you get this kind of question all the time. But why build robots that can actually walk? Why make humanoid robots? Robots that can do backflips? What is the reason for doing these kinds of super-dynamic behaviors with a robot? [0:06:00] MM: One of the sort of obvious things is that robots that walk just have a lot of advantages as a product over robots that don't walk. They're able to step on things, step over things. They're able to move through thin gaps. They can climb stairs. They can reposition themselves in all sorts of ways. They're able to control where their body is oriented, which is really important for manipulation or for doing inspection with the robot. All key uses that we've put Spot to. Also true for humanoid robots.
They're able to reconfigure themselves and do things that humans do, like reach down or access low areas. And so much of our environment that we built, both indoors and outdoors, is focused on being able to do things like that. Not all of it, obviously. And many people are differently abled. But at the end of the day, it really is an advantage for a product to be able to walk around. And we've discovered that it's let us gain access to markets where previously people have not been able to succeed with wheeled robots. And I think that's been really interesting. The other thing that's interesting about legged locomotion is nature just provides an immense set of examples of the potential of what you can do with legs. From speed, to mountain goats climbing mountains, to just the kinds of brachiating motion that monkeys do. You never think to yourself, "Wow. I've solved this problem completely." Because there's always some example of that form. And so, I think those are two sort of key advantages that are just very practical for us to focus on. Beyond that, I think that the Holy Grail of robotics is general-purpose robotics. Building machines that are able to cohabitate with human beings and do really meaningful work for people to improve and enrich their lives. And so, I think that continuing to focus on building really capable robots, which I think legged robots are, for example, is just really worth the effort. [0:07:47] SF: Yeah. I think it's an interesting point. Especially indoor environments. Physical environments are designed by humans. And we design a lot of environments based on how we're going to access them. And that's us walking around these different places. And even in nature, a lot of, at least, the large mammals that are taking up space are walking. And that's their form of locomotion, essentially. It makes sense that a lot of the world is basically going to be set up for success when you can move in that capacity. What is kind of hard, though, about designing a robot that can walk naturally? That mimics essentially the walking behavior of either an animal or a person? [0:08:29] MM: In some sense, we like to think that we've largely solved mobility at Boston Dynamics. We think that our walking behaviors are working really well right now. And if you look kind of at our focuses, we now spend just as much time on questions like manipulation, autonomy, navigation, and how to use AI on our platforms. In some sense, we don't think of mobility, like legged mobility, as that hard. However, we put tons of effort into getting here. And there's a lot of nuances in how you build a system that can do that. And there's always more to do. A great example for us was we have customers that have slippery facilities. Facilities with wet floors that are concrete. Those are incredibly hard to walk on. One of the things that we did was use reinforcement learning to build new capabilities into Spot so it was able to walk on those slippery floors much better than it could before. And probably better than I could walk on those floors right now. And that's us adding more capabilities into what we have. But we feel really good about the performance of the legged mobility that our platforms have right now. It's been less of a focus for us. And we're now looking much more towards how to use that mobility to perform work with manipulation, or to let our robots go and do a set of inspections across a building. Things of that form.
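[EDITOR'S NOTE] Boston Dynamics hasn't published the details of the slippery-floor training mentioned above, but as a rough sketch of what "using reinforcement learning to build a capability" means, here is a toy Python example. The environment, the linear policy, and the random-search update are all illustrative stand-ins, not Boston Dynamics code: a real locomotion setup would use a physics simulator, a neural-network policy, and a gradient-based RL algorithm.

import numpy as np

# Toy stand-in for a physics simulator: state is (body tilt, tilt rate),
# action is a corrective torque. A real locomotion environment would expose
# joint positions, velocities, and contact states instead.
class ToyBalanceEnv:
    def reset(self):
        self.state = np.array([0.1, 0.0])  # start with a small tilt
        return self.state

    def step(self, action):
        tilt, rate = self.state
        rate += 0.02 * (9.8 * tilt - action)  # crude inverted-pendulum dynamics
        tilt += 0.02 * rate
        self.state = np.array([tilt, rate])
        reward = -(tilt ** 2)                 # reward staying upright
        done = abs(tilt) > 0.5                # episode ends if the robot "falls"
        return self.state, reward, done

def rollout(env, policy_params, steps=200):
    """Run one episode with a linear policy and return the total reward."""
    state, total = env.reset(), 0.0
    for _ in range(steps):
        action = float(policy_params @ state)  # policy: action = gains * state
        state, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total

# Simple random-search "training": perturb the parameters, keep improvements.
env = ToyBalanceEnv()
params = np.zeros(2)
best = rollout(env, params)
for _ in range(500):
    candidate = params + 0.5 * np.random.randn(2)
    score = rollout(env, candidate)
    if score > best:
        params, best = candidate, score
print("learned gains:", params, "episode reward:", best)

The shape is the point: a policy maps sensed state to an action, a reward encodes the goal (here, staying upright), and training is a search for policy parameters that score well across many simulated trials. [END NOTE]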
[0:09:42] SF: And then is there a difference in terms of solving that problem for bipedal locomotion? A human, basically, versus an animal. Spot is obviously modeled after a dog. Are there different challenges with that kind of motion? And is one easier than the other? [0:10:00] MM: Yeah. It's an interesting question. There are and there aren't. The nature of the motion is different in that you have four legs or two legs. But the same questions apply. How do you balance the system with the legs that are touching the ground? How do you move the legs that aren't touching the ground to smart places? What are smart places to put them? How do you perceive the ground around the robot so that you're able to know where you can and can't step? All of those are very similar problems. And if you go look at the legged locomotion literature way back, people generalize and say, "Oh, animals that walk on four legs are actually sort of walking on one virtual leg." And so, there's a way that they're all kind of equivalent. That said, you have more legs to manage with a four-legged platform. But you also have more points of contact, so it's easier to sort of stand and balance. We use very similar software across all of our platforms. I would say 80% of our software stacks are the same across Boston Dynamics robots. But we do solve the locomotion problems differently. And, particularly, when you start getting into questions of doing backflips, for example, those end up being problems that require us to use a bunch of forward-thinking that extends kind of further and has a different precision than the kind of forward-thinking that we use inside of the quadruped platforms. When I say forward-thinking here, I mean modeling and predicting where the robot will go and the results of actions that it takes. And so, there are differences. But lots and lots of it is very similar. And a person can move pretty easily between working on those platforms without needing to retrain, for example. [0:11:32] SF: Yeah. You said 80%, I believe, of the software is sort of common across the different robots. When you sort of crack the code on one type of skill, how transferable is that knowledge? Once you've nailed down walking, for example, does that help you figure out how to, say, pick something up and throw it, or some other type of movement like that? [0:11:54] MM: Yeah. That's a great question. And the answer is it does. There's two ways that it helps you, really. One way is robotics is a pyramid. You're building things on a lower level of basics. And then you're adding more complexity and building up. And sort of at the top of this pyramid is the behavior. Something like a high-level behavior. But at the bottom is just tons and tons of software to compute kinematics, to make motor control happen. To make sure that you hit your real-time deadlines. To read sensors. To integrate IMUs. Just all this stuff. And you have to get all of that correct in order to have the thing at the top work. And so, the more you get that pyramid correct, the better off you are the next time you go and build the next behavior. Because you're like, "Oh, if I reuse all this software I've written and it's all accurate and works correctly, then I'm in a great place to sort of move on to building the next behavior." Additionally, there are some ways we build behavior where you just write the robot behavior directly and be like, "Oh, to do this, move the arm here, move the arm there."
But in other ways, you can write systems that solve problems around behavior. And those can be learned systems. Or they can be optimizers. And once you build one of those, you've sort of transformed your problem from needing to hand-code what your robot does every time to saying, "Oh, I can have this robot solve this problem for me." And so, that's where we get a lot of benefit: building some sort of solution framework and then reusing that over and over again for a set of related problems. [0:13:23] SF: Yeah. How much of the software that's running is based on sort of more of these general learning systems, versus essentially learning how to solve a problem through test and iteration, kind of like a baby learns how to walk and move around? Versus something that's sort of a hardcoded sequence that the robot's going to execute. [0:13:44] MM: Yeah. That's a great question. Reinforcement learning, which is the thing that you're describing there, this idea that you use some sort of mechanism to automatically develop what's called a policy, and then that policy is a thing that you can apply to the robot, is a huge part of robotics these days. We're doing a lot of it internally. Depending on our platforms, we do a mix of things. One of the challenges there is you really just shift a lot of the problems to how you specify the behavior or how you validate that the behavior works in different circumstances. And you still have to do all that work. And so, we continue to use a mix. Where on Spot, for example, we have a lot of sort of classical controls that we put into place, which we now augment using reinforcement learning in the space of how those classical controls work. On some of our other platforms, we're starting to look at reinforcement learning as a way to do various tasks that are relevant to manipulation. But we also mix that with other techniques as well. I think it's still hard to build a reinforcement-learned robotics behavior that essentially has everything in it that you need to productize that behavior and use it robustly with a customer who might not be a roboticist and just wants to rely on the idea that the robot will work and do what it's supposed to all the time. But it's a huge research area. We're doing a ton of it. And I know it's really big in industry right now. [0:14:59] SF: What about the difference between trying to design a robot essentially to be a general problem solver versus solving more specific tasks? Say you could design a robot that, I don't know, specifically can go into a car that's on fire, rip the door off, and save somebody, versus a robot that can be thrown into any sort of dangerous scenario where you wouldn't send a person to try to rescue. Those seem like completely different design scenarios, and you would develop the software and the hardware maybe in a different way. [0:15:29] MM: Yeah. They really are. I guess the great example of this is something like robot arms in a factory versus putting a humanoid robot in a factory. And I think there's really a continuum here. On the very specific end, robots are very successful. Robot arms work really well in manufacturing right now. However, you pay all these additional costs. You have to go and retool your factory whenever things change. Or you've got to go and move all these arms around. And so, they're good for fixed activities, for example, where you know ahead of time exactly what you're going to do.
But if that changes, and requirements always change, that's sort of a software engineering thing, then, all of a sudden, you're going to realize your fixed-purpose robot maybe can't be adapted easily to the new thing. On the other end of the spectrum is building a general-purpose robot, which of course is a huge thing to figure out, to really achieve the kind of generality that humans have for working on tasks. But I think in the middle, there's lots of places where we can build robots that are really reproducibly general-purpose and then apply them to specific tasks, achieve ROI, return on investment, on those tasks. And then use that to continue growing robotics as a business area and proving it out in more and more sort of applications as you get towards general robotics. And I think that's the tack we've really taken. Something like our Stretch robot is designed and sold to take boxes that arrive on trucks and essentially unload them onto a conveyor. However, the overall system is a general-purpose robot. And so, we think it can be used for lots of other handling activities that need to be done. Similarly, Spot is really a general-purpose quadruped platform. But we spend a lot of time trying to make it as adapted as possible for a very specific task, which is enterprise asset management. Going around and inspecting things inside of a facility and making that easy for someone to do. And so, I think that there's a continuum, and we're sort of figuring out a way to build robots that do a whole range of tasks as an approach to get to general-purpose robotics. And I think it's going to be like that for a long time. [0:17:30] SF: How many different types of robots does Boston Dynamics have? And what are they specifically designed for? [0:17:36] MM: Yeah. That's a great question. We have three main product lines right now. We have the Spot robot, which is a general-purpose quadruped. And it is primarily designed to do industrial asset inspection. You take it and you buy it. And it comes with a dock. And, essentially, you put it in your facility. And you can wake it up and it'll drive around on a schedule and take photos of things and use a variety of sensors that are payloads in order to understand whether things in your factory are breaking down. Whether you have something like an air leak or an overheating motor. Things of that form. And it gets used in a lot of industrial areas. It also has the ability to do lots of general-purpose work. You can buy other kinds of payloads. And people use it to patrol areas. They use it to work in dangerous environments. Things of that form. We also sell a robot called Stretch. Stretch essentially started selling last year. We started selling it commercially. And it is specifically designed to operate in warehouses. And it's interesting because it's not a legged robot. It's a pallet-sized base with a large robot arm. And it's able to use perception and autonomy mechanisms to understand boxes and the environment around it. And then pick up those boxes using a vacuum gripper and unload them. The initial application is unloading trucks, which is really important for logistics. There are so many of these trucks that get unloaded. But we imagine that it's going to be useful in all sorts of places in a warehouse. And then our third product line is the Electric Atlas, which we just launched. You've seen sort of our initial public views of it in some videos. And it's intended for real-world applications.
And we're planning to be testing it along with Hyundai, the company that majority-owns Boston Dynamics, for a variety of applications that are relevant to them. [0:19:17] SF: And then what is the interaction with the people involved? Say I have Spot running in my factory. How do I interact with it and kind of give it the tasks that it needs to perform? [0:19:28] MM: Yeah. That's a great question. Our primary mechanism is a product called Orbit, which we also sell. And it's a web-based experience. You log into it. And you can use it for fleet management to understand what your robots are doing. Monitor where they're moving around on a map. Essentially schedule missions for those robots and say, "At 4am, this robot wakes up and patrols its kilometer route." And then comes back and sits down. And then it presents that data up to you and manages notifications. And so, that's, we think, the dominant mode of interacting with our robots. We're also working on adding a bunch of features there that allow you to do more with the inspection data as it comes in. And then out of the box, you can actually just take a Spot robot immediately and start driving it. It ships with a tablet controller that allows you to drive it around. Similarly, Stretch is planned to be added to Orbit. It also contains its own tablet controller to allow you to move it around. It has a simplified interface that makes it easy to train individuals who work in logistics facilities on how to command the robot to do things like start an unload job or pause that unload job. Additionally, all of our robots are planned to have APIs. We have an API on Spot that allows people to build sort of an arbitrary amount of software there. And many of the behaviors that we've built also operate using that same API. For example, our navigation stack. And that is just an indicator that, "Hey, this API is quite powerful and lets you do a variety of things." We recently shipped it with an RL capability so that people are able to take our robots and build their own RL-based controllers and then directly control the joints of the robot, which was a big ask from a lot of researchers. And so, there's a bunch of different ways. But, predominantly, we're focused on finding ways to deliver robots out to not just people who are robot experts but to people who can just use them to make their jobs easier. [0:21:17] SF: Yeah. With that API, can I essentially programmatically build my own extensions and new behaviors that could be executed by the robot? Kind of like a much more sophisticated version of LEGO Mindstorms or something like that. [0:21:31] MM: Yep. Absolutely. And it operates at a bunch of different levels. The API runs over Google's gRPC, which is a remote procedure call framework that communicates over HTTP/2 with TLS, which is great for being compatible with enterprise environments. And then you're able to command a whole variety of different services. We broke the sort of surface area of the robot's functional API into a bunch of really microservice-style services that you can talk to, which allow you to send either high-level or low-level commands. If your robot ships with an arm, there's an API that allows you to control how the arm works. You can drive the robot. The tablet is all implemented against the API. You can tell the robot to go to some location with a touch-to-go command. Turn on and off all the features.
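[EDITOR'S NOTE] For a flavor of what programming against this gRPC API looks like, here is a minimal sketch using Boston Dynamics' publicly available Spot Python SDK (the bosdyn-client package). The hostname and credentials are placeholders, and exact call signatures can vary between SDK versions, so treat this as illustrative rather than copy-paste-ready.

import time
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import (RobotCommandBuilder,
                                         RobotCommandClient, blocking_stand)

# Each robot capability is exposed as its own authenticated gRPC service.
sdk = bosdyn.client.create_standard_sdk('HelloSpotClient')
robot = sdk.create_robot('192.168.80.3')  # placeholder robot IP
robot.authenticate('user', 'password')    # placeholder credentials
robot.time_sync.wait_for_sync()           # commands carry synchronized timestamps

lease_client = robot.ensure_client(LeaseClient.default_service_name)
command_client = robot.ensure_client(RobotCommandClient.default_service_name)

# A lease grants this client exclusive control while it commands the robot.
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    blocking_stand(command_client, timeout_sec=10)
    # High-level mobility command: walk forward at 0.5 m/s for one second.
    cmd = RobotCommandBuilder.synchro_velocity_command(v_x=0.5, v_y=0.0, v_rot=0.0)
    command_client.robot_command(cmd, end_time_secs=time.time() + 1.0)

The same pattern, ensuring a client for a named service and sending it a command message, applies to the arm, the cameras, and the other microservice-style endpoints Malchano describes. [END NOTE]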
It's really very full-featured and does a nice job of putting the complexity of commanding a robot behind a really usable network interface. [0:22:25] SF: Yeah. That's really cool. And then what is the actual development process for building one of these robots? I'm sure they're years in the making. But where does all this stuff kind of begin? [0:22:36] MM: Yeah. That's a great question. Oftentimes, it begins with the previous robots. We're hugely iterative. We're definitely a build it, break it, fix it company. Our CEO says that to us all the time. And so, our machines are constantly evolving. That's one thing to keep in mind. It's not like we throw it out and start over again. We just take what we have and figure out how to keep moving it forward, both locally and then in terms of project-to-project. We often try to reuse as much software as we can between iterations so that you can get started as soon as you can. You might go ahead and get started, then decide you need to change a bunch of that software. But that means that you're not spending a lot of dead time just writing things from scratch when you start a new robot. The other advantage of starting from a previous generation of robots is that you have a lot of information that can go into your design process. Spot is a great example. I essentially led the software product effort on Spot to turn it from a research prototype into the shipping product robot. And in order to do that, we started with just lab Spot. This thing that was only designed to work in a lab. It was totally hacked together in order to demo really cool behaviors. And we said, "Oh, there are all these things that are important to building a product." And we just began to incrementally push those into the system and transform the entire system into the thing that we ship, which has security, and the API, and the ability for a person to configure it, and all these different things that weren't part of the original system. But, man, it was great to start with an actual robot and be able to validate every change as you went. When we decide to design a new piece of hardware, we often start with things like simulation. Where we go and build a physics simulation of it and start asking how we would move it around. We work with canonical behavior requirements. People will go and do a study of how the robot should move. Very similar to product definition in other places in software. How should it move? What are the important factors here? And we'll start doing a bunch of hardware analysis. Like, "Hey, can we make this hardware work the way we expect? Can we test these sensors out on a cart?" All that kind of prototyping. All this involves a lot of software. That's one of the really fun things about robotics: even when you go work with a bunch of hardware, you end up having to work with a bunch of software. And so, as a software engineer, you get exposed to this world of electrons and atoms. All these machines and mechanisms. And you learn a whole bunch of stuff as part of that that's just really, really fulfilling, I think, to see a world outside of just the software world that all of us have been a part of. We'll start with that. And then you just do a huge amount of work. Lots of iteration. Lots of prototypes. You'll build "looks-like, works-like" robots. You'll be bringing things up all the time and making them work, and then working down the long tail of issues until the system really works as reliably as you want it to.
[0:25:15] SF: This sort of culture of build it, break it, fix it, I think it makes a ton of sense. If you're going to figure out how to have a robot do some sort of new behavior, realistically, those robots probably break a lot on the way to a place where they're actually able to execute that behavior reliably. You have to be able to fix it, otherwise your learning cycles would be so slow. But how much of the hardware is being built in-house versus built by somebody else? Or is it just, even if it's built by someone else, the assumption is that you should be able to fix some of it in-house so that you can have essentially these fast learning cycles? [0:25:50] MM: A lot of our hardware is built in-house. Certainly, a lot of our prototype hardware gets built here. We have a machine shop of expert machinists. Sort of F1-grade machinists who are able to build all sorts of really cool parts. And we've moved all of our manufacturing into Boston Dynamics. We have a factory that produces Spot and Stretch robots right now, which means that they come off a line in a box and we're able to get the same version that would ship to a customer. And once you hit that point, that's just great. Because now you can just get another robot if you need one. And we do all the sort of hardware maintenance in-house. It's important to do a lot of that kind of thing. Because, similar across hardware and software, the entire concept behind a build it, break it, fix it kind of cycle is to do root cause corrective action. You want to take anything that goes wrong and ask what was the root cause of that thing. Really drive it back to whatever is the thing that caused it to happen. And then you want to take some positive action to fix that root cause. And so, that's just consistent across our entire organization, from software to hardware. Everyone's asking, "Oh, when something has gone wrong, great. Where do we file the ticket? Which team needs to go and figure out how to fix it? How do we know that the fixes have all gone in?" And then we, similar to how lots of software works, structure our software release process around that. We structure our hardware maintenance processes around that. And we use all that to constantly drive the robots to higher and higher mean time between failures and mean time between interventions, which are the kinds of metrics that we're tracking all the time to make sure that the products are always improving. [0:27:19] SF: And in the project where you took Spot from essentially being an R&D product to actually being a product that you can sell to somebody who's not part of Boston Dynamics or part of some sort of government agency, what were some of the big changes and sort of technical challenges with making that actually a product? [0:27:39] MM: Yeah. That's a great question. We had to do everything from building the manufacturing flows for how you create components and how you'd assemble those components in a factory, to all the UX that people who work on manufacturing need. We had to take the robot from essentially being a bare Unix-style system to something that had security features. It has an encrypted disk and locks out the ability to access it. The API is authenticated. We had to add the API. People went off and built the tablet UX that didn't exist ahead of time. There were tons of internal interfaces that had to be either removed or pivoted to be shippable later on. There was a long tail of making sure the system was really reliable.
You'd have these early robots. And the early robots would have some issue. And if it happens once every two weeks, a lab person won't care. They'll just reboot the machine. But that's totally unacceptable for a product. And so, all of those get run down and solutions get made for them. There's lots of things that are on the edge between software and hardware. People constantly are trying to understand, why is the hardware acting the way it does? How do we make that work? We had to go write new motor controllers for it. One of the interesting things was Spot transformed from being a hydraulic robot into an electric one. As part of that, lots of work was done to build sort of the electric motor version of the system. There's a lot of sort of fundamental science-type work there that was interesting. [0:29:11] SF: Why was that switch made from hydraulic to electrical? [0:29:14] MM: There's lots of reasons. Electrical things have improved. Since the early days of Boston Dynamics, we did lots of hydraulics work. Hydraulics are great for power density. They're also great because you can use gasoline as a fuel, which gives you really high energy density. You can put a limited amount of weight into the robot, and the robot can run for many, many hours. We saw this in our LS3 project, where we had a robot that was able to go, I want to say, 20 kilometers on a single sort of tank of gas. But no one wants a gasoline-powered robot in their houses or their factories. And so, it really wasn't the right kind of robot for indoor applications. And so, we really started making a transformation around 2014 to electrical robots. And, in fact, our last hydraulic robot was the Atlas series of robots. And that has now turned into the E Atlas robot that you've seen in videos, which is all electric. And is even more capable than the hydraulic one, which I think is just an amazing testament to how good you can make electrical robots these days. [0:30:12] SF: In terms of going from a robot that maybe is used for doing a demo, or even used in a lab situation where you can reboot it when some sort of problem happens or people are there to service it, to suddenly having a robot that is potentially in a manufacturing facility, a factory running 24/7. How do you deal with things like dust accumulation or other things that could get into the robot's components and essentially impact its reliability and actually cause it to malfunction? [0:30:44] MM: Yeah. I mean, mechanical engineers spend a lot of time on that sort of thing. Sealing is a big thing. As is figuring out ways to - it's always challenging when something moves. Things break around things that move. I think people who work on their cars know that. And things like that. And so, one of the interesting things is everywhere you have wires that cross moving points, or connectors, all of those are things that people are constantly figuring out how to drive out of the design in order to make the robot just as reliable as possible. And so, it's very similar to the software process. People are just constantly asking, "How do I make the equivalent of git commits to my manufactured design to say, 'Oh, let's fix this'?" They call them ECOs, engineering change orders, in the hardware world. But they essentially go make commits to the process and go, "We're going to go and swap this component out." And all these things are done just to sort of drive up that overall reliability of the system.
[0:31:37] SF: What kind of safety controls have to be built into the robot to make sure that it doesn't accidentally run into somebody, or it's performing some sort of maneuver that's applying force to some object and that ends up applying force to a person, or some other thing that could lead to hurting someone? [0:31:53] MM: We have an entire safety department inside the org right now. They work on questions about hazard and risk analysis, which is a common practice for how you take a consumer product and make that product safe for people. And they do those analyses on our different product lines as we ship them. And so, that's all a key piece of how we build what's called an operating domain. It's the idea that you combine both the way the machine itself works along with ways that you require it be used in order to ensure that people don't get injured. And so, that's the thing we do with all of our products. On the Stretch product, for example, it's a very large robot. It has a variety of safety features that ensure that when someone's driving it, they are right there with the robot. There's a switch on the pendant they use that ensures that they're paying attention to the fact that they're driving the robot. And the robot can only move at a very limited speed, and then it has to go into a limited area where it's able to then do its box unload at full speed. But the rule is no human can be in that space. Spot, because it's a smaller robot, doesn't have those same concerns. But there are lots of ways that we work together with customers to make sure that the robot is safe. Long-term, there are open problems in safety. And it's one of the places that we are continuing to look. If we want to really build robots one day that work together with humans in close proximity, then there's lots of technologies that are relevant to go and explore in that space. And we're starting to look at things from functional safety up to AI capabilities that'll make sure the robots are able to work with human beings. [0:33:30] SF: In terms of AI, we mentioned reinforcement learning for learning new behaviors. But some other parts of the robot's behaviors are probably down to some sort of machine learning algorithm executing. How do you balance between the need to react in real-time versus essentially the raw number crunching and computation cycles that it's going to take to execute some of these models? [0:33:55] MM: The easy answer is lots of optimization. I guess, structurally, one of the things we do is we break up the decision-making into different timescales. This is very common for real-time software. You might have a very fast control loop that is executing some very basic algorithm to just run basic control on a set of joints. And then you'll build slower loops around that that are able to run at different speeds. And so, it's very common for our sort of low-level controllers to run at hundreds to thousands of Hertz. But then from there, to have something like an image processing algorithm running at tens of Hertz. And then as you go into sort of more modern ML models, many of those can't execute at more than maybe half a Hertz, or two Hertz, or something. And so, you kind of take what you need to be able to do and you break it up into pieces. And you have those pieces all complete over the schedules and timelines that they can. And, of course, the thing you're always trying to do is figure out, "Well, is there a way I could do this faster?
Could I speed up this loop? Could I make this decision a little bit faster?" And you get lots of advantages by making decisions faster in robotic systems. It improves your force control. It makes the robot respond to things more quickly. But on the other hand, that means you just have to have much more efficient sort of code for that. That's the general balance of how things work. And we'll even access things off-robot. And that can take longer. And so, it's really a systems problem. You go and design the robot to achieve the set of goals that you need there. [0:35:29] SF: How many sensors are on one of these robots that you're taking as input in any given moment? [0:35:36] MM: There are various kinds of sensing on the robot. For example, there's what we might call low-level sensing. That would be for every joint on the robot. We care about the position of that joint. We care about the force that that joint's exerting. Oftentimes, we may measure those things in multiple ways. And then building out from there, you have physical sensors around the orientation of the body, like an IMU, an inertial measurement unit. And then you'll often have a set of perception sensors built around that. For the perception sensors on Spot, there are five pairs of stereo cameras that look out into the environment around the robot and allow it to perceive the ground, obstacles, things around the robot. On Stretch, there's a mast. And that mast contains a set of cameras and other sorts of distance-measuring equipment that's able to image things around the robot. And so, it varies from robot to robot. But you'll have a set of sensors for just how the robot moves. Proprioceptive, I guess, would be another word for it. And then a set of sensors that are often perception-based. And on many of our robots, Spot for example, you can put payloads on the back. You can go and add a LiDAR, use a Velodyne, which is a spinning LiDAR common in the AV industry, to understand the space around the robot. Or you can get one of the inspection payloads that we sell and put really high-resolution cameras on the system so that you can get really good images of faraway things and read them as the robot walks by. And so, there's a bunch of different configuration in that space. And that gives some sense for the kinds of sensing these systems have. [0:37:05] SF: What programming language are people actually writing the software in that's actually executing on the robots? [0:37:11] MM: Yeah. That's a great question as well. And it's a lot of languages. One of the things that happened as we moved into product is we became very polyglot. Classically, lots of robotics gets done in C++. The vast majority of people who do sort of behavioral-type work on our robots are working in C++. There's a growing number that also work in Python or some Python-C++ hybrid. We've also used Python and, historically, MATLAB for doing offline analysis. You need to go compute some downstream numbers and make some graphs. That kind of thing. Over time, TypeScript and JavaScript have grown, as you end up having to put websites and web-type experiences in lots of places. That includes both on robot, and in the cloud, and our other appliance-type technologies. And then, additionally, we're starting to see an uptick of languages like Go and Rust being used in small areas where people are like, "Hey, this is the right solution for these problems." And then all of that gets pulled together in the set of information that we release onto the robot as a software release.
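[EDITOR'S NOTE] To make concrete the multi-rate structure Malchano described a moment ago, kilohertz joint control, tens-of-Hertz perception, and roughly one-Hertz ML models, here is a schematic Python sketch. All names and rates are illustrative, not Boston Dynamics code; a production stack would implement the fast loops in C++ under a real-time scheduler, since Python threads can't reliably hold a 1 kHz deadline. The key idea is that the fast loop never blocks on the slow ones; it just reads their most recently published output.

import threading
import time

latest = {"obstacle_map": None, "ml_label": None}  # most recent slow-loop outputs
lock = threading.Lock()

def run_at(hz, fn):
    """Call fn repeatedly at roughly the requested rate."""
    period = 1.0 / hz
    def loop():
        while True:
            start = time.monotonic()
            fn()
            time.sleep(max(0.0, period - (time.monotonic() - start)))
    threading.Thread(target=loop, daemon=True).start()

def control_step():
    # ~1000 Hz in a real system: joint-level control. Takes a snapshot of
    # slower results without ever waiting for them to update.
    with lock:
        snapshot = dict(latest)
    _ = snapshot  # compute torques from joint sensors plus this context

def perception_step():
    # Tens of Hz: process camera frames into an obstacle map.
    with lock:
        latest["obstacle_map"] = "updated"

def ml_step():
    # ~0.5-2 Hz: a heavyweight learned model.
    with lock:
        latest["ml_label"] = "updated"

run_at(1000, control_step)
run_at(15, perception_step)
run_at(1, ml_step)
time.sleep(3)  # let the loops run briefly for this demo

[END NOTE]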
And so, you're really pulling all these different languages together into one thing. [0:38:19] SF: How big is the software engineering department at Boston Dynamics? And how is it sort of organized? [0:38:23] MM: Yeah. Boston Dynamics has around 250 software engineers, give or take, depending on how you want to categorize them. It's organized into product thrust teams. We'll have a software engineering team that is focused on the Spot project and the Stretch project. We also have a central software team, which I run, which is a team that works between those teams. Doing things from engineering enablement up to R&D-style activities. Future technology development. Basically, just anything we can do to make the company move faster. Our software engineers vary in background, from people who are very robotics-focused and work on behavior or controls, over to people who are very customer-focused and work on things like building software solutions for customers and doing field service for customers. And then a wide variety of experts in all sorts of different domains. People who build websites. People who write firmware. People who write safety-sensitive software. All of those are part of the group of people who are building software at Boston Dynamics. [0:39:27] SF: In terms of third-party software that you're using to make some of this stuff happen, are there any open source technologies or software frameworks that play a key role in this? [0:39:36] MM: Yeah. Absolutely. We use a lot of open source software. I would guess we probably have hundreds of open source packages that are a part of the software we build. We use a lot of traditional technologies that people who work in computing probably know about. For example, protobuf from Google. gRPC from Google. We use Bazel as a build system. We got some of that because we used to be owned by Google. And so, that was probably a flavor of the things we do. We also use a lot of open source robotics packages like OpenCV; the Point Cloud Library, PCL; and Eigen, the linear algebra library. We pull a lot of that kind of technology in as well. And that's really useful for people being able to rely on really high-quality software and tools to build things. We tend to be shy about committing to frameworks, in our experience. One of the things we don't do, for example, is use a lot of ROS, which is the Robot Operating System. A common framework. And that's in part because once you sort of commit to a framework, you often find that it both does a lot for you but also controls a lot about what gets done. And so, we're just always cautious about how much you commit to a thing like that. And then how much you can align your roadmap developing software to where that framework chooses to go. It won't necessarily go where you want it to as a product company. That's something that we pay some attention to. We also built a lot of software, because we've been doing this for over 20 years now, building robot software. And so, it's one of those things where we built a lot of technology. And so, we tend to use our own frameworks and our own systems as well. [0:41:13] SF: I think being at it for 20 years, just like Google had to in the early days of the web, you have to invent a lot of stuff. Because no one's solved some of these problems before. [0:41:21] MM: One of the joys has been actually seeing software that I might have written 15 years ago. I've been at the company for 21 years.
Seeing software that I might have written a long time ago still in use today. It's an amazing feeling to be like, "Wow. I wrote that on some field test out in the Mojave Desert 15 years ago. And it's still in production on a robot today." Some utility or something like that. And so, that's been just a very interesting thing about the sort of long duration of time that we've worked on these projects. [0:41:50] SF: How much do you have to worry about backwards compatibility and stuff like that as you're building new versions of the software? Is that a big factor? Do the robots that are already out in the wild get remote updates? [0:42:03] MM: We design all of our robots to accept remote updates. We have a system we invested in that lets us do those updates with high reliability and completely control the set of software that's on the robot. It's also a key part of our security story. There is a backwards compatibility question with respect to the robot hardware. That always has to be maintained. We don't want to ship a release that won't work on some generation of the robots. That's generally not been a problem, since the robots evolve over time, but we've been able to factor those evolutions into the same software package. That's a concern. For backwards compatibility within our software environment, we build everything from tip of master, which is a very common sort of model. Largely in an integrated - sort of mostly in a single repo, which means that we can do compatibility of our own software with itself relatively straightforwardly. And so, mostly, the concern then has to be around, "Oh, did we release an API in the past? And is that API still valid?" We have policies and procedures, similar to probably everyone's, around taking function calls in that API. Deprecating them. Announcing it to the customer. Et cetera. And so, we manage it everywhere we need to, and wherever we can, we simplify it by just making it not a concern. Because that kind of stuff rapidly becomes incredibly complex if you let the matrix of compatibility, as we like to call it, turn into a big matrix. All of a sudden, you can spend a lot of your engineering time just working on validating that matrix. [0:43:30] SF: Yeah. Yeah. Yeah. Absolutely. And then in terms of the simulation software, where presumably a lot of the testing is happening, how well does that work versus the real world? If something works in the simulation, what are the chances that it actually works on the physical robot? [0:43:43] MM: Yeah. It's interesting. That's actually evolved a lot. In the early days, we used to think of the main purpose of simulation as showing you that, if it didn't work in simulation, it wasn't going to work on the robot. You shouldn't even run it. And simulation was good for that purpose, which is a really useful purpose. To save you the time of running something that doesn't work. However, now we're at the point where, for many of our mobility behaviors, we can actually run the behavior initially in simulation. And then the robot can just run that behavior and it will just work out great. And that's especially true on our E Atlas project. That team has gotten tremendously good sim-to-real, as they call it in the industry, results around mobility. The area of manipulation, which has sort of a different set of things happening in it, is I think still an area where sim-to-real can be challenging. And so, I think that continued work is happening there for us, and I'm sure for others as well.
Because there's just lots of things that can be difficult to model. And a simulator is only going to tell you about what you've chosen to model in that simulator. It doesn't magically disclose new physics to you, typically. Typically, you have to be the person who's like, "Well, I understand how to model this contact interaction or model this particular object." That can be kind of its own thing as well. But great results so far in mobility. [0:45:02] SF: All that simulation software, is that all built and owned in-house? [0:45:07] MM: We use third-party packages. We use Gazebo and MuJoCo, both in-house. And we have had our own custom simulators in the past and probably will again in the future. And so, we go and look for whatever tool we can find that will move us forward. It's not a key product for us. It's a product enabler. And so, we're happy to use whatever capabilities will move our products forward and be as efficient as we can around it. [0:45:32] SF: Earlier on, you mentioned how the Holy Grail was kind of this being able to build a general-purpose robot. How far away do you think we are from that? And what is sort of the main barrier to being able to achieve it? [0:45:44] MM: Humans can do a whole lot of things. And so, I think we're actually pretty close to the first humanoid robots being useful in real environments. And I think Boston Dynamics has a tremendous background to go and achieve that goal. We've got tons of practical robotics experience with our Spot robot. And so, I think that we're very close to essentially demonstrating humanoid robots doing a really useful job for people in some environment. I'm hoping that in our lifetime, in my lifetime, I'll see robots moving among us as sort of collaborators in the real world broadly. One of the greatest things about my time at this company has been when I started to see Spot robots out in the real world just by themselves. Someone else operating it. It's not the company running it. It's someone using it for a purpose or demoing it somewhere. And that's just been such a great feeling in robotics that I can't wait to essentially see that same feeling happen with humanoids. [0:46:40] SF: You've been there 21 years, I believe, you said. I guess, what's kept you engaged and excited through over two decades of work for the same company? [0:46:49] MM: Yeah. Well, I mean, robotics is super interesting. That's one way to think about it. There's just so much to learn and so much to do. And the field has changed a ton in that time. What we've done as a company has changed a ton in that time. In the earliest days, late nights, I would be soldering wires or disassembling things. I'm a software engineer. That's not my natural habitat. Just trying to make these crazy machines work. And now we're shipping products and essentially making robots do things that no one could have imagined. People thought it was crazy to go build a legged robot when we built BigDog. And now we're building robots that do backflips and work in all these industries, which is a tremendous feeling. And just in the process, there's just been so many things to learn about. Software, in my mind, is an applied science. It's a thing where you can spend a lot of time learning about software itself. But some of the richest stuff you can do with software is go learn about someone else's problems and come away with this sort of knowledge and expertise about a problem that's parallel to your own, but with the precision that you learn by being a software engineer.
You know how to talk very precisely about problems, and cases, and all that kind of thing. And it's just so rewarding to go and learn that in a growing and interesting field. I think that's one of the great things about the job for software engineers: you get to learn all these applied areas. And you're like, "Yeah, I write software. But I'm secretly an expert in motor control dynamics. Or I'm an expert in camera processing." And it's this totally separate area that you get to learn a bunch of physics for and all that. And robotics is just full of that, because there are so many different angles. Also, I've just been very lucky to have joined Boston Dynamics when I did. And we've just been doing really interesting work. And it's just really rewarding to see the impact it has on kids. They have our robots in museums. And we bring kids in to see the thing. My brother's children love it. You're like, "Wow. I'm making the world a better place by doing this." [0:48:48] SF: Yeah, there's definitely - and I talked a little bit about this at the beginning - I think for a lot of people who go into engineering, their sort of first gateway to that is something with robotics. There's a certain fascination because it's an actual thing that you can physically touch, which is so different than maybe a lot of the things that we deal with in sort of digital software, which you build an appreciation for over time. But as a small child, maybe there's less appreciation there for, I don't know, your business software than for something like a physical robot that's walking around. Awesome, Matt. Well, as we start to wrap up, is there anything else you'd like to share? [0:49:20] MM: I don't think I've got anything else, other than that it was a great opportunity to come and talk. And I hope the topic is interesting to your listeners. And thanks so much for taking the time to have me on the show. [0:49:31] SF: Thanks for coming on. I feel like we kind of just scratched the surface here. We could probably spend another episode or two discussing various topics. Hopefully, you'll come back in the future. [0:49:40] MM: Great. I'd love to. [0:49:41] SF: All right. Thanks. Cheers. [END]