Product Design using AR and XR with Jenna Fizel

Augmented reality and extended reality, or AR and XR, are in the early stages of being integrated into design workflows. But they are likely to transform design in the years to come. Jenna Fizel has a background in architecture and desktop software engineering. She sat down with Tyson Kunovsky to talk about her work at the design firm IDEO and how they’re using AR and XR for industrial design prototyping.

TK: Hi, Jenna. And welcome to Software Engineering Daily.

JF: Thanks, Tyson. Happy to be here.

TK: Jenna, before we start, I’d love to learn a little bit more about you and your software engineering background. How did you become a software engineer? And what did your academic journey look like?

JF: Sure. I came to software actually through architecture originally. My academic background is in computational geometry. And so, I, for a while, was writing code and developing algorithms to create things like building facades and realized that I was a lot more interested in the systems I was building than the fact that they turned into buildings in the end.

I shifted careers a little bit, and for about a decade I was a desktop C++ developer, creating custom software for lots of different kinds of institutions, from companies to libraries, to help them explain complex data to various audiences through massive installations that ran across multiple servers, connected to many different kinds of backends, and then displayed it all in novel interfaces located in physical places. That's kind of where my architecture background remained. I always like to build things for specific human contexts.

TK: Well, that makes sense, because now you’re a senior director of emerging technology for a really interesting and unique company called IDEO. And, Jenna, for those that aren’t familiar with IDEO as an organization, what kinds of work does IDEO do? And who do you do it for?

JF: Yeah. IDEO is a design consultancy, which means that we help our clients with problems that can be framed as design questions. Now, that’s vague, because we do a lot of different things. We’ll work on anything from the physical design of something like a guitar capo to government policy. And we do this in a really cross-disciplinary way. We have dedicated teams for each client project where we combine different skill sets and different backgrounds. You might see somebody like myself with a software engineering background. Somebody with a background in organizational change or somebody with a business design background alongside of people who are experts in graphic design or interaction design all working together on a single problem over a concentrated period of time.

TK: That’s quite the breadth of solutions that you offer. Given that we're a software podcast, we don’t usually have a lot of folks on talking about designing guitar capos and government policy. But I’m guessing that taking this design-centric worldview and applying it to the problems that you are solving is a unique and powerful way to create novel solutions that you might not be able to arrive at otherwise.

Given that IDEO is so cross-disciplinary, how do you see the role of actual software engineering in the design work that you do?

JF: Yeah. I think there’s sort of two approaches here. One is simply our interdisciplinarity. We always value the expertise of our team members and bringing sort of the lens of the kind of craft that they’re bringing into the situation in understanding the design problem. For software, that’s a lot about thinking systematically or thinking about real-world implications of design decisions. Or actually building software. I still do that. And my colleagues do that as well. But we do it in a slightly different way that I think we’ll probably dig into.

And then I think there is the expertise level. Because we work on a lot of digital products, or on systems or questions that are adjacent to digital product development. And so, by bringing the experience lens of having worked myself on both desktop and web-based products, alongside other people with a variety of other kinds of experience, we can bring the needs of the client context more vitally into the room when the design solution is being worked through.

TK: It almost sounds like I don’t know quite what to call this. But your approach seems more as if it’s applied software engineering for the real-world, which is interesting. Because when you think about why companies typically build or hire others to build software, it’s usually for some very specific and often, for lack of a better term, boring business purpose. Given the work that IDEO does is so diverse, why does IDEO build software?

JF: Yeah. I would say that there’s sort of three reasons why we build software. One is to create a prototype. To sort of answer the question, “How could this thing work?” And that is especially relevant these days in emerging technology where if we’re designing like an augmented reality experience, it’s not really enough to just talk about it or even draw it. You kind of have to make it or make at least some limited aspect of it.

And, often, we’ll come to clients who have large engineering organizations. And maybe they have some prototypers, but they're often working inside of relatively complex systems. And we take the approach that you should only build what you need to answer the question you’re trying to ask with your prototype.

Step one, or approximately step one, is to define that question. And then step two is to try to build something and see how hard that is. And then maybe go revise step one again until you get to something that leads you to a better question. And so, that’s what a prototype is. It’s a machine for asking better questions. That’s one thing.

Another one is experiences. One of our co-founders, Bill Moggridge, has this quote I love, that the only way to experience the experience is to experience it, which is a little bit silly. But I think it’s also really true. Sometimes we need ourselves and our clients to sort of feel a future scenario or even feel the possibility in an existing design. And, often, the only way to do that is to actually make the thing to some degree.

Again, we always try to be lean with how much effort we put into any of our work, including engineering work. We think through how much of this experience, and in what ways, needs to be real to convey the emotion, that connection to others, or the impact of the design choices we're making. Yeah, that's the second reason: for experiences.

And then I think, thirdly, to support strategic choices. Sometimes, or really often, even when we’re doing product design, it’s in the context of larger decisions a company is making. What markets to enter? What new teams to stand up? And it can be really powerful to build a little bit of the future of what those teams might accomplish in order to help feel the consequences of different strategic choices.

TK: It sounds like software then is just yet another tool in your belt to help facilitate understanding. And I absolutely love that quote. The only way to experience the experience is to experience it. I’m definitely going to have to steal that.

Now that we know a little bit more about your overall approach to utilizing software, let’s change direction and talk about something that I know you have deep experience with, which is extended reality, or XR as it’s commonly referred to, for those that are unfamiliar. When it comes to emerging technologies like XR, how are those technologies used by the engineers and designers at IDEO today?

JF: Yeah. It’s a good question. First of all, IDEO’ers are curious people. And they’re always sort of looking out for the next new thing. And so, a lot of us bring our interests, especially in emerging technology, naturally into our design work. We also do have some slightly more formal structures that encourage this.

Part of my role actually is running our internal learning group on emerging technology, which I started during the pandemic to help with collaboration. It was kind of a wonderful and tragic confluence of factors. We'd had VR headsets and Microsoft HoloLenses in our studios, but only one or two of them that project teams could take out and use.

But we were all in our own homes, and apartments, and things when the Meta Quest 2 came out at a low enough price point that we could conceivably send out hardware to more people who wanted them. And that created an opportunity to make a sort of obligation between the people who raised their hands for this hardware and a learning community.

I’ve sort of engaged with learning communities before. And it is very important to make sure that everyone is getting something out of learning things. It was an amazing opportunity to have this sort of like new capability and hardware to send to people in exchange for their commitment to use it and share back their learnings.

And that has really blossomed into an active membership of about 80 people, where we have folks trying out different virtual reality and augmented reality tools. And, of course, new AI tools. And then sharing back their experiences, which we do every Thursday. And we try to keep those sharebacks relevant to concerns within client work, sometimes directly supporting a specific piece of work, but also often driven by what is happening inside of the realm of feasibility itself. I think this is a little bit of an underappreciated source of inspiration: what is possible? What can we build? And what can we build relatively easily? That is one of the purposes of this group.

TK: Could you give a couple of examples of things that you’ve learned about XR during these sharebacks? And then perhaps use that to speak about how these learnings help you and your team better solve problems for your customers.

JF: Yeah. I think a really powerful example is what I’ve seen happen inside of the industrial design community. They’ve really found a few different high-value approaches to using XR in particular. Although, actually today, this afternoon, one of my industrial design colleagues will be sharing back about using AI tools to go from sketches to renderings. We have a variety of interests.

But I think the first thing I saw for that community was reality capture. Using mostly their phones, but also occasionally dedicated hardware, to take physical mockups or even physical objects in their environment, scan them, and then share them. This is maybe a little bit less relevant now that a lot of them share a physical location. But, certainly, when everyone was separated, this was incredibly useful. And that was really used inside the design process to get to results more quickly and with higher fidelity.

It has since morphed into a storytelling technique in a lot of ways. Whereas before, teams would always present renderings at the end of a design project, they’re now often also sharing augmented reality models with each other and with the clients themselves, which I think really lets you see the design in context in a way that previously would have required making a high-fidelity physical model. And so, that’s really exciting.

This is also a little bit to do with industrial design. But we do a lot of what’s called design research, which is going out into the contexts of the people who will eventually be using the things that we design and asking them questions or engaging them in activities that help us understand their habits, their needs, and their desires relevant to whatever it is we’re designing.

And we’ve been able to engage with people, especially people who have constraints on where they can physically be, through augmented reality in particular. Sometimes we work on medical devices. And sometimes those medical devices are for people who have immune disorders or are ill in other ways, and who are perhaps using very complicated physical devices to help with their conditions. Producing a bunch of mockups of those physical devices, sterilizing them, and sending them out is something that we sometimes engage in, but it's very costly.

And with a fiducial that you can print out on your home printer and tape to a book, you can get an augmented reality experience where you can understand things like where a button might sit, or what the indications on a display might be to help you walk through a treatment protocol, or what have you. Or even to just understand: does this item fit in with my life? Does it feel intimidating? Does it feel friendly? These are things that you can get at without sending a physical object, instead using a virtual overlay.
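To make that fiducial workflow concrete with a hedged sketch: the core of a marker overlay is taking the tracked pose of the printed marker each frame and placing virtual elements, a button, a display readout, relative to it. The helper below is illustrative, not taken from any particular AR library; it composes a marker pose (a position plus a unit quaternion, the representation most AR frameworks use) with an offset expressed in marker space.

```javascript
// Hypothetical helper: place a virtual element relative to a tracked marker.
// `pose` is the marker's world pose: { position: [x,y,z], orientation: {x,y,z,w} }.
// `offset` is where the element sits relative to the marker, in meters.
function applyPose(pose, offset) {
  const { x, y, z, w } = pose.orientation; // unit quaternion
  const qv = [x, y, z];
  const cross = (a, b) => [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
  // Rotate the offset by the quaternion: v' = v + w*t + qv×t, where t = 2(qv×v)
  const t = cross(qv, offset).map((c) => 2 * c);
  const u = cross(qv, t);
  return offset.map((c, i) => c + w * t[i] + u[i] + pose.position[i]);
}

// A button 5 cm above an unrotated marker sitting at (0.1, 0, 0.4):
console.log(
  applyPose({ position: [0.1, 0, 0.4], orientation: { x: 0, y: 0, z: 0, w: 1 } }, [0, 0.05, 0])
); // → [0.1, 0.05, 0.4]
```

In a real project the library (AR.js, ARKit, and so on) does this composition for you through its scene graph; writing it out mostly shows how little machinery a fiducial prototype actually needs.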

TK: Those are some interesting applications of XR. And it sounds like you’re working really closely with both technical and non-technical stakeholders to implement them. I want to talk more deeply about XR in a moment. But since we’re on the topic, I’m curious to get your perspective on collaboration between technical and non-technical stakeholders. Because I know it’s an issue for a lot of companies. How do you get non-technical folks to engage and provide helpful input, especially when maybe they don’t have the right technical expertise necessary to contribute effectively?

JF: Yeah. That’s a great question and an eternal struggle. But an exciting one. I really think that there’s stereotypically a divide between the people who can make things for real and then everybody else. It’s the consumer versus the creator.

And for a lot of these technologies, that line is getting blurrier and blurrier. It actually is relatively possible, with a little bit of support and some friendly guardrails, to help people without a lot of background make the leap into being productive creators.

But everybody has to be a little bit bought-in. And that is part of why having the learning group is so powerful. It contains many more folks than just the software, industrial design, and interaction design people. We have people who work on business development in it. We have, again, design researchers in it.

And I try to balance the programming so that we have inspiration inside of the expertise of the various folks who are engaged in the community. Last year, for example, I had an artist called Grace Boyle do a short series of workshops with us. And she really cares about multi-sensory design and designing for emotions and dreams, ideas and concepts that might appeal to people with backgrounds other than engineering. But she then executes that work in a very precise and fairly engineering-heavy way. And so, that was an attempt to get that inspiration in.

And then, through a series of structured activities, we actually help people build and make. And I always try to offer a few different ways of building, from something you can do simply by clicking on things in a browser, to forking a template that I’ve started for those who would like to write some TypeScript.

TK: Your learning groups sound like an effective way of getting non-technical folks involved, because you make it easy for them to become creators themselves. And I know a lot of companies try to do similar things with their lunch and learns, competency groups, and other activities, but with varying degrees of success.

Before we get back to XR, what advice would you have for folks at organizations that might want to spin up and start running their own learning groups? I guess, specifically, what have you seen work to make folks feel like this isn’t just yet another job responsibility on top of everything else?

JF: I think there have to be a couple preconditions. One, I think people have to be interested. That sounds silly. But I think it’s actually really important. You have to either have the topic be attractive enough. You have to bring in interesting outsiders. Or you have to offer something. And so, I feel really lucky that I was able to offer those VR headsets when we were starting up.

And then, to the second part of your question, there has to be value there. This learning group’s been going for more than two years now. And that’s because what we do has real tangible impact for our clients. And while I value the opportunity to think a little bit more broadly, and frankly with somewhat lower stakes, in some of these sessions, because we’re not necessarily building to a deadline or for a high-stakes presentation, it’s really critical that people are gaining skills that they can then use in those high-stakes, or at least higher-stakes, situations.

And I try to do that both through connecting with leaders across IDEO to understand what they are seeing in the market and what we are seeing within our project teams, and then inviting the closed loop of a shareback from a project team who’s used something learned in the learning group, to demonstrate to everybody that, actually, you can learn valuable stuff here. I think being attractive and explicitly demonstrating value are both really important.

And I think there is a third way to do both of those things, which is to help people become just a tiny bit famous. My colleague, Danny Durant, started first a Medium blog, which has now graduated to a part of IDEO’s website called Edges, where we help people publish this interesting work that they’re doing at the edges of what IDEO does. That is both client work and independent work.

And a great source for that has been the learning group. If you share something sufficiently interesting in that group, I will bother you to turn it into an article that we’ll then publish. Or I’ll bother you to submit to conferences. Juliette Laroche, one of the participants, created a shareback about envisioning new kinds of materials using image generators. And it was a really compelling and profound presentation. And off of that, I actually pitched her to a conference organizer. She’s spoken a couple of different times about this topic and is actually getting some of her images included in a textbook that’s going to be published next year. It is possible to get some personal motivation into a learning group as well. But you have to put in that effort to help people find those opportunities until you get a little engine going and they become easier and more natural.

TK: Some great advice there. Thanks for sharing. Okay. Now that we’ve talked about collaboration and learning groups, let’s circle back and talk about the topic of XR. When you think about the future, what gets you most excited about XR?

JF: I think we’ve seen, especially over the last couple of years, XR really used as an explanatory tool, or a tool to create imaginative leaps in our clients. We’ve done large-scale simulations of things like retail stores of the future. We did this in work for Canada Goose that launched just pre-pandemic, which was a rethink of a zero-inventory store with a bunch of immersive experiences. And because we did all that previous work, they actually implemented basically everything that we put into our final design. It was an amazing thing to see the final physical results.

We’ve also done this for transportation. Sort of making simulations of personal vehicles up through things the size of airplanes. And that has been super successful. But what I’m really excited about these days is that more and more XR products are actually coming to market. And now there are all of these other questions around the products that companies are beginning to ask.

And so, we’re able to bring this sort of relatively deep expertise in personal usage across a lot of different kinds of skill sets to thinking about questions of onboarding people, helping them understand what it feels like to have a social interaction inside of an immersive 3D space instead of in some boxes on a screen. And helping clients from startups to the large tech companies, you might imagine, think through these sort of secondary consequence questions of the products that they’re developing.

TK: Can you talk to us a little bit about the actual software development process for XR? Not having actually done any XR Dev myself, how does it work? What languages are you coding in? What frameworks do you use? How does prototyping development work? Give us the rundown.

JF: Sure. There are actually quite a few approaches that I’ve found to be pretty viable here. Again, it depends a little bit on whether I’m encouraging others to build, building myself, or building with the engineering organizations of my clients. I think the typical place you might think to start is game engines. Both Unreal and Unity have good support for XR platforms. And there’s actually an open standard called OpenXR that you can build on, which is supported by Meta devices, the Vive, and all the usual suspects.

That can be really helpful if you’re working with somebody who is already kind of familiar with one of those game engines, as many people are. I would say the biggest barrier there is setup. There are many, many, many YouTube tutorials. And actually a big effort at the beginning of the learning group was to build out a specific tutorial that works on a Mac, because we’re a Mac shop, to get you from Unreal Engine onto your Meta Quest headset. And I think that was worth it. We’ve built a lot of nice things off of that workflow.

If we’re talking about Apple, everything is very well integrated in their ecosystem. If you are already an Apple developer, especially an Apple mobile developer, it’s almost trivially easy to engage with either a Vision Pro or simply an on-phone augmented reality experience. And this is actually the first place that I would point out that you don’t need to have almost any coding expertise to still engage in these kinds of ecosystems.

There’s an app called Reality Composer that works on any iOS device that will let you compose scenes and assign behaviors to different parts of the scenes. Not super complicated ones, but things like tap behaviors or proximity behaviors. And you can easily import assets that have animations already baked into them that you’ve perhaps built in programs like Blender, or even purchased from an online service, or downloaded from Apple’s own pretty robust library. And that lets almost anybody quite easily sketch out what a scene or an interaction might feel like in XR, which can then actually be smoothly passed over to a developer and imported into Xcode. That process is not completely foolproof, but you’re not going to lose all the effort that the original person put in. That’s kind of game engines and native stuff.

There’s also actually a relatively good web platform called WebXR that has good support from libraries like Three.js, or A-Frame, which is actually Three.js under the hood anyway. That makes it simple to create low-complexity environments with complicated behavior that are connected to complex backends, which is often kind of my sweet spot. I tend to build web experiences most often.
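As a small, hedged example of one decision a WebXR entry point has to make: a page typically asks the browser which session modes are available before starting. The helper name below is invented, but "immersive-ar", "immersive-vr", and "inline" are the three session modes the WebXR Device API actually defines.

```javascript
// Hypothetical helper: pick the most capable WebXR session mode a device
// supports, falling back to flat in-page rendering. In a browser you would
// build `supported` by awaiting navigator.xr.isSessionSupported(mode) for
// each mode; it is passed in here so the decision logic stays testable.
function pickSessionMode(supported) {
  const preference = ["immersive-ar", "immersive-vr", "inline"];
  return preference.find((mode) => supported[mode]) ?? null;
}

// A passthrough-capable headset typically reports all three modes:
console.log(pickSessionMode({ "immersive-ar": true, "immersive-vr": true, inline: true })); // "immersive-ar"
// A desktop browser with no headset usually supports only inline rendering:
console.log(pickSessionMode({ inline: true })); // "inline"
```

The same chosen mode string is then handed to `navigator.xr.requestSession(...)`, or, in Three.js, to the `VRButton`/`ARButton` helpers that wrap that call.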

I’ve also gotten a lot of excitement out of combining a lot of large language model or image generator services into XR environments, which is super easy to do in a web app. Because all the infrastructure is already there to take advantage of combining these other services into one thing.

TK: You beat me to it. The very next question that I was going to ask was about AI applications within the context of XR. When you think about the confluence of LLMs, AI, and XR, I bet there are some pretty useful and interesting applications. Can you give us a couple examples of how you’re able to pull in and harmoniously leverage these technologies together?

JF: Yeah. I mean, I think the biggest and most obvious one is voice control. The support so far inside of VR headsets, especially for interaction, is very much game-like. You have controllers that feel a lot like a game controller split in two. You have buttons and joysticks. And that, first of all, just isn’t very intuitive or comfortable for a lot of people. And I think, once you put on a headset, you start to feel the Star Trek Holodeck dream and you want to talk to the computer.

And while I am a little bit skeptical of voice control as a panacea for user interface problems, I do think it’s a really, really interesting thing to add to the mix. And it’s now almost trivially easy to do so, especially on a web platform. Obviously, there are blockbuster things like Whisper that are relatively easy to access. There are also locally run models, Picovoice comes to mind, that can run to good quality even on a mobile device like a VR headset, and get you reasonable transcription that you can then flow into actions within your overall environment.
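As an illustrative sketch of that transcription-to-action flow: once a model like Whisper or Picovoice hands you a transcript string, the remaining work is routing it to something in the scene. The function and command vocabulary below are invented for the example; a real app might hand the transcript to an LLM for intent extraction instead of regexes.

```javascript
// Hypothetical router from a speech transcript to a scene action.
// The action objects would be consumed by whatever updates the 3D scene.
function routeVoiceCommand(transcript) {
  const text = transcript.toLowerCase().trim();
  if (/\b(bigger|larger|scale (it )?up)\b/.test(text)) return { action: "scale", factor: 1.25 };
  if (/\b(smaller|scale (it )?down)\b/.test(text)) return { action: "scale", factor: 0.8 };
  if (/\b(hide|remove|dismiss)\b/.test(text)) return { action: "hide" };
  // Unrecognized speech falls through, so the app can log or ignore it.
  return { action: "none", transcript: text };
}

console.log(routeVoiceCommand("Make that model a bit bigger")); // { action: "scale", factor: 1.25 }
```

The point of prototyping at this level is that the routing table is cheap to rewrite, so a team can test several voice vocabularies in an afternoon.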

And so, I find that to be the no-question case: we should be engaging with this. We should be thinking through how these interactions might happen, and then actually building them. To one of my points earlier, sometimes you need to prototype to answer a question, and I think a big question in UX these days is, what does voice really mean for my app? These days, it’s easy to explore that in any context. But maybe a little bit surprisingly, it’s also easy to do inside of XR apps.

TK: You’ve mentioned a few different kinds of hardware, frameworks, and companies that are helping to move XR forward. What are some of your favorite organizations that are doing interesting things with XR today?

JF: Yeah. Great question. This is not a commercial, mass-consumer company, but I think one of the most interesting explorations in XR is a company called aNUma. They started by designing transcendental experiences for groups of people to have inside of headsets. Sort of group-guided meditation.

But meditation in a really interesting way. Because instead of imagining energy flowing through you, you can simply experience energy flowing through you, because you’ve replaced all of your visual and auditory inputs. And they started with a more general application. And now they are focused on helping people through end-of-life experiences and letting people be together in these meaningful ways with their loved ones when they might not be physically able to otherwise. Obviously, that’s only relevant to a small portion of people. But I think that is the most powerful experience I’ve had the privilege of engaging with in XR.

Beyond that, a lot of the more artistic or more emotionally driven XR experiences are really interesting to me. You retain a lot more from an experience that completely replaces your senses than you do from almost any other kind of experience. Combining that with something of high emotion is really powerful.

There’s the artist I already mentioned, Grace Boyle. She has mostly site-specific pieces. You’d have to see what she’s up to and where it’s being shown at any particular time. There’s another artist named Wyatt Roy who does a lot with reality capture, both for memory and to explore the living places and working places of himself and of other artists, that I find really compelling.

And then, yeah, I think I’ll shout out two slightly older applications. One is called Notes on Blindness. This is, I think, originally from maybe 2017 or even earlier. It explores the diaries of a man who went blind in adulthood and visualizes what it feels like to slowly lose your sight in interesting ways. And then, finally, an experience called Goliath, which tries to create empathy for the experience of having schizophrenia. Emotionally impactful subject matter, but I think conveyed in really, really interesting and compelling ways.

TK: Wow. All of those are really wonderful applications of XR. And I’m sure a lot of folks, myself included, have never thought about applying XR to empathize with those that are losing their sight or to be with their loved ones towards end-of-life.

Given that XR has so many broad applications, on a more practical level, what advice would you have for companies that are looking to gain XR competency and integrate XR or MR into their own product offerings?

JF: Yeah. I would say to really consider what having more context about the space around you gets you, if you’re exploring AR or MR, mixed reality, which I think is the direction most people would probably be looking in. If you’re no longer constrained to your black rectangle, what is it about that environment that you would want to really integrate with? Because, ultimately, I think the question of designing for XR is not that different from creating architectural designs, which is part of why I’m so excited about this change in the world. But it does require more ambient thinking and more consideration of what might be around the person when they’re engaging with your product.

And so, I suggest making mockups in real life. Use some paper and tape and try to put interface elements physically around a person and see how that feels. See in what ways you would want to touch and interact. And, also, what do you need to know about? There’s a lamp sitting on the desk in front of me right now. Is it useful to your service to know that there’s a lamp there? What would you want to do with that lamp? Is it reasonable to collect information about it? Could you connect to it? There are all of these spidery ecosystem questions that arise when you’re thinking about integrating with the world. And, also, frustrations that might naturally happen for people.
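A digital version of that paper-and-tape exercise can be as small as a layout helper that spreads panels on an arc around the wearer. This one is hypothetical, but the coordinate convention, meters, Y up, negative Z forward, matches what WebXR and Three.js use, so the output could be fed straight into object positions in a scene.

```javascript
// Hypothetical helper: lay `count` interface panels out on an arc in front
// of the wearer, at arm's-length radius and roughly eye height.
function layoutOnArc(count, { radius = 1.0, spreadDeg = 90, height = 1.4 } = {}) {
  const spread = (spreadDeg * Math.PI) / 180;
  return Array.from({ length: count }, (_, i) => {
    // t runs 0..1 across the panels; center the arc on the forward (-Z) axis.
    const t = count === 1 ? 0.5 : i / (count - 1);
    const angle = (t - 0.5) * spread;
    return {
      x: radius * Math.sin(angle),
      y: height,
      z: -radius * Math.cos(angle),
    };
  });
}

// Three panels, a meter away, spread over 90 degrees; the middle one sits
// directly ahead at (0, 1.4, -1).
console.log(layoutOnArc(3));
```

Tweaking `radius` and `spreadDeg` and re-running is the code analogue of moving the taped-up paper around the person, which is exactly the kind of question a spatial prototype should answer cheaply.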

There’s not a good definition of what’s in scope and what’s out of scope for these interactions. And so, defining a couple of really high-value ones, and then figuring out how to communicate them to your users, I think makes for much more satisfying XR experiences than just putting a hat on something, or just taking your buttons and putting them into space in front of you. Not that that can’t be fun. But if you actually want to invest in this space, thinking about that holistic understanding of your surroundings is really important.

TK: For me, the image of just putting a hat on something brings to mind that wonderful scene from the show Silicon Valley where that company is trying to poach Richard with their super-secret XR mustache technology. Definitely worth the watch if you haven’t seen it.

But, Jenna, I just want to say thank you so much for taking time to talk to us about your experience at IDEO and your work with XR. If folks are interested in learning more about the work that you and IDEO do, what’s a good way to connect?

JF: Sure. I mean, first of all, our website is ideo.com. We have a contact form. You’re certainly welcome to reach out there. I think the easiest way to reach me and my colleagues who are interested in emerging technology is to email ai@ideo.com. We have, of course, chosen the very shortest possible email for that one.

You will be able to talk to us about more things than just large language models and image generators. But we’re happy to talk about those as well.

TK: Awesome. Well, Jenna, thank you so much for your time and for coming on Software Engineering Daily.

JF: Thanks for having me, Tyson.

Software Daily

Subscribe to Software Daily, a curated newsletter featuring the best and newest from the software engineering community.