Me, Myself, and AI Episode 806

AI on Mars: NASA’s Vandi Verma

Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with BCG

When Vandi Verma saw the Spirit and Opportunity rovers land on Mars while she was working toward a Ph.D. in robotics, it set her on a path toward working at NASA in space exploration. Perhaps unsurprisingly, today, as chief engineer for robotic operations at NASA’s Jet Propulsion Laboratory (JPL), Vandi sees the biggest opportunities for artificial intelligence in robotics and automation.

On this episode of the Me, Myself, and AI podcast, she describes the ways in which the Mars rovers rely on AI, including the technology’s use in digital twin simulations that enable JPL scientists to practice their driving skills before actually controlling the rovers on Mars. She also discusses with hosts Shervin Khodabandeh and Sam Ransbotham how NASA’s use of AI — and its approach to risk — offer lessons for organizations that are looking to simulate real-world scenarios here on Earth.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Shervin Khodabandeh: What can we learn from the use of AI on Mars? Find out on today’s episode.

Vandi Verma: I’m Vandi Verma from NASA JPL, and you’re listening to Me, Myself, and AI.

Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: Hey, everyone, welcome. Today, Shervin and I are honestly crazy excited to be talking with Vandi Verma, the chief engineer at NASA’s Jet Propulsion Laboratory. It’s really cool stuff. Vandi, from the sneak preview that we had, we’re really excited to have you on the show. Thanks for taking the time to talk with us.

Vandi Verma: Thank you for having me here.

Sam Ransbotham: I admit I’m really geekily fascinated by your job — I’m sure everyone is — but let’s clue in everybody on what you do. Can you start by giving an overview of JPL in general and your particular role?

Vandi Verma: I am the deputy manager for the Mobility and Robotic Systems section at NASA’s Jet Propulsion Laboratory, and I’m also working as the chief engineer for the Mars 2020 mission, which consists of the Perseverance rover and the Ingenuity helicopter.

JPL is a NASA center that specializes in building robots for space exploration. And NASA’s mission is to explore, discover, and expand knowledge for the benefit of humanity, and what we do is the robotics aspect of that.

Shervin Khodabandeh: When you say “robotics,” I think about artificial intelligence, but Mars missions seem like a very challenging place for new technologies like AI. How are you and JPL using AI in what you’re doing with robotics?

Vandi Verma: Right. What we call AI has shifted over time, and there are things that we do on the ground and things that we do onboard our robots, so I’m going to touch on some of those. In general, we are more on the side of autonomous capability — closer to what you might think of as what self-driving cars use — and not a lot of it is classically machine learning per se, although we use machine learning to inform a lot of our work.

In fact, with Perseverance, 88% of the driving that we’ve done is autonomous driving. And so the rover has cameras: It’s taking the images; it’s detecting the terrain and figuring out what’s hazardous and navigating around obstacles. And it’s actually quite interesting because it’s driving on terrain that no human has ever seen, so we can’t even give it that kind of information. So that is definitely a form of autonomous navigation.
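
As a rough illustration of what “detecting the terrain and navigating around obstacles” can look like in code, here is a minimal sketch in Python: build a hazard map from a terrain model, then keep only the candidate paths that avoid hazardous cells. This is not JPL’s flight software; every function name, threshold, and data structure below is hypothetical. The onboard system works from the rover’s camera images, as Verma describes; the sketch stands in for that with a synthetic height map.

```python
# Minimal, hypothetical sketch of hazard-aware driving: flag hazardous terrain
# cells, then filter candidate paths so none of them cross a hazard.
import numpy as np

def hazard_map(heights: np.ndarray, cell_size_m: float = 0.25,
               max_slope_deg: float = 25.0, max_step_m: float = 0.3) -> np.ndarray:
    """Mark cells as hazardous based on local slope and step height."""
    dz_y, dz_x = np.gradient(heights, cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_x, dz_y)))
    step = np.abs(heights - np.median(heights))   # crude roughness proxy
    return (slope_deg > max_slope_deg) | (step > max_step_m)

def safe_paths(candidate_paths, hazards: np.ndarray):
    """Return candidate paths (lists of (row, col) cells) that avoid hazards."""
    return [p for p in candidate_paths if not any(hazards[r, c] for r, c in p)]

# Example: a synthetic 20 m x 20 m patch of terrain with a bump the rover should avoid.
terrain = np.zeros((80, 80))
terrain[30:40, 30:40] = 1.0                       # a 1 m obstacle
paths = [[(i, 35) for i in range(80)],            # straight through the bump
         [(i, 10) for i in range(80)]]            # detour around it
print(len(safe_paths(paths, hazard_map(terrain))))  # -> 1 (only the detour survives)
```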

We also use AI at the end of drives. We’re trying to make a lot of progress, because we’re in this really harsh environment, and we have a mission to collect and cache a certain number of samples with Perseverance, because for the first time, we are actually going to bring them back to Earth. But we want them to be from as distinct places as possible, so we want to do a lot of driving. If you stop all the time, you’re not going to make as much progress. But who knows if there’s something really exciting along the way that we’re just going to miss? In our world, we call it the dinosaur bones.

We have AI capabilities on the rover where it’ll take a wide-angle image, look at a large swath of terrain, and then try to figure out what is the most interesting feature in there. We have a whole slew of instruments, but one of the instruments is the SuperCam instrument, which does a lot. It has a laser, and from a distance, you can shoot a laser at a rock, and it creates a plasma, and we study that with a telescopic lens. That is such a narrow field of view — you know, a milliradian — and so if you were to try and do that to the whole view you see, you’d spend days there.

And so essentially, we use the AI to figure out “What’s the most interesting thing that we should zap?” And then you can send the data back and tell the scientists on Earth. That’s been very valuable as well. So we do that.

And then, you know, there’s planning. There are a lot of resources we manage because, on Mars, when you have a spacecraft, the environment is harsh. So we’re thinking about “How do you heat things and keep them at the right temperature? How much power do we have?” You need to communicate with Earth; where’s Earth? We also have planning onboard, which thinks about things more in terms of the bigger picture. All of those are examples of what we do.
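
The onboard planning Verma mentions weighs limited resources such as heat, power, and communication windows. A toy sketch of that flavor of resource-aware planning, with entirely invented activity names, costs, and budgets:

```python
# Hypothetical sketch: choose which activities fit in a sol given a limited
# energy budget, while always keeping the communication pass with Earth.
def plan_sol(activities, energy_budget_wh, comm_window_required=True):
    """Greedy selection by science value per watt-hour, always keeping the comm pass."""
    plan, remaining = [], energy_budget_wh
    if comm_window_required:
        plan.append("earth_comm_pass")
        remaining -= 150                           # invented cost in Wh
    for name, cost_wh, value in sorted(activities, key=lambda a: a[2] / a[1], reverse=True):
        if cost_wh <= remaining:
            plan.append(name)
            remaining -= cost_wh
    return plan, remaining

activities = [("drive_50m", 400, 8.0), ("supercam_raster", 120, 6.0), ("heat_arm", 80, 2.0)]
print(plan_sol(activities, energy_budget_wh=700))
```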

Sam Ransbotham: That’s a ton of examples there. And the fact that you’re predominantly driving autonomously — it seems like a fascinating world. You mentioned finding something interesting. What is the objective function there? How do you figure out that something is interesting? I know what I think is interesting, but tell me about that process of having a machine figure out what’s interesting.

Vandi Verma: I think one of the most interesting things about defining what’s interesting is that it puts it on the humans. We actually have a really hard time telling machines what we want them to do, right? In order for us to tell what’s interesting, we have a lot of different parameters that scientists can use to specify “I am looking for light-toned rocks of a particular size, of a particular albedo and shape, that are interesting in this area.” And we can change that. So we have these different templates, depending on the terrain we are in, that scientists on the ground help us determine. We send that to the robot to say, “We’re looking for this kind of thing.”

We have done some research as well where we tell it, “You now track all of the things we have seen” — it’s called novelty detection, which we haven’t actually deployed yet — but “Find what we haven’t already looked at.” That’s another one.

But there are two things in here. When we’re doing exploration, we’re looking for things that are new, but we also try to characterize things we have seen with multiple different instruments, because we are trying to collect a statistically significant amount of data for the hypothesis we have. We’re trying to figure out “Could life have existed on Mars and, especially, ancient life?”

And so that puzzle … There are hypotheses, and you’re trying to answer specific questions, and that’s what the scientists then will tell the robot that they’re interested in. We’ve actually used supercomputers to translate that into parameters that we can then uplink to the robot.
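
A toy sketch of the two ideas described here, with entirely hypothetical fields, thresholds, and weights: scoring candidate rocks against a scientist-supplied template (light-toned, roughly a certain size and albedo), and flagging candidates that look unlike anything seen before, in the spirit of the novelty-detection research Verma mentions:

```python
# Hypothetical sketch of template-based target scoring plus a novelty check.
from dataclasses import dataclass
import numpy as np

@dataclass
class Candidate:
    albedo: float         # 0 (dark) .. 1 (bright)
    size_cm: float
    features: np.ndarray  # some descriptor vector extracted from imagery

def template_score(c: Candidate, min_albedo=0.6, size_range=(5.0, 30.0)) -> float:
    """Higher is better; zero if the candidate falls outside the template."""
    if c.albedo < min_albedo or not (size_range[0] <= c.size_cm <= size_range[1]):
        return 0.0
    return c.albedo  # e.g., prefer the brightest rock that fits the template

def is_novel(c: Candidate, seen: list, threshold: float = 0.5) -> bool:
    """Novel if its descriptor is far from everything observed so far."""
    if not seen:
        return True
    return min(np.linalg.norm(c.features - s) for s in seen) > threshold

candidates = [Candidate(0.8, 12.0, np.array([0.9, 0.1])),
              Candidate(0.3, 50.0, np.array([0.2, 0.2]))]
best = max(candidates, key=template_score)
print(best.albedo, is_novel(best, seen=[np.array([0.1, 0.1])]))  # -> 0.8 True
```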

Sam Ransbotham: So the people kind of describe in rough terms what they want, and then you’ve got some supercomputer, something here on Earth, trying to translate that into a set of parameters that you then send to the rover to figure out what to look for. Did I understand that correctly?

Vandi Verma: That’s right. And this is, I think, an area where AI can help a lot because we are still in that phase in robotics in a lot of areas where we have a lot of knobs. We can do a lot, but the art is in tuning this multivariable space. In fact, you know, just on Perseverance — we call them parameters in software, [and] this isn’t even taking into account hardware design and other things — we have over 64,000 explicit parameters. These are saved in nonvolatile memory. This is not even taking into account the arguments to commands you can send. So there’s just so many ways in which you can express what you have to say, and that’s where we can use a lot of capability to know what the right combination is for what we intended to do.

Sam Ransbotham: Yeah, the combinatorics on something like that just seem like they would explode, so it seems like a great application for machine learning: figuring out the right set of parameters, or the next parameters to choose, when you have that many to choose from. Like you said, you can’t laser the entire surface of Mars. Well, you also can’t explore 64,000 parameters at the same time.
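
A hedged sketch of what searching such a parameter space in simulation might look like. This is only random search over a made-up two-parameter slice with a synthetic objective; in practice the space is vastly larger and each evaluation far more expensive:

```python
# Hypothetical sketch: evaluate candidate parameter sets in a stand-in
# "simulator" and keep the best-scoring combination.
import random

def simulate_drive(params: dict) -> float:
    """Stand-in for a simulator run; returns a score (higher is better)."""
    # Entirely synthetic objective: prefer moderate speed and a mid-range hazard threshold.
    return -(params["max_speed_cm_s"] - 4.0) ** 2 - (params["hazard_threshold"] - 0.5) ** 2

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"max_speed_cm_s": rng.uniform(1.0, 10.0),
                  "hazard_threshold": rng.uniform(0.1, 0.9)}
        score = simulate_drive(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```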

Vandi Verma: Yeah, you’re absolutely right. And yet the challenge and the beauty — what makes it such a fun environment to be in — is that the margin for error is very low, so you cannot experiment when it is so hard to get a spacecraft to successfully land on Mars. It’s a national asset. So we say, “You’re not being meek,” and yet you are doing all the checks you can to ensure that it’ll succeed. You cannot put the vehicle at risk.

Sam Ransbotham: Mm-hmm. Most of the people listening obviously are not going to be exploring Mars, but when we think about the analogies you could make, people are deciding right now about risk portfolios, about how much they turn over to a machine to, in your case, decide novelty or decide where to drive. Other people are making the same sort of risk decisions. Now, it seems like you have an extremely low tolerance for risk, given the asset and where it is. But I feel like other people, with artificial intelligence and new technologies, have to be making similar risk decisions as well.

Vandi Verma: I think you’re absolutely right. In fact, in some ways you might think we have a very low risk tolerance, but we have to make those decisions so frequently that we would do nothing and not move at all if we actually were very risk-averse. Having a process to evaluate risk, and knowing where that threshold is for a particular situation, is something that everybody on the team learns to do with whatever job they’re doing. So I think it’s something that would carry over well into other areas.

Shervin Khodabandeh: Coming back to something you said earlier, when you talked about autonomous driving: You really can’t practice driving in a place you’ve never been before, so how do you practice before you get there?

Vandi Verma: There are two elements to that. One is, how do we have the autonomous capability practice, and then how do we have the humans, who still at some level need to instruct the autonomous capability, practice? So we do both of those. In terms of building robots for a planetary body — which is so different, right? The gravity on Mars is different, the pressure, the temperature, all these things — we create simulations. Some of the software that’s running onboard Perseverance, I helped program, and, really from the beginning, we develop software simulations because we may not actually have the full Earth replica. We create a full-scale model on Earth to test, but that’s also evolving in the early stages of the mission. So we’re building hardware, which they are also experimenting with — “What’s the best wheel design? What’s the best material?” — as we are writing the software.

And there’s a lot of thought that goes into “How do you build these simulations so they are helping us represent the environment we’re in correctly?” But then we also start to peel away certain hardware interfaces. So we’ll have the real flight software running on more sort of commercial interface robotic parts but in our Mars Yard. We have a Mars Yard. It is not Mars, but we try to have slopes and bedrock and other characteristics. And then we build the full replica running the actual computing we’re going to have on Mars with the sensors, and we test it. And after that, we do specific tests. So we’ll have a thermal vacuum chamber test for certain parts, and we do it in bits and pieces.
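
One way to read the staged testing Verma describes is as the same navigation logic running against progressively more realistic back ends: a pure software simulation, testbed hardware in the Mars Yard, and eventually the vehicle on Mars. The sketch below, with invented class names, illustrates only that swap-the-back-end pattern, not JPL’s actual architecture:

```python
# Hypothetical sketch: the same navigation step runs unchanged against a
# simulated rover or a Mars Yard testbed, behind one common interface.
from abc import ABC, abstractmethod

class RoverInterface(ABC):
    @abstractmethod
    def capture_image(self): ...
    @abstractmethod
    def drive_arc(self, radius_m: float, length_m: float): ...

class SimulatedRover(RoverInterface):
    def capture_image(self):              # rendered from the terrain model
        return "synthetic image"
    def drive_arc(self, radius_m, length_m):
        print(f"[sim] arc r={radius_m} m, len={length_m} m")

class MarsYardRover(RoverInterface):
    def capture_image(self):              # real cameras, Earth terrain
        return "testbed image"
    def drive_arc(self, radius_m, length_m):
        print(f"[yard] commanding motors: r={radius_m} m, len={length_m} m")

def navigation_step(rover: RoverInterface):
    """The same navigation logic runs unchanged against any back end."""
    _image = rover.capture_image()
    rover.drive_arc(radius_m=5.0, length_m=2.0)

navigation_step(SimulatedRover())
navigation_step(MarsYardRover())
```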

For entry into the atmosphere, we do some tests with aircraft on Earth because we have to look at how we would land on Mars. But other than that, once we get to Mars, we do it in stages. So we might actually have the autonomous navigation tell us what it would do but not actually do the navigation.

We would have the human direct the drive, as we call it, while letting the autonomy shadow it and say, “Let’s see what you would have done.” And so we do it in stages.
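
A minimal sketch of this shadow-mode rollout: the human-directed drive is what actually executes, while the autonomous planner merely records what it would have done so the two can be compared afterward. The planner interface and divergence metric here are hypothetical:

```python
# Hypothetical sketch: let the autonomy "shadow" a human-directed drive and
# log how far its plan diverges, without ever letting it act.
import math

def divergence(human_path, autonomy_path):
    """Mean distance (m) between corresponding waypoints of the two plans."""
    pairs = zip(human_path, autonomy_path)
    return sum(math.dist(h, a) for h, a in pairs) / min(len(human_path), len(autonomy_path))

def shadow_drive(human_path, autonomy_planner, execute):
    shadow_path = autonomy_planner(start=human_path[0], goal=human_path[-1])
    execute(human_path)                   # only the human's plan is actually driven
    return {"divergence_m": divergence(human_path, shadow_path),
            "shadow_path": shadow_path}   # logged for engineers to review on Earth

# Toy usage: a "planner" that just draws a straight line between start and goal.
planner = lambda start, goal: [start, goal]
report = shadow_drive([(0, 0), (2, 1), (5, 3)], planner, execute=lambda p: None)
print(round(report["divergence_m"], 2))
```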

We do want to progress very quickly because if you do that for too long … it’s valuable time on Mars. So that’s sort of how we’ve rolled out the autonomous capability. Now, in terms of humans, I have been driving robots on Mars for multiple different missions since 2008. You start to get to know Mars, and it takes time. So we’ve been shortening the time. We have trainees, we have classroom sessions, so we take drives from Mars and the data, and we have them plan offline. And then we have shadows. So most of the drives now, I actually have someone else I’m training on the keyboard, and you’re sort of watching them as you train them to be a pilot. So we do that, and that actually still takes years.

Some of us who helped build the robot will start on Sol 0, which is the day we land on Mars. And then, very quickly, within half a year to a year, we start having the next set of people come in. Because if you look at missions, they can be on Mars for a very long time, so you have to have people trained to do that.

Sam Ransbotham: Actually, there are lots of interesting aspects of that in terms of things that other people are doing. But you mentioned simulating and building digital twins. You don’t want to practice on Mars; you want to practice on Earth, or you practice digitally, especially since, as you mentioned — which I hadn’t appreciated — the hardware doesn’t even exist yet to practice on; it’s being built simultaneously. But also, this idea that humans are learning, too, in the process: You wouldn’t turn anyone loose behind the wheel on the first day, and you wouldn’t turn the rover loose to drive on its own the very first day either. So that process of learning is interesting too.

I also thought it was fascinating. … You were talking about shortening the time — that as you get more experience, you can shorten that time. And as we have so many people in the world deploying artificial intelligence solutions to do different things, I’m guessing a lot of people watch them pretty carefully at first but then gradually trust them more and more. And that’s the same way I’m guessing that you work with the other person you were talking about at the keyboard — probably looking at them typing the first day but less time on the keyboard now. So I think there’s lots of analogies, even though Mars seems like a foreign environment, to how other people are using artificial intelligence as well.

Vandi Verma: Yeah, I think you’re absolutely right. One of the interesting things is, what is it that you can take from a completely different part of the planet, a completely different robot, that might actually have different mobility characteristics? But humans are able to extract patterns very well. So if you were a rover driver on one rover mission, you actually take less time, like you’re saying. But also part of it is, we’re becoming much more sophisticated in our user interfaces.

If you look at the interfaces we use to operate and drive robots — to operate the robotic arm and actually sample, which is in some ways even more complicated — they have also evolved significantly. We used to send instructions — literally command-line instructions, like you might do with a function call in a program. Now we do it very graphically, where you are essentially selecting waypoints on a map. I think that is also extremely helpful because we’ve started to let the humans focus on the aspects where human intuition, and the wealth of experience we accumulate, can be brought to bear on a problem. Because even though AI is getting really sophisticated, the capabilities we have are still limited by our imagination at the time we created them. We are very aware of this from having operated robots for decades on Mars. We always ask ourselves, “What is beyond our imagination?” Because it happens — it happens every single time. We are always surprised by these amazing things, and we end up using them in ways we hadn’t intended to. And that’s sort of like what you see all the time with the technology you might develop for various other Earth applications: What other things are people going to come up with and use it for?

Sam Ransbotham: People are crazy.

Vandi Verma: I mean, I think they’re innovative!

Sam Ransbotham: Right. And that’s really what you want, because you’re not just trying to do the same thing over and over again.

You mentioned the word surprise, which I thought was interesting. One of the things that we talked about was that you do all these simulations and you want stuff to work, but you don’t want it to work exactly perfectly, because you’re trying to discover something that you’re not expecting. So tell us a little bit about how that process works: “Hey, we want things to work like we want them to work, but we’re also open to things happening that we weren’t expecting.”

Vandi Verma: That’s a very good point you make … that the simulation is not going to be exactly how things execute. And, in fact, it almost never is. And partly, the reason we’re driving autonomously is that the detailed surface information — the imagery that the rover is going to take from its Mars cameras — we cannot simulate it precisely enough. And so any path that we simulate on the ground is sampling the terrain. You know, we have an abstraction; we have an orbital map. But it’s doing it at a very coarse level. And if we already had that detailed map, we wouldn’t even need autonomous navigation. We would literally just script it to drive.

As soon as it drives 5 [or] 10 meters, it has far more information about the environment than we had before we sent this command. So at that point, it is far more capable of making decisions and doing the right thing than anything we could [do]. So we have to learn to not over-constrain it. And this is actually one of the things that’s really hard to teach new people: You’ve perfected it in your simulation, but you have to anticipate where your simulation is actually a simulation. It is not reality. And if you don’t leave it enough room to maneuver, you’re actually going to have it fail miserably.

So we have these things we call “keep-in boxes,” where, for the autonomous capability, we want humans to say, “I have some insight, and I want you to stay within this area.” It can be a hundred meters, right? A really large area. So we create these leashes to constrain the behavior, but there’s an art in how long you make the leash.
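
A small sketch of the keep-in-box idea: the human supplies a bounding region, and an autonomously planned path is accepted only if every waypoint stays inside it. The axis-aligned box below is a simplification; the real constraint machinery is far richer:

```python
# Hypothetical sketch: validate an autonomously planned path against a
# human-specified keep-in box (the "leash").
from dataclasses import dataclass

@dataclass
class KeepInBox:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def validate_plan(waypoints, box: KeepInBox):
    """Reject any autonomously planned path that would leave the keep-in box."""
    violations = [wp for wp in waypoints if not box.contains(*wp)]
    return (len(violations) == 0, violations)

# A generous 100 m x 100 m "leash": plenty of room to maneuver, but bounded.
box = KeepInBox(0.0, 100.0, 0.0, 100.0)
ok, bad = validate_plan([(10, 10), (60, 40), (120, 50)], box)
print(ok, bad)   # -> False [(120, 50)]: the last waypoint strays off the leash
```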

Shervin Khodabandeh: Vandi, this has been a really fascinating discussion. Can you also share a bit about how you ended up in this role?

Vandi Verma: I remember watching the Mars Exploration Rovers land. I was in graduate school, doing my Ph.D. in robotics at the time, because I had already taken a class — a programming class where we were programming mobile robots. And it was just so much fun that I think I spent all my spare time on the competition we had at the end of the class, where we had to have these robots navigate a maze.

And it was just fascinating to me that you could apply the theory to an actual machine and see it do something in the environment. I’d been working with AI, actually; my master’s was in AI, and it was fascinating. But there’s something so satisfying about a robot that you can actually see operating in the physical world. And I love space exploration; the combination of space and robotics was just a perfect fit. And the rovers ended up lasting so long — the Mars Exploration Rovers mission was supposed to be 90 days — that by the time I graduated, they were still on Mars. I never thought that I would actually get to work on them, and I did.

And so I think that’s sort of how it came about: I was fascinated by it, and when I was at university, there were a lot of collaborations that NASA does with universities, because a huge part of NASA’s mission is education. So you can get exposed to this. You can work on problems that are interesting to NASA, and my thesis was very much aligned with that, and that’s how I got into it.

Sam Ransbotham: Very cool. You have some engineers to thank for the longevity of the mission that let you step in and do it.

We have a segment where we want to ask you some rapid-fire questions. Just answer the first thing that occurs to you as we do this.

What do you see as the biggest opportunity for artificial intelligence right now?

Vandi Verma: I think the biggest opportunity … I think it’s in robotics, actually.

Sam Ransbotham: Shockingly.

Vandi Verma: Yes.

Sam Ransbotham: OK. What’s the biggest misconception people have about artificial intelligence?

Vandi Verma: I think the biggest misconception they have is that it can’t extrapolate.

Sam Ransbotham: Hmm. So, what was the first career that you wanted?

Vandi Verma: I wanted to fly airplanes. My dad was a pilot. I wanted to be a bush pilot.

Sam Ransbotham: Well, since then you have gotten your pilot license, so you’ve achieved that.

Vandi Verma: I did, yes.

Sam Ransbotham: Do you think there are places that we’re trying too hard to make artificial intelligence fit a solution that it doesn’t fit in? And are we applying this tool in the wrong places anywhere?

Vandi Verma: I think that sometimes you could have said that about neural networks, at a certain stage. So I am actually a little bit shy to say, is it the wrong place? It depends on where your bar is to realize whether it’s worth it to do, given the technology at this stage. I think it just depends on your threshold and your horizon.

Sam Ransbotham: OK, that’s fair. What’s one thing you think that would be really nice if artificial intelligence could do right now that it currently is just not capable of? What’s the one thing you could change?

Vandi Verma: You know, one of the things is that we do have a huge, huge amount of data. And one of the limitations in applying it for some of the space explorations [is], you still need a lot of auditing of the tokens or what it extracts. So I think there’s still just a lot of tweaking. That is the challenge with it, I think. If you could get over that, I think that potential would be unleashed.

Sam Ransbotham: Great discussion. I’m guessing that, of course, none of our listeners are driving robots on Mars, but I think there’s lots of things that people can learn from the things that you have learned through this process. People may not be building digital twins for simulating Mars, but they are building digital twins for simulating processes on Earth. We’re all increasingly experiencing the world through these devices and through AI sensing. Even if we don’t work in the space context, I think we can learn a lot from what you and your team have learned. Thanks for taking the time to talk with us today.

Vandi Verma: Thank you so much for sharing a little bit of what we do with your audience.

Shervin Khodabandeh: Thanks for listening. Join us next time, when Sam and I meet Prem Natarajan, chief scientist and head of enterprise AI at Capital One. Please join us in the new year.

Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.
