Me, Myself, and AI Episode 201

Less Algorithm, More Application: Lyft’s Craig Martell


Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with BCG

Craig Martell says he won the career lottery. After studying logic, philosophy, political science, and political theory, he completed a Ph.D. in computer science and found his way to machine learning, a field he thoroughly enjoys. After spending time at LinkedIn and Dropbox, Craig headed to Lyft, where he runs the LyftML engineering team. He’s also an adjunct professor at Northeastern University in Seattle.

We kick off Season 2 of Me, Myself, and AI discussing a particular trend Craig has seen in the AI and machine learning space: As organizations depend more on technology-driven solutions to solve business problems, algorithms themselves are less important than how they fit into an overall engineering product pipeline and product development road map. Craig shares his thoughts about what this shift means for academic education and cross-functional collaboration in organizations, and the hosts pick his brain about how to combat unconscious bias.

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

To find out more about the movie Coded Bias, which Craig mentions during the interview, visit www.codedbias.com. To learn more about the work of MIT Media Lab researcher Joy Buolamwini, visit her page on the lab’s website.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: Are algorithms getting less important? As algorithms become commoditized, it may be less about the algorithm and more about the application. In our first episode of Season 2 of Me, Myself, and AI, we’ll talk with Craig Martell, head of machine learning at Lyft, about how Lyft uses artificial intelligence to improve its business.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: Today we’re talking with Craig Martell. Craig is the head of machine learning for Lyft. Thanks for joining us today, Craig.

Craig Martell: Thanks, Sam. I’m really happy to be here. These are pretty exciting topics.

Sam Ransbotham: So Craig, head of machine learning at Lyft — what exactly does that mean, and how did you get there?

Craig Martell: Let me start by saying I’m pretty sure I won the lottery in life, and here’s why: I started off doing political theory, academically, and I had this misspent youth where I gathered a collection of master’s degrees along the way to figuring out what I wanted to do. So I did philosophy, political science, political theory, logic … and I ended up doing a Ph.D. in computer science at Penn. I thought I was going to do testable philosophy. The closest thing to that was doing AI, and so I just did this out of love. I just find the entire process, and the goals and the techniques, absolutely fascinating.

Sam Ransbotham: All part of your master plan that all came together.

Craig Martell: Not at all; I fell into it.

Sam Ransbotham: So how did you end up then at Lyft?

Craig Martell: I was at LinkedIn for about six years. And then my wife got this phenomenal job at Amazon, and I wanted to stay married, so I followed her to Seattle. I worked for a year here at Dropbox, and then Lyft contacted me. I essentially jumped at the chance because the space is so fascinating. I love cars in general, which means I love transportation in general. And the idea of transforming how we do transportation is just a fascinating space. And then, in my prior life, I was a tenured computer science professor, which is still a big love of mine, so I’m an adjunct professor at Northeastern, just to make sure I keep my teaching skills up.

Shervin Khodabandeh: Craig, your strong humanities background — philosophy, political science, you mentioned logic, all of that — how did that play into your overall journey?

Craig Martell: So that’s really interesting. When I think about what AI is, I find the algorithms mathematically fascinating, but I find the use of the algorithms far more fascinating. Because, from a technical perspective, we’re finding correlations in extremely high-dimensional nonlinear spaces. It’s statistics at scale in some sense, right? We’re finding these correlations between A and B. And those algorithms are really interesting, and I’m still teaching those now and they’re fun. But what’s more interesting to me is, what do those correlations mean for the people? I think every AI model launched is a cognitive science test. We’re trying to model the way humans behave. Now, for automated driving, we’re modeling the way cars behave in some sense, but it’s really [that] we’re modeling the right human behavior, given these other cars driven by humans. So for me, the goals of AI — I look at them much more from the humanities perspective, although I can nerd out on the technical side as well.

Sam Ransbotham: Can you say a bit more about how Lyft organizes AI and ML teams?

Craig Martell: At Lyft, we have model builders throughout the whole company — we have a very large science org. We also have what are called ML SWEs — ML software engineers. I run a team called LyftML, and it consists of two major teams. One is Applied ML, where we leverage expertise in machine learning to tackle some really tough problems; the other is the ML platform, which speaks to my big interest in operational excellence around ML — making sure it’s effectively hitting business metrics.

Shervin Khodabandeh: What do you think — because, I think, Craig, you’re still teaching, right?

Craig Martell: Yeah, I adjunct teach at Northeastern University here in Seattle.

Shervin Khodabandeh: So what do you think your students should be asking that they’re not? Or maybe, stated another way, what would they be most surprised [about] when they enter the workforce and actually do AI in the real world?

Craig Martell: The algorithms themselves are becoming less important. I’m hesitant to use the word commoditized, but to some degree, they’re being commoditized, right? You could pick one of five or seven model families for a particular problem, or you could try them all. But what’s really happening, or what I think is the exciting thing happening, is how those models fit into a much larger engineering pipeline that allows you to measure and guarantee that you’re being effective against a business goal. And that has to do with the cleanliness of the data, making sure the data is there in a timely way … classic engineering things, like, are you returning your features at the right latency? So the actual model itself has shrunk from, say, 85% of the problem to 15% of the problem. And now 85% of the problem is the engineering and the operational excellence surrounding it. I think we’re at a point of inflection there.
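
To make Craig’s point concrete, here is a minimal sketch of the kind of operational guardrail he’s describing — checks on data completeness, freshness, and feature-serving latency that surround a model. The thresholds, feature names, and helper function below are hypothetical illustrations, not anything Lyft has described.

import time

# Hypothetical guardrails of the kind Craig describes: the model call itself is
# a small part; most of the work is checking that data is clean and timely.
MAX_FEATURE_AGE_SECONDS = 15 * 60   # features older than this are considered stale
MAX_SERVING_LATENCY_MS = 50         # latency budget for fetching features at request time
REQUIRED_FEATURES = ["pickup_region", "hour_of_day", "recent_ride_count"]  # illustrative names

def features_are_healthy(feature_row: dict, fetch_latency_ms: float) -> bool:
    """Return True only if the feature row is complete, fresh, and fast enough to use."""
    # Completeness: every feature the model expects must be present and non-null.
    if any(feature_row.get(name) is None for name in REQUIRED_FEATURES):
        return False
    # Freshness: reject feature rows computed too long ago (missing timestamp counts as stale).
    if time.time() - feature_row.get("computed_at", 0) > MAX_FEATURE_AGE_SECONDS:
        return False
    # Latency: if fetching the features blew the serving budget, fall back instead of predicting.
    return fetch_latency_ms <= MAX_SERVING_LATENCY_MS

In this framing, the model sits behind checks like these, with a cheap fallback for the cases where they fail — that is the 85% of engineering and operational work surrounding the 15% that is the model itself.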

Shervin Khodabandeh: So do you believe, with the advent of AutoML and these packaged tools, and your point that, over time, it’s less about the algo and more about the data and how you use it … do you think the curricula and the training and just the overall orientation of data scientists 10 years from now will be dramatically different? Should we teach them different things, different skills? Because it used to be that a lot was focused on creating the algorithms, trying different things, and I think you’re making the point that that’s plateauing. What does that mean in terms of the workforce of the future?

Craig Martell: Yeah, I think that’s great. I’m going to say some controversial things here, and I hope not to offend anybody.

Shervin Khodabandeh: That’s why I asked, so I hope that you will.

Craig Martell: So if you look back just five or 10 years, in order to deliver the kind of value that tech companies wanted to deliver, you needed a fleet of Ph.D.s, right? The technical ability to build those algorithms was extremely important. I think the point of inflection there was probably TensorFlow, 2015-ish, where it wasn’t commoditized — you still needed to think very hard about the algorithm — but actually getting the algorithm out the door became a lot easier. Now, there [are] plenty of frameworks for that.

I wonder — and this is a genuine open question — to what degree we’re going to need specialized machine learning/AI data science training going forward. I think CS undergrads, or engineering undergrads in general, are all going to graduate with two or three AI classes. And with those two or three AI classes, plus the right infrastructure in the company — the right way to gather features, the right way to specify your labeled data — if we have that ML platform in place, people with two or three strong classes are going to be able to deliver 70% of the models a company might need. Now, [for] the other 30%, I think you’re still going to need experts for a while. I do. I just don’t think you need them like you used to, where almost every expert had to have a Ph.D.

Shervin Khodabandeh: Yeah, I actually resonate with that, Sam. In an interesting way, it corroborates what we’ve been saying about what it takes to actually get impact at scale, which is the technical stuff gets you only so far, but ultimately, you’ve got to change the way it’s consumed, and you’ve got to change the way people work and the different modes of interaction between human and AI. I guess that’s a lot of the humanities and the philosophy and the political science and how the human works — more so than what the algo does.

Sam Ransbotham: Well, that’s a good redirection, too, because if we’re not careful, that conversation slips into the curriculum becoming mostly about DevOps. What Shervin’s pointing out is that while that’s certainly a component, there’s also process change and more, let’s say, business-oriented initiatives.

What other kinds of things are you trying to teach people? Or, what other kinds of things do you think executives should know? … Not everybody can know everything; it would be a bit overwhelming. Perhaps it would be ideal if everyone knew everything, but what exactly do different levels of managers need to know?

Craig Martell: I think the top decision maker needs to understand [the] dangers of a model going awry, and they need to understand the overall process — that you really need labeled data. There’s no magic here; they have to understand that. They have to understand that labeled data is expensive, and that getting the labels right and correctly sampling the distribution of the world that you want is extremely important. I believe they also have to understand the life cycle in general, which is different from the two-week-sprint, close-these-Jira-tickets cadence: Data-gathering is extremely important, and it could take a quarter or two. And the first model you ship probably isn’t going to be very good, because it was built from a small labeled data set, and now you’re gathering data in the wild. So there’s a life cycle piece they need to understand, and they need to understand that, unfortunately, in a lot of ways — maybe not for car driving, but for recommendations — the first couple of models that you ship only get iteratively better. I think that’s extremely important for the top.

I think for a couple levels down, they need to understand the precision/recall trade-off: the kinds of errors your model can make. Your model can either be making false-negative errors or false-positive errors, and I think it’s extremely important as a product person that you own that choice. So if we’re doing document search, I think you care a lot more about false positives; you care a lot more about precision. You want the things that come to the top to be relevant. And for most search problems, you don’t have to get all the relevant things; you just have to get enough of the relevant things. So if some relevant things are called nonrelevant, you’re OK with that, right? But for other problems, you need to get everything.
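
Craig’s precision/recall trade-off reduces to simple arithmetic over a confusion matrix. The following is a generic illustration with made-up counts, not numbers from Lyft or Dropbox.

# Toy confusion-matrix counts for a document-search ranker (made-up numbers).
true_positives = 40    # relevant documents we returned
false_positives = 10   # irrelevant documents we returned
false_negatives = 25   # relevant documents we missed

# Precision: of what we returned, how much was actually relevant?
precision = true_positives / (true_positives + false_positives)   # 0.80

# Recall: of everything relevant, how much did we manage to return?
recall = true_positives / (true_positives + false_negatives)      # ~0.62

print(f"precision={precision:.2f}, recall={recall:.2f}")

For search, high precision with modest recall is usually acceptable, exactly as Craig says; for pedestrian detection you would need to drive both numbers up, which is a very different operating point.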

Sam Ransbotham: Document search, that’s fine — but what about Lyft? … Put it in the context of one of these companies where you’ve had a precision and recall trade-off — a false positive versus a false negative.

Craig Martell: Luckily, at Lyft we have nice human escape hatches, which I think is extremely important. All these recommendations ideally should have a human escape hatch. So if I recommend a destination for you, and that destination is wrong, it’s —

Sam Ransbotham: It’s OK.

Craig Martell: No harm, no foul — you just type the destination in. So for Lyft as a product, I think we’re pretty lucky because most of our recommendations — which are trying to lower friction to get you to take a ride — it’s OK if we don’t get them exactly right. There’s no real danger there. Self-driving cars, that one’s tough, because you want to get them both. You want to know that’s a pedestrian and you also want to make sure you don’t miss any pedestrians.

Sam Ransbotham: And the idea of putting a human in the loop there is much more problematic than just saying, “All right, here are some destinations; which one do you like?”

Craig Martell: Right.

Sam Ransbotham: Yeah.

Shervin Khodabandeh: Craig, earlier you talked about how AI in real life is a bunch of cognitive science experiments, because it’s ultimately about —

Craig Martell: For me, at least.

Shervin Khodabandeh: Yeah. And it brought up the idea of unconscious bias. We as humans have become a lot more aware of our unconscious biases across everything, right? Because they’ve been ingrained through generations and stereotypes, etc.

Craig Martell: And just our past experience, right? Like, a biased world creates a biased experience, even if you have the best possible intentions.

Shervin Khodabandeh: Exactly — right? And so I guess my question is, clearly there is unintended bias in AI — [there] has to be. What do you think we need to think about now, so that 10, 20 years from now, that bias hasn’t become so ingrained in how AI works that it would be so hard to then course-correct?

Craig Martell: It already has. So the question is, how do we course-correct? Let me start by saying, I was on a panel for Northeastern about this movie [called] Coded Bias. If you haven’t seen the movie Coded Bias, you should absolutely see it. It’s about a Black woman, a graduate researcher at the MIT Media Lab, who tried to do a project that didn’t work because facial recognition simply didn’t work for Black females. It’s an absolutely fascinating social study. The data set that was used to train the machine learning — the facial recognition algorithm — was gathered by the researchers at the time, and the researchers at the time were a bunch of white males. And this is a known issue, right? There’s a skew in the way the data set is gathered. Look, there’s a similar skew in all psychological studies; psychological studies don’t apply to me — I’m 56. Psychological studies apply to college students because they’re the readily available subjects.

So these were the readily available people because of the biased world, and so that’s how the data set came about. So even if [there was] no ill intention, the world was skewed, the world was biased, the data was biased; it didn’t work for a great number of people. Not a lot of females were part of the training set. And then the darker your skin, the worse it got. And there’s all kinds of technical reasons why: darker skin has less contrast, blah, blah, blah. But that’s not the issue. The issue is, should we have gathered the data that way? What is the goal of the data set? Who are our customers? Who do we want to serve? And let’s sample the data in such a way that it’s serving our customers.

We talked about this earlier, with the undergrads. I think that’s really important. One way to get out of that is diversity in the workplace. I believe this so strongly. And you ask everybody, all of these diverse groups, to test the system and to see if the system works for them. When we did image search at Dropbox, we asked all of the employee resource groups, “Please search for things that in the past have been problematic for you and see if we got them right.” And if we found some that were wrong, we would go back and regather data to mitigate those issues. So look: Your system is going to be biased by the data that’s gathered — fact. It’s just a fact: It’s going to be biased by the data that’s gathered. You want to do your best to gather it correctly. You’re probably not going to gather it correctly, because you have your own unconscious bias, as you point out. So you have to ask all the people who are going to be your customers to try it, to bang on it, to make sure it’s doing the right thing, and when it’s not, go back and gather the data necessary to fix it. So I think the short answer is diversity in the workplace.
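
One concrete way to run the kind of check Craig describes — asking whether the system works for every group it serves, not just on average — is to break evaluation error rates out by group rather than reporting a single aggregate. The sketch below is a generic illustration; the field names and the idea of an opt-in test panel are assumptions, not Dropbox’s or Lyft’s actual process.

from collections import defaultdict

def error_rate_by_group(examples):
    """Compute per-group error rates from labeled evaluation examples.

    Each example is a dict with a 'group' key (e.g., a self-identified group
    from an opt-in test panel), the true 'label', and the model's 'prediction'.
    The field names here are purely illustrative.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for ex in examples:
        totals[ex["group"]] += 1
        if ex["prediction"] != ex["label"]:
            errors[ex["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

A large gap between groups — rather than a single aggregate accuracy number — is the signal to go back and regather data, which is exactly the loop Craig describes.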

Sam Ransbotham: Craig, thanks for taking the time to talk with us today — lots of interesting things —

Craig Martell: Yeah, my pleasure, these are really fun conversations. I’m pretty nerdy about this, so I enjoyed it very much.

Sam Ransbotham: Your enthusiasm shows.

Shervin Khodabandeh: Really insightful stuff. Thank you.

Craig Martell: Thank you, guys.

Sam Ransbotham: Well, Shervin, Craig says he won the lottery in his career, but I think we won the lottery in getting him as a guest for our first episode of Season 2. Let’s recap.

Shervin Khodabandeh: He made a lot of good points. Clearly, there’s the commoditization of algorithms over time, and how it’s more and more going to be about tying AI to strategy, going back to key business metrics, making change happen, driving usage. … I really liked his point on what it takes to get the bias out of the system and how bias is already in the system.

Sam Ransbotham: The commoditization is particularly important. I think it resonates with us, because we’re talking about this from a business perspective. What he’s saying is that a lot of this is going to become, increasingly, a business problem. When it’s a business problem, it’s not a technical problem. I don’t want to discount the technical aspects of it, and certainly he brings plenty of technical chops to the table. But he really reinforced the “this is a business problem now” aspect.

Shervin Khodabandeh: Yeah, in five minutes, he basically provided such a cogent argument for our last two reports — the 2019, the 2020.

Sam Ransbotham: Exactly.

Shervin Khodabandeh: It’s about strategy and process change and process redesign and reengineering, and it’s about human and AI interaction and adoption.

Sam Ransbotham: And what’s also a business problem is the managerial choice. He came back to that as well. He was talking about how some of these things are not clear-cut decisions. There’s a choice about which way you make a mistake. That’s a management problem, not a technical problem.

Shervin Khodabandeh: And it also requires managers to know what they’re talking about, which means they need to really understand what AI is saying and what it could be saying, and what [are] its limitations, and what’s the art of the possible. I also really like the point that as you get closer to the developers and the builders of AI, you have to really understand the math and the code, because otherwise you can’t guide them.

Sam Ransbotham: Although, don’t you worry that we’re just running into this thing where everyone has to understand everything? I feel that’s a tough sell. If the managers have to understand the business and how to make money, and they have to understand the code. … I mean, having everyone understand everything is obviously important —

Shervin Khodabandeh: Well, I guess the question is, how much of everything do you have to understand? A good business executive already understands everything to the level that he or she should, to the point of being able to ask the right questions. I think you’re right. But isn’t this what Einstein said — that you don’t really understand something unless you can describe it to a 5-year-old? You can describe gravity to a 5-year-old and to a 20-year-old and to a grad student in different ways, and they will all understand it. The point is that you at least understand it, rather than saying, “I have no idea there is such a thing as gravity.”

Sam Ransbotham: So basically, teaching and academics are really important. Is that what Shervin has just gone on the record as saying?

Shervin Khodabandeh: I think the idea that managers and senior executives need to understand AI itself is not a slam dunk, because you’re raising the right question: What is the right level of understanding? What is the right level of synthesis and articulation that allows you to make the right decisions without having to know everything? But isn’t that what a successful business executive does with every business problem? And I think that’s what we’re saying: that with AI, you need to know enough to be able to probe. But, suffice it to say, it’s not a black box the way a lot of technology implementations have been in the past.

Sam Ransbotham: And that helps get back to the whole “learning more” and “where to draw the line” and helps to understand that balance. After the discussion of gravity, each one of those people would understand more about gravity than they did before, and so it’s a matter of moving from current state to next state.

Shervin Khodabandeh: Yeah.

Sam Ransbotham: Craig made some good points about diversity in the workplace. If the team gathering data isn’t hyperaware of the inherent biases in their data sets, algorithms are destined to produce a biased result. He refers to the movie Coded Bias and the MIT Media Lab researcher Joy Buolamwini. Joy is the founder of the Algorithmic Justice League. We’ll provide some links in the show notes, where you can read more about Joy and her research.

Thanks for joining us today. We’re looking forward to the next episode, when we’ll talk with Will Grannis, who has the unique challenge of building the CTO function at Google Cloud. Until next time.

Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.

Comments (3)
Udaya kumar V
Craig's comments on algorithm bias are quite relevant. I look forward to future podcasts focusing on how to reduce this bias.
Vasant Pawar
It was a good introduction for senior executives regarding the importance of AI in management. The webinar attempted to remove the fear of AI. Practical examples of AI application in day-to-day management issues (particularly in healthcare setup) will be more helpful.
PS Tan
Very interesting podcast - domain knowledge in the modelling aspect is definitely very important. Indeed, commoditisation of AI algorithms will be good for many, and provides better scalability in developing more "intelligent" systems
