Me, Myself, and AI Episode 202

Games, Teams, and Moonshots: Google Cloud’s Will Grannis

Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with

BCG

Will Grannis discovered his love for technology playing Tron and Oregon Trail as a child. After attending West Point and The Wharton School at the University of Pennsylvania, he translated his passion for game theory into an aptitude for solving problems for companies, a central component of his role as founder and leader of the Office of the CTO at Google Cloud. Will leads a team of customer-facing technology leaders who, while tasked with bringing machine learning solutions to market, approach their projects with a user-first mindset, ensuring that they first identify the problem to be solved.

In Season 2, Episode 2 of Me, Myself, and AI, Will makes it clear that great ideas don’t only come from the obvious subject-matter experts in the room; diverse perspectives, coupled with a codified approach to innovation, lead to the best ideas. The collaboration principles and processes Google Cloud relies on can be applied at other organizations across industries.

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Shervin Khodabandeh: Can you get to the moon without first getting to your own roof? This will be the topic of our conversation with Will Grannis, Google Cloud CTO.

Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: We’re talking with Will Grannis today; he’s the founder and leader of the Office of the CTO at Google Cloud. Thank you for joining us today, Will.

Will Grannis: Great to be here. Thanks for having me.

Sam Ransbotham: So there’s quite a difference between your background and being at Google Cloud. Can you tell us a little bit about how you ended up where you are?

Will Grannis: Call it maybe a mix of formal education and informal education. Formally, Arizona public school system and then, later on, West Point — math and engineering undergrad. And then, later on, UPenn — University of Pennsylvania, Wharton — for my MBA. Now, maybe the more interesting part is the informal education, and this started in the third grade. Back then, I think it was gaming that originally piqued my curiosity in technology, and so this was Pong, Oregon Trail … Intellivision, Nintendo — all the gaming platforms. I was just fascinated that you could turn a disc on a handset and you could see Tron move around on a screen; that was like the coolest thing ever.

And so today’s manifestation — Khan Academy, edX, Codecademy, platforms like that — you have this entire online catalog of knowledge, thanks to my current employer, Google. And just as an example, this week I’m porting some machine learning code to a microcontroller and brushing up on my C thanks to these … what I call informal education platforms. So [it’s] a journey that started with formal education but was really accelerated by others, by curiosity, and by these informal platforms where I could go explore the things I was really interested in.

Sam Ransbotham: I think, particularly with artificial intelligence, we’re so focused on games and whether or not the machines beat a human at this game or that game, when there seems to be such a difference between games and business scenarios. So how can we make that connection? How can we move from what we can learn from games to what businesses can learn from artificial intelligence?

Will Grannis: Gaming is exciting and it is interesting, but let’s take a foundational element of games: understanding the environment that you’re in and defining the problem you want to solve — what’s the objective function, if you will. That is exactly the same question that every manufacturer, every retailer, every financial services organization asks when first starting to apply machine learning. And so in games, the objective functions tend to be a little bit more fun — it could be an adversarial game, where you’re trying to win and beat others, but those underpinnings of how to win in a game are very, very relevant to how you design machine learning in the real world to maximize any other type of objective function that you have. So, for example, in retail, if you’re trying to decrease the friction of a consumer’s online experience, you have some objectives that you’re trying to optimize, and thinking about it like a game is a useful construct at the beginning of problem definition: What is it that we really want to achieve? And having been around AI and machine learning now for a couple of decades — when it was cool, when it wasn’t cool — I can tell you that the problem definition, and really getting a rich sense of the problem you’re trying to solve, is absolutely the No. 1 most important criterion for being successful with AI and machine learning.
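
To make that concrete, here is a minimal sketch of what “defining the objective function” might look like for the retail example Will gives. Everything here (the session fields, the scoring rule) is a hypothetical illustration, not Google’s actual approach:

```python
# A toy objective function for "reduce checkout friction."
# All names and the weighting scheme are hypothetical.

from dataclasses import dataclass


@dataclass
class CheckoutSession:
    steps_completed: int  # how far the shopper got in the funnel
    total_steps: int      # steps in the checkout flow they were shown
    abandoned: bool       # did they leave without purchasing?


def friction_score(sessions: list[CheckoutSession]) -> float:
    """Objective to minimize: abandonment, weighted so that bailing
    out early in the funnel counts more than bailing out late."""
    if not sessions:
        return 0.0
    penalty = sum(
        1.0 - (s.steps_completed / s.total_steps)
        for s in sessions
        if s.abandoned
    )
    return penalty / len(sessions)


# Any candidate intervention (fewer form fields, saved payment info,
# a reranked flow) is then judged by whether it lowers this score.
sessions = [
    CheckoutSession(steps_completed=1, total_steps=5, abandoned=True),
    CheckoutSession(steps_completed=5, total_steps=5, abandoned=False),
    CheckoutSession(steps_completed=3, total_steps=5, abandoned=True),
]
print(f"friction score: {friction_score(sessions):.3f}")  # prints 0.400
```

Only once a score like this is written down does “winning the game” become a well-posed question for a model to optimize.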

Shervin Khodabandeh: I think that’s quite insightful, Will, and it’s probably a very good segue to my question, which is this: It feels like in almost any sector, what we are seeing is that there are winners and losers in terms of getting impact from AI. There are a lot fewer winners than there are losers, and I’m sure that many CEOs are looking at this wondering what is going on. I deeply believe that a lot of it is what you said: It absolutely has to start with the problem definition, and getting the perspective of business users and process owners and line managers into that problem definition is critical. And since we’re talking about this, it would be interesting to get your views on some of the success factors, from where you’re sitting and observing, for getting maximum impact from AI.

Will Grannis: Well, I can’t speak to exactly why every company is successful or unsuccessful with AI, but I can give you a couple of principles that we try to apply and that I try to apply generally. I think today we hear and we see a lot about AI and the magic that it creates. And I think sometimes it does a disservice to people who are trying to implement it in production. I’ll give you an example: Where did we start with AI at Google? Well, it was in a place where we already had really well-constructed data pipelines, where we had already exhausted the heuristics that we were using to determine performance, and instead we looked at machine learning as one option to improve our lift on advertising, for example.

And it was only because we already had all the foundational work done — we understood how to curate, extract, transform, [and] load data; how to share it; how to think about what that data might yield in terms of outcomes; how to construct experiments, [the] design of experiments; and how to utilize that data effectively and efficiently — that we were able to test the frontier of machine learning within our organization. And, to your question, maybe one of the biggest opportunities for most organizations today isn’t machine learning yet; maybe today it’s actually in how they leverage data — how they share it, how they collaborate around it, how they enrich it, how they make it easy to share with groups that have high sophistication levels, like data scientists, but also with analysts and business intelligence professionals who are trying to answer a difficult question in a short period of time for the head of a line of business. And unless you have that level of data sophistication, machine learning will probably be out of reach for the foreseeable future.
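
For readers who want a picture of the “foundational work” Will describes, here is a minimal extract-transform-load (ETL) sketch. The file names and fields are hypothetical stand-ins for a real pipeline:

```python
# A toy ETL step: curate raw events into a clean, shareable table
# before any ML is attempted. File names and fields are hypothetical.

import csv
from datetime import datetime


def extract(path: str) -> list[dict]:
    """Extract: read raw click events from a CSV as dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop malformed rows, normalize types, derive fields."""
    clean = []
    for row in rows:
        try:
            ts = datetime.fromisoformat(row["timestamp"])
            clean.append({
                "user_id": row["user_id"].strip(),
                "hour_of_day": ts.hour,            # derived feature
                "clicked": row["clicked"] == "1",  # normalized boolean
            })
        except (KeyError, ValueError):
            continue  # skip rows downstream users can't trust
    return clean


def load(rows: list[dict], path: str) -> None:
    """Load: write the curated table where analysts, BI tools, and
    data scientists can all reach it."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["user_id", "hour_of_day", "clicked"]
        )
        writer.writeheader()
        writer.writerows(rows)


# A real pipeline would read from and write to a warehouse,
# not local CSVs; these paths are illustrative.
load(transform(extract("raw_events.csv")), "curated_events.csv")
```

The point of the sketch is the ordering: curation, normalization, and shareability come first, and only then does experimenting with models become cheap.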

Shervin Khodabandeh: Yeah, Will, one other place I thought you might go is building on what you were saying earlier about the analog between gaming and business, all around problem definition — how important it is to get the problem definition right. What resonated with me when you were saying that was that probably a lot of companies just don’t know how to make that connection and don’t know where to get started, which is actually, “What is the actual problem that we’re trying to solve with AI?” Many are focusing on, “What are all the cool things AI can do, and what’s all the data and technology we need?” rather than starting with the problem definition and working backwards from it to the data, and then to how AI can help them solve that problem.

Will Grannis: It’s really a mindset. I’ll share a little inside scoop: At Google, we have an internal document that our engineers have written to help each other out with getting started on machine learning. And No. 1 — because there’s a list of like 72 factors, things you need to do to be successful in machine learning — No. 1 is: You don’t need machine learning. And the reason it’s stated so strongly is to force the mindset of uncovering the richness of the problem, because the nuances of that problem create — to your point — all of the downstream implementation decisions. So if you want to reduce friction in online checkout, that is a different problem from trying to optimize really great recommendations within someone’s e-commerce experience online for retail. Those are two very different problems, and you might approach them very differently; they might have completely different data sets, and they might have completely different outcomes on your business. And so one of the things we’ve done here at Google over time is take our internal shorthand for innovation, [our] approach to innovation and creativity, and try to codify it so that we can be consistent in how we execute projects, especially the ones that venture into the murkiness of the future.

And this framework really has three principles. The first one, as you might expect, is to focus on the user, which is really a way of saying, “Let’s get after the problem — the pain that they care the most about.” The second is to think 10x, because we know [that] if it’s going to be worth the investment of all of these cross-functional teams’ time — to create the data pipelines, to curate them, to test for potential bias within these pipelines and within data sets, to build models and to test those models — that’s a significant investment of time and expertise and attention, and so we want to make sure we’re solving a problem that also has the scale that will be worth it and really advances whatever we’re trying to do — not in a small way, but in a really big way. And the third is rapid prototyping, and you can’t get to the rapid prototyping unless you’ve thought through the problem and constructed your environment so that you can conduct these experiments rapidly. And sometimes we’ll proxy outcomes just to see if we’d care about them at all, without running them at full production. So that framework — focusing on the user, thinking 10x, and then rapid prototyping — is an approach that we use across Google, regardless of product domain.
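
The “proxy outcomes” idea is worth unpacking. One common pattern is to replay historical sessions offline and score a candidate against a baseline before anything runs in production. The sketch below uses entirely hypothetical data and recommenders, not Google’s actual method:

```python
# A toy "proxy outcome": replay historical purchases offline and ask
# whether a candidate recommender would even move the needle, without
# running anything at full production. All data here is synthetic.

import random

random.seed(0)

# Pretend history: the item each user actually bought, per session.
history = [random.choice(["A", "B", "C", "D"]) for _ in range(1000)]


def baseline_recommender(_session: int) -> str:
    return "A"  # always recommend the single most popular item


def candidate_recommender(session: int) -> str:
    return ["A", "B", "C", "D"][session % 4]  # toy diversity heuristic


def proxy_hit_rate(recommend) -> float:
    """Proxy metric: how often the offline recommendation matches
    what the user actually bought. Crude, but cheap to compute."""
    hits = sum(recommend(i) == bought for i, bought in enumerate(history))
    return hits / len(history)


print(f"baseline : {proxy_hit_rate(baseline_recommender):.2%}")
print(f"candidate: {proxy_hit_rate(candidate_recommender):.2%}")
```

If the candidate cannot beat the baseline even on a crude replay like this, it is probably not worth the full investment in pipelines, bias testing, and production experiments.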

Shervin Khodabandeh: That’s really insightful, especially the “think 10x” piece, which I think is really, really helpful. I really like that.

Sam Ransbotham: You’re lobbying, I think, for … I would call it a very strong exploration mindset toward your approach to artificial intelligence, versus more of an incremental or “Let’s do what we have, better.” Is that right for everybody? Do you think that’s … idiosyncratic to Google? Almost everyone listening today is not going to be working at Google. Is that something that you think works in all kinds of places? That may be beyond what you can speak to, but how well do you think that that works across all organizations?

Will Grannis: Well, I think there’s a difference between a mindset and the way these principles manifest themselves. Machine learning, just in its nature, is exploration, right? It’s approximations: You’re looking through the math for the places where you’re pretty confident that things have changed significantly, for the better or for the worse, so that you can do your feature engineering and understand the impact of the choices you’re making. And in a lot of ways, the mathematical exploration is an analog to the human exploration, in that we try to encourage people to explore. By the way, just because we have a great idea doesn’t mean it gets funded at Google. Yes, we are a very large company; yes, we’re doing pretty well; but most of our big breakthroughs have not come from some top-down-mandated gigantic project that everybody said was going to be successful.

Gmail was built by people who were told very early on that it would never succeed. And we find that this is a very common path — before Google, I was an entrepreneur a couple of times, with my own company and somebody else’s, and I worked in other large companies that had world-class engineering teams as well. I can tell you this is a pattern: giving people just enough freedom to think about what the future could look like. We have a way of describing 10x at Google that you may have heard of, called moonshots. Well, our internal engineering team has also coined the term roof shots, because the moonshots are often accomplished by a series of these roof shots, and if people don’t believe in the end state, the big transformation, they’re usually much less likely to journey across those roof shots and to keep going when things get hard. And we don’t flood people with resources and help at the beginning because … this is hard for me to say as a senior executive leading technology innovation, but quite often, I don’t have perfect knowledge of what will be the most impactful project that teams are working on. My job is to create an environment where people feel empowered, encouraged, and excited to try — and [I] try to demotivate them as little as possible — because they’ll find their way to the roof shot, and then the next one, and then the next one, and pretty soon you’re three years in, and I couldn’t stop a project if I wanted to; it’s going to happen because of that spirit, that [Star Trek:] Voyager spirit.

Shervin Khodabandeh: Tell us a bit about your role at Google Cloud.

Will Grannis: I think I have the best job in the industry: I get to lead a collective of CTOs who have come from every industry, every geography, and every place in the stack, from hardware engineering all the way up to SaaS and quantum security — an incredible team. And our mission is to create a bridge between our customers — our top customers, and the top partners of Google, who are trying to do incredible things with technology — and the people who are building these foundational platforms at Google, and to try to harmonize them. Because with the evolution of Google — now, especially with our cloud business — we have become a partner to many of the world’s top organizations.

And so, for example, if Major League Baseball wants to create a new immersive experience for you at home through a digital device or, eventually when we get back to it, in the stadiums, it’s not just us creating technology, surfacing it to them, them telling us what they like about it, sending it back, and then we spin it again; it’s actually collaborative innovation. We have these approaches to machine learning that we think could be pretty interesting: We have technologies in AR/VR [augmented reality and virtual reality], we have content delivery networks, we have all of these different platforms at Google. And in this exploratory mode, we get together with these large customers, and they help guide not only the features, but they help us think about what we’re going to build next. And then they layer on top of these foundational platforms the experience that they, as Major League Baseball, want [for] us as baseball fans. That intertwined, collaborative technology development — that collaborative innovation — is at the heart of what we do here in the CTO group.

Shervin Khodabandeh: That’s a great example. Can you say a bit more about how you set strategy for projects like that?

Will Grannis: I’m very, very bullish about having the CTO and the CIO at the top table in an organization, because the CIO often is involved in the technology that a company uses for itself, for its own innovation. And I’ve often found that the tools and the collaboration and the culture that you have internally manifest themselves in the technology that you build for others. So a CIO’s perspective on how to collaborate — the tools, how people are working together, how they could be working together — is just as important as the CTO’s view into what technology could be most impactful or most disruptive coming from the outside in. But you also want them sitting next to the CMO. You want them sitting next to the chief revenue officer; you want them with the CEO and the CFO. And the reason is that it creates a tension, right? I would never advocate that all of my ideas are great. Some of them are, but some of them [haven’t] panned out. And it’s really important that that unfiltered tension is created at the point at which corporate strategy is delivered. In fact, one of the things I learned from working for a couple of CEOs, both outside of Google and here, is that it’s a shared responsibility: [It’s] the responsibility of the CTO to put themselves in the room, to add that value, and it’s the responsibility of the CEO to pull it through the organization when the mode of operation may not be that way today.

Shervin Khodabandeh: That’s very true. And it corroborates our work, Sam, to a large extent — that it’s not just about building the layers of tech; it’s about process change, it’s about strategy alignment, and it’s ultimately about what humans have to do differently to work with AI collaboratively. It’s also about how managers and middle managers — the folks that are using AI — can be more productive, more precise, more innovative, and more imaginative in their day-to-day work. Can you comment a bit on that, in terms of how AI has changed the roles of individual employees — let’s say, in different roles, whether it’s in marketing or in pricing or customer service? Any thoughts or ideas on that?

Will Grannis: We had an exercise like this with a large retail customer, in its physical security and monitoring organization, and it turned out that one of the most disruptive and interesting and impactful framings of that problem came from someone on a product team totally unrelated to this area, who just got invited to the workshop as a representative of their org. We can’t have everybody in every brainstorming session, even though the technology allows us to put a lot of people in one place at one time, so choosing who is in those moments is absolutely critical. Just going to default roles or default responsibilities is one way to keep the same information coming back again and again and again.

Sam Ransbotham: That’s certainly something we’re thinking about at a humanities-based university — that blend and that role of people. It’s interesting to me that in all your examples, you talked about joining people, and people from cross-functional teams, [but] you’ve never mentioned a machine in one of these roles, as a player. Is that too far-fetched? How are these combinations of humans going to add a machine to the mix? We’ve got a lot of learning from machines at a task level; at what point does it get elevated to a more strategic level? Is that too far away?

Will Grannis: No, I don’t think so, but [it’s] certainly in its early days. One of the ways you can see this manifest is [in] natural language processing, for example. I remember one project where we were training a chatbot, and we used raw logs — all privacy assured and everything — that a customer had provided because they wanted to see if we could build a better model. And it turns out that the chat agent wasn’t exactly speaking the way we’d want another human being to speak to us. Why? Because people get pretty upset when they’re talking to customer support, and the language that they use isn’t necessarily language I think we would use with each other on this podcast. So we do think that machines will be able to offer some interesting, generalized inputs at some point, but I can tell you right now, you want to be really careful about letting a natural language-enabled machine partner loose inside of your creativity and innovation session, because you may not hear things that you like.
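
Will’s anecdote maps to a standard data-curation step: screening raw support logs before they become training data. The sketch below is a hypothetical, deliberately crude stand-in for the toxicity classifiers a real pipeline would use:

```python
# A toy filter over raw support logs before chatbot training.
# The blocklist and threshold are hypothetical stand-ins for a
# real toxicity classifier; the log lines are invented.

HOSTILE_MARKERS = {"idiot", "useless", "hate", "garbage"}


def is_trainable(utterance: str, max_hostile_tokens: int = 0) -> bool:
    """Keep an utterance only if it contains no hostile markers."""
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    return len(tokens & HOSTILE_MARKERS) <= max_hostile_tokens


raw_log = [
    "I hate this useless garbage product!",
    "Hi, my order hasn't arrived yet. Can you check on it?",
    "Thanks, that fixed it.",
]

# Only the civil utterances survive to become training examples.
training_data = [u for u in raw_log if is_trainable(u)]
print(training_data)
```

The design lesson is the one Will draws: the model will imitate whatever the curation step lets through, so the filter is part of the product, not an afterthought.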

Sam Ransbotham: Well, it seems like there’s a role here, too. There’s going to be bias in these things; it’s inevitable. And in some sense, I’m often happy to see biased decisions coming out of these AI and ML systems, because then the bias is at least surfaced. We’ve got a lot of that going on unconsciously in our world right now, and if one of the things we’re learning is that the machines are pointing out how ugly we talk to chatbots, or how poorly we’re making other decisions, that may be a first step to improving overall.

Will Grannis: Yeah. The responsible AI push is never over; it’s one of those things. … Ensuring those responsible and ethical practices requires a focus across the entire activity chain. And two areas that we’ve seen as really impactful are principles and governance. So, what are the principles through which you will take your projects and shine the light on them and examine them and think about the ramifications? Because you can’t a priori define all of the potential outputs that machine learning and AI may generate.

And that’s why I refer to it as a journey, and I’m not sure that there is a final destination. I think it’s constant and, in the theme of a lot of what we talked about today, iterative. You think about how you want to approach it: You have principles, you have governance, and then you see what happens and make adjustments along the way. But not having that foundation means you’re dealing with every single instance as its own unique instance, and that becomes untenable at scale, even small scale. This isn’t just a Google-scale thing; any company that wants to distinguish itself with AI at any type of scale is going to bump into it.

Sam Ransbotham: Will, we really appreciate you taking the time to talk with us today. It’s been fabulous. We’ve learned so much.

Shervin Khodabandeh: Really, really an insightful and candid conversation. Really appreciate it.

Will Grannis: Oh, absolutely. My pleasure. Thanks for having me.

Shervin Khodabandeh: Sam, I thought that was a really good conversation. We’ve been talking with Will Grannis, founder and leader of the Office of the CTO at Google Cloud.

Sam Ransbotham: Well, I think we may have lost some listeners with “you don’t need ML” as item one on his checklist, but I think he had 71 other items that do involve machine learning.

Shervin Khodabandeh: But I thought he was making a really important point: Don’t get hung up on the technology and the feature functionality, and think about the business problem and the impact — and shoot really, really big for the impact. And then also, don’t think you have to achieve the moonshot in one jump, and that you could get there in progressive jumps, but you always have to keep your eye on the moon, which I think is really, really insightful.

Sam Ransbotham: That’s a great way of putting it, because I do think we got focused on thinking about the 10x, and we maybe paid less attention to his No. 1, which was the user focus and the problem.

Shervin Khodabandeh: The other thing I thought was an important point is collaboration. It’s really an overused term, because in every organization, every team would say, “Yes, yes, we’re completely collaborative; everybody’s collaborating; they’re keeping each other informed.” But I think the true meaning of what Will was talking about is beyond that. There are multiple meanings to collaboration. You could say, “As long as I’m keeping people informed or sending them documents, then I’m collaborating.” But what he said is, “There’s not a single person on my team that can succeed on his or her own,” and that’s a different kind of collaboration; it means you’re so interlinked with the rest of your team that your own outcome and output depend on everybody else’s work, so you can’t succeed without them and they can’t succeed without you. It’s really beyond collaboration. It’s like the team is an amalgam of all the people, and they’re all embedded in each other as just one substance. What’s the chemical term for that?

Sam Ransbotham: Yeah, see, I knew you were going to make a chemical reference there. There you go: amalgam.

Shervin Khodabandeh: Amalgam or amalgam? I should know this as a chemical engineer.

Sam Ransbotham: Exactly. We’re not going to be tested on this part of the program.

Shervin Khodabandeh: I hope my Caltech colleagues aren’t listening to this.

Sam Ransbotham: Yeah, actually, on the collaboration thing: It’s easy to espouse collaboration. If you think about it, nobody we interview is going to say, “All right, you know, I really think people should not collaborate.” No one’s going to [say] that. But what’s different about what he said is that they have process around it. And it sounded like they had structure and incentives so that people were motivated to align well.

Shervin Khodabandeh: I like the gaming analog — the objective function in the game, whether it’s adversarial or you’re trying to beat a course or unlock some hidden prize somewhere. There is some kind of optimization or simulation or approximation or correlation going on in these games, and so the analog of that to a business problem rests so heavily on the very definition of the objective function.

Sam Ransbotham: Yeah, I thought the twist he put on games was important, because he did pull out immediately that we can think about these as games, but what have we learned from games? We’ve learned that we need an objective, we need a structure, we need to define the problem. And he tied that really well into the transition from what we think of as super well-defined games of perfect information to unstructured business problems, which still need that problem definition. I thought that was a good switch.

Shervin Khodabandeh: That’s right.

Sam Ransbotham: Will brought out the importance of having good data for ML to work. He also highlighted how Google Cloud collaborates both internally and with external customers. Next time, we’ll talk with Amit Shah, president of 1-800-Flowers, about the unique collaboration challenges the company uses AI to address through its platform. Please join us next time.

Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.

Comment (1)
John O’Connell
Highly recommended as a refreshingly insightful and engaging podcast. Two key takeaways for me: 1) Success with AI/ML depends on a clear and concise problem definition (“Focus on the user”). 2) The importance of living “active collaboration.” Great quote: “There is not a single person on my team who can succeed on their own; we need each other to succeed!”
