As a partner with OpenAI — the company that recently wowed the tech world and the general public with its DALL-E image generator and ChatGPT chatbot — Microsoft helped to make those generative AI tools possible. But Microsoft has long invested in developing its own artificial intelligence technologies, for internal and external customers alike. And even when AI is not the centerpiece of a specific software program, it’s often driving how that tool — such as the company’s Bing search engine — works.
As corporate vice president of Microsoft’s AI platform, Eric Boyd oversees product and technology teams that build artificial intelligence and machine learning solutions for the company’s Azure platform and its AI services portfolio. Eric joins Sam Ransbotham and Shervin Khodabandeh on this episode of the Me, Myself, and AI podcast to talk about how Microsoft builds AI tools and embeds the technology in its various products, AI’s potential for helping to expand people’s creativity, and the democratization of AI.
Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.
Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.
Give your feedback in this two-question survey.
Transcript
Sam Ransbotham: What exciting new AI-enabled tools are on the horizon? Find out on today’s episode.
Eric Boyd: I’m Eric Boyd from Microsoft, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Today, Shervin and I are excited to be joined by Eric Boyd, corporate vice president, AI platform, at Microsoft. Eric, thanks for taking the time to talk with us. Welcome.
Eric Boyd: Great to be with you both.
Sam Ransbotham: Let’s start with “corporate vice president, AI platform.” Can you tell us what that job title entails and what the scope of that is? What do you do?
Eric Boyd: Yeah, sure. AI, obviously, is such a heady buzzword these days. I lead the AI platform team at Microsoft. … The AI platform team really has a couple of different things that we focus on doing. One of the things that we do is we bring the tools for people who are trying to build and train their own AI models to make them more productive. And so that’s Azure Machine Learning, and that’s a set of tools that we make available externally. But we also use those same tools internally, so teams like Bing, like Office, like Azure … all across Microsoft, we’re using those tools to really build all the models that we use across all the things that Microsoft does.
The other thing that we do is we build some of our own models ourselves. We call those Cognitive Services. So if you want the latest and greatest models in speech and vision and language, we’ve got a cognitive service model that does that, [which] you can then call directly as a web service. And so with that, we’re really working with the research departments that we’ve got at Microsoft Research and pushing this state-of-the-art research that we have, pushing the state of the art of AI really forward, and then making that available both internally, to our internal services at Microsoft, as well as to our customers through Azure. So … my job is building all those products and figuring out how we can best meet the needs of all of our customers in this rapidly expanding field of AI.
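To make the “call it directly as a web service” idea concrete, here is a minimal sketch of consuming one of those hosted vision models from Python. It is an illustration rather than anything shown in the episode: the resource name, key, and image URL are placeholders, and the request path and response fields follow the Computer Vision v3.2 image-analysis API.

```python
# A minimal sketch of calling a hosted vision model as a web service.
# The resource name, key, and image URL are placeholders; the path and
# response shape follow the Computer Vision v3.2 "analyze" API.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-subscription-key>"                                    # placeholder

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": key,
             "Content-Type": "application/json"},
    json={"url": "https://example.com/storefront.jpg"},            # placeholder image
    timeout=30,
)
response.raise_for_status()
analysis = response.json()

# One caption plus a few tags, straight from the service; no model training involved.
print(analysis["description"]["captions"][0]["text"])
print([tag["name"] for tag in analysis["tags"][:5]])
```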
Sam Ransbotham: What I really like about that is this idea that if everyone using these tools had to go invent them from scratch, obviously it would take forever, and most businesses — their goal is not speech synthesis or speech generation.
Eric Boyd: That’s right.
Sam Ransbotham: And so that seems exactly the right sort of thing: to be building these small components and delivering them. How do you know what to build? How do you tell people how to use them? How does this work? How does this infrastructure and ecosystem start to play out?
Eric Boyd: We’re pretty privileged at Microsoft to have a whole bunch of different businesses that we’ve been in for a while, and so we get to work and learn with all of them over time. And so basically everything that we’ve done in our AI field has grown out of something that we’ve needed internally at Microsoft.
When we try and think about, like, what are the things that customers need, we’ve already proved out these services. If it’s a tool for how to train models, we have thousands of developers and researchers across Bing and Office who are training models to do things that you’ll experience every day as a user of Microsoft. And when we think about speech recognition, you know, we work with Microsoft Teams, so we can get a transcription of every call using the speech recognition software that we’ve already built. And so then we take those exact same things and then make them available to our customers, because we know that where we found value in them, our customers are also going to find value in them. That’s been one of the major innovation engines for us.
As the field continues to grow — obviously, Azure has thousands and thousands of enterprise customers all across the world, all across every industry that you could think of. And so we go and meet with them and talk with them, and that also opens up a lot of insights on where are the places that companies are struggling with things. But, you know, similar to what you described, many companies are like, “You know, speech recognition is not core, but I need this in my product. And so I could waste a lot of time and energy training a speech recognition model, which is really hard to do and really hard to do effectively, but it’s much better for my business if I can just consume something state of the art that you’ve already built for us.” And so that ends up being the thought process that many of our customers end up going through.
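As a sketch of the “consume something state of the art rather than train your own” path Eric describes, the snippet below uses Microsoft’s Speech SDK for Python to transcribe an audio clip through the hosted speech service. The key, region, and audio file name are placeholders, and the single-shot call is just an illustration of the pattern.

```python
# A sketch of consuming a hosted speech-recognition service instead of training
# a model yourself. Assumes the azure-cognitiveservices-speech package; the
# subscription key, region, and audio file are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="meeting_clip.wav")  # placeholder file
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Transcribe a single utterance; longer recordings would use continuous recognition.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
```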
Sam Ransbotham: I love that because one of the analogies I use in class is that no one says, “Hey, you know, I have a great bookstore, but you really should buy my books because we have a great payroll system.” We quickly figured out that payroll was the sort of thing that specialized companies can do well, and we might as well let those companies do it, because your competitive advantage does not come from having an awesome payroll department.
Eric Boyd: And we see that too. We work with a lot of startup companies, and startup companies have to have this fixation on what’s their competitive differentiator, and anything that’s not, then they have to go and find that someplace else. Startup companies come to us knowing we’ve got the latest stuff that they can go and use and make their products great while focusing on the things that really are going to matter to their business.
Sam Ransbotham: And the other part of that, too, is that when people use these services, you’re going to constantly improve those services. You don’t have to just build it in the first place. … This stuff is changing so quickly, the idea that you would then invest enough to keep up is daunting.
Eric Boyd: Yeah, so rapidly. It’s really kind of crazy just how quickly this field is moving. The speech quality that we deliver through our speech API literally improves every month. We measure, and we’ve got data to back that up. Our vision models have just exploded in quality recently, and we’ve seen lots of crazy things. And then let’s not even get started on language, right? The large language models are just incredibly powerful these days, and so [there’s] just a massive explosion going on there.
Sam Ransbotham: It’s great that you mentioned the language models. I’m literally just talking about the OpenAI products tomorrow in class, and the developments there are just … well, first, I’m angry with you people because I have to redo my slides and stuff every semester. But the progress that I see from every time I teach, it’s just staggering. I can’t even use my “Oh yeah, ha ha, this is where AI fails.” It’s just harder and harder to find those. They’re still there, but it’s just harder and harder to find them.
Eric Boyd: The frontier is moving quickly when you’re looking at exponential growth with a wall in front of you [that] is literally vertical, and so it feels like we’re starting to see that kind of progress. You referenced OpenAI. Obviously, we do a lot of work with them and power all the infrastructure behind them and bring their products to market. Everyone’s abuzz over ChatGPT and all the amazing progress that that seems to have made.
I’m really excited that people are now starting to see it, because we’ve been looking at things like that for a while now. Starting to see all the applications that we’re going to be able to light up as a result of that is really powerful and really exciting. It’s going to change everything. It’s going to change all the ways that we interact with computers, and so that’s really exciting to see.
Shervin Khodabandeh: Eric, it sounds like a fair amount of your role is focused on your Azure product. But let’s also come back: You mentioned a huge other part, which are the other products Microsoft develops for end users. What’s the process of getting technology advances into these products that people are using every day?
Eric Boyd: There are a couple of things that we look at. First, it’s hard to stand up these models and to run them at scale. Every company out there struggles with “All right, I just got some new breakthrough, and how do I actually deploy it at really large scale?” And so that’s where we make the tools that really make that easy. Azure Machine Learning is the way that we deploy models all across Microsoft. If you’re using anything in Office or in Teams or something like that, you’re calling models that are hosted in Azure Machine Learning. And so any other business that wants to go and do that, they can go ahead, and they know it works at scale; they know Microsoft has built its business on it and trusts it, so they can bet their business on it. And that’s a really hard thing to work through. Everything from the failure cases, to failovers, to load scaling — all those — we just sort of build all that in, and so that’s not something that people have to figure out how to go do.
But then the other side, too, is we get some crazy idea of “Maybe we could build a model that understands people when they talk to it; how do we get that to actually show up in products?” And so that process of taking research out of the lab and getting it into a product that’s enterprise grade that will perform at the right scale — that’s a ton of work. That’s a lot of work that we go through, just literally working model by model to figure out how we can get these things to be as efficient as they can, and then figure out how we stand them up in products the best way possible.
Sam Ransbotham: Shervin and I released the research we’re doing. We had a finding that 66% of the people reported they did not use artificial intelligence or made only minimal use of it. And then, [with] just a little bit of pushing back, 43% of those people then came back and said, “Oh, no, no — yes, I am, once I started thinking about it.” And I’m guessing that really undercounts, based on the things that you’re saying, because if you think about the number of people that are using Office products and how much AI is embedded in that … but that doesn’t count, does it?
Eric Boyd: I mean, it counts to me.
Sam Ransbotham: It counts to me, too.
Eric Boyd: It’s one of the things I push my team on a bit: I want us to think about scenarios and products where the portion of people using them who are using the AI-powered features is 100%. You can think of something like, yeah, in this Teams call, we have transcription and so we could turn transcriptions on, but not everybody uses that. You could go your whole day never using transcription.
What are the products where you absolutely can’t avoid AI because it’s just intrinsic to the product? Search is, of course, like that. You can’t avoid using AI in search. When you are talking to your phone to compose a text message with speech, you’re using AI 100% of the time you do that. And so increasingly, as we see these scenarios, there are so many things that are just not possible without it, right?
If I’m going to now start with my three bullet points and have that expanded into a paragraph for me, like, well, you can’t do that without AI. So every time you’re using that functionality, you’re using AI. It’s pretty ubiquitous, and there are really a lot more scenarios like that coming. I mean, there’s this whole field of AI-powered applications that is really about to start to blossom, where the application just doesn’t exist without the AI that [powers] it. And so that’s going to be really exciting.
Sam Ransbotham: It doesn’t exist, but at the same time, it doesn’t have to be the showcase part of it, either. I think when we ask people about AI, they’re thinking, “Well, did I speak with a humanoid robot today?” And you’re talking about embedded cases that are part of another process but are integral to that process, perhaps.
Eric Boyd: And that’s one of the things we focus on a lot: how to make AI that helps people. The AI doesn’t have to be the centerpiece of it. The person is the centerpiece of it. If you think about … [you’ve] got a product designer now, where you’re setting up your PowerPoint presentation, you’re trying to create nice artwork and nice styles with it. And so we’ve embedded DALL-E in it. DALL-E is a system where you can say some words and it’ll give you back an image depicting exactly what you just told it to make: I want a pink elephant running on the moon that is playing with a unicorn. And you’ll get a lovely picture of exactly that.
And so there, the person is still the focus of it, right? The AI is a very powerful tool, but ultimately, we’re helping you be creative to go and build something and do something that you couldn’t have otherwise done. And that’s, I think, a lot of what’s exciting about this new space of generative AI, as people are branding it: “How do I use these image-creation tools, these text-creation tools, to really take my creativity a lot further?” I’m a terrible artist. I don’t draw anything. But to be able to say, “I want to create this type of an image or have this type of feeling or style,” that’s exciting to see.
Sam Ransbotham: That is exciting. And I’ll go and say, I’ve started to use that too. I make slides for class. It’s very easy for me to go to DALL-E, put in a key phrase of a point I want to make, and then I get five images to choose from. And it takes me seconds. Actually, it takes me a lot longer than that, because I get very excited about screwing around with it and it becomes a procrastination tool.
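For readers curious what that slide-image workflow looks like in code, here is a minimal sketch against the image-generation endpoint of the openai Python library as it looked around the time of this episode (pre-1.0). The prompt echoes Eric’s example, the request for five candidates echoes Sam’s description, and an API key is assumed to be configured in the environment.

```python
# A sketch of "type a phrase, get a handful of images to choose from."
# Uses the pre-1.0 openai library's image endpoint; the prompt and size are
# illustrative, and OPENAI_API_KEY is assumed to be set in the environment.
import openai

response = openai.Image.create(
    prompt="a pink elephant running on the moon, playing with a unicorn",
    n=5,              # a handful of candidates to pick from
    size="512x512",
)

for image in response["data"]:
    print(image["url"])  # each URL points at one generated image
```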
Eric Boyd: The fun factor. Yeah, it sort of draws you in. I used to do the same thing. And of course, the way you used to do it, you went to an image search engine and you would search for something. And you’d never find what you wanted, and you couldn’t tweak it just the right way. And for me, I’m usually looking for something funny. If I’m making a presentation and I’m not making people laugh, then I know they’re already bored, so I’m looking for some way to punch it up.
And so then you need some unusual scenario. And so it probably doesn’t exist. You’re not going to find some image searching for it. Being able to have this creative tool that can really help you do what you want — I mean, that’s where AI is empowering you as a person to do something that you otherwise couldn’t do. It’s not the centerpiece, but it’s absolutely powering the application of the things that you can go do with it.
Sam Ransbotham: And some of these things are marginal. Like, I mean, my actual case for tomorrow’s class is, I’m going to give them a bunch of machine learning code that has a lot of errors, and their challenge is to fix all the errors that I’ve made, and so my image there is “students dunking on professor.”
There’s not a stock photo for that, but DALL-E came through for me with a lot of different choices there. But it’s interesting, though — if that image hadn’t been there, I would have gotten the point across in class. So some of these generative examples right now feel nice but marginal. Take us to the path where those are more integral and more, let’s say, value-creating than my, like you said, funny illustration. Take me another step there.
Eric Boyd: Maybe I’ll make the anti-point first. Someone was making a joke the other day that they said, “We’re not too far from the days where a CEO is going to have four bullet points and ask an AI to create a two-page memo for his staff, and the staff is going to use an AI to reduce this two-page memo to four bullet points and then go and read them.”
Sam Ransbotham: Oh, I love that.
Eric Boyd: I thought that was very funny. But that type of process, though … I’ve used AI already to say, “I’ve got some tough email I need to write; I want to make sure I’m getting the tone right. Here’s how I wrote it. Can you make this more polite, or can you give me a suggestion on how I should change it?”
Just being able to get valuable input and feedback on that — it’s kind of amazing. That’s really empowering and so very central, then, to the work that you’re going to try and do as a result of that.
We have a lot of scenarios. We’ve got AI embedded into … Microsoft Dynamics; it’s a CRM and advertising tool. And one of the things we’re using it for is to help people create advertising copy that gets generated for them. A similar use case: We worked with CarMax, and CarMax has every single car on the planet, and they want to have a unique page describing every single car on the planet. And so whenever I talk about CarMax, I say, “Well, my first car was a 1986 Ford Tempo. The 1986 Ford Tempo was a piece of garbage. It was a piece of garbage in 1986 and it’s absolutely one now, but that was my first car.”
So CarMax wants to have a page describing the 1986 Ford Tempo. And they have user reviews about it, but they want a page that describes it that will do really well for search engine optimization. And so they used GPT-3 to go and summarize reviews that they’ve got on each make, each model, each year, and then generate a page for it. It would have taken them years, literally years — they did the math on how you could sort of go and do that. But now they have this really high-quality, valuable content that’s directing people to their site as a result of that.
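The CarMax example boils down to a prompt-and-summarize call run once per make, model, and year. The sketch below shows the general shape of that kind of call using the pre-1.0 openai completions interface. It is not CarMax’s actual pipeline; the sample reviews are invented placeholders, and the model name, prompt wording, and parameters are illustrative.

```python
# A hedged sketch of review summarization for one make/model/year (not
# CarMax's actual pipeline). The sample reviews are invented placeholders, and
# the model, prompt wording, and parameters are illustrative (pre-1.0 openai API).
import openai

reviews = [
    "Cheap to run, but the transmission gave out at 90,000 miles.",
    "Slow and basic, though surprisingly easy to repair yourself.",
    "It got me through college; that is the nicest thing I can say about it.",
]

prompt = (
    "Summarize these owner reviews of the 1986 Ford Tempo into a short, "
    "search-friendly description for a used-car listing page:\n\n"
    + "\n".join(f"- {review}" for review in reviews)
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```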
Sam Ransbotham: And what’s exciting about that is the scale part that you’ve alluded to: that this is individualized and personalized, but it’s scaled at the same time. And I think there’s where we see a lot of the promise.
Eric Boyd: Exactly right. I think that’s very exciting that you can do this and you can run it for a couple of hours and get lots and lots and lots of work done. And again, sort of with that theme of “this is AI that’s helping people do things better,” as an editor, you can review these and see how they’ve come in, and really be able to go so much faster and be so much more productive than you would otherwise.
We see examples all over the place of how this AI is really helping people do things that they couldn’t before. We work with a lot of companies, and they come in all shapes and sizes, and you really have to take them with the problems they have today and give them solutions that they can use today.
Shervin Khodabandeh: What I really like about this conversation is, we’re not just talking about a typical handful of tech companies that are really doing advanced stuff with advanced tools. Can you comment a bit on the process of making these tools available to other companies and other people — people who are not superusers — and on how you see that playing out over the next few years?
Eric Boyd: I think there’s already just such a democratization of technology that’s going on. When you think about how powerful your cellphone is versus who was able to get access to that type of computing power three decades ago, when you think about the access to the information on the internet and online versus three decades ago, we see that transformation happening just everywhere, where you’re putting power in the hands of so many more people. And so AI is both going to be a part of that and an accelerant of that, as I think about it.
When you think about the ability to create an application that right now requires knowing computer science and how to write code and knowing a programming language and having access to the tools to go and do that, and then you look at something like GitHub’s Copilot, which is just scratching the surface of how powerful it can be to describe a concept and have an AI literally translate that into code for you. …
We’re going to have so many more people who, because of AI, are now able to create applications; they’re able to get work done that they previously couldn’t imagine getting done, that they might have needed to go hire someone to build something for them. I think we’re going to see a lot of that democratization continue to happen. Even with ChatGPT, I think we’re starting to see some of the democratization of “Let’s expose to the world, hey, this is the type of thing that’s possible.” It may be the ultimate homework cheater, and so we’ll have to deal with all the essays [that] now need to be filtered against “No, ChatGPT didn’t write this,” but just getting people exposed to the ideas of what you can do and then thinking about how that’s going to turn into the next companies, the next ways that are going to go in and power more and more people, that, to me, is where that democratization is really going to take off.
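The Copilot point is easiest to see with a small illustration of the comment-to-code pattern: a developer writes a plain-language comment, and the tool proposes the code beneath it. The function below is a plausible completion written for this transcript, not an actual Copilot output.

```python
# An illustration of the comment-to-code pattern: the comment is the kind of
# plain-language prompt a developer might write, and the function below is a
# plausible suggestion, not an actual Copilot output.

# Group a list of expense records by category and total each group.
from collections import defaultdict

def total_by_category(expenses):
    """Return {category: total amount} for records like {"category": ..., "amount": ...}."""
    totals = defaultdict(float)
    for record in expenses:
        totals[record["category"]] += record["amount"]
    return dict(totals)

print(total_by_category([
    {"category": "travel", "amount": 120.0},
    {"category": "meals", "amount": 45.5},
    {"category": "travel", "amount": 80.0},
]))
```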
Sam Ransbotham: I’m safe saying this because we’ll broadcast after my exams, but literally one of my exam questions for this week is to take ChatGPT and use it to answer the question. And then step two is improve it. You know, take what it gave you as a starting point and improve it. But what excites me about what you said actually relates to our last episode, with Ziad Obermeyer, who is an emergency medicine physician who’s trying to build ML models to solve health problems. And that’s great if you’re Ziad, who happens to be really smart about medicine and really smart about ML, but the things you’re talking about are taking people who maybe are not as entrenched in ML and AI and enabling them to use these tools. I think that’s what’s really exciting.
Eric Boyd: And that’s going to be an important thing. And medical is such a great field, right? We see such an explosion in what’s coming in medical information and bioengineering and all of these different areas. It actually starts to scratch on another one of my favorite topics: [We’ve] got to make sure that we’re doing these things in the right and responsible way. And so whenever you have a model that is going to be making any sort of health care judgment or advice, you’ve got to make sure it’s not biased against different groups. You’ve got to make sure that it’s actually fair. One of the examples — we worked with the National Health Service in the U.K. on exactly that problem of trying to help them understand, when they’ve got a model, how is it being fair? When is it having the right outcomes, and are they using it in the right way? Because we do want to make sure that the power of all of these applications really is brought to everybody and not just small classes of people who are able to benefit from it.
Sam Ransbotham: You mentioned scarcity before, and I think about it in [terms of] “could versus should.” I mean, we didn’t have to have these conversations about “Should you do it?” when we couldn’t, because we were bound by “could.” But now that we can, then “should” becomes a much bigger deal. And it strikes me that as we relax the compression or relax the scarcity of data science and machine learning algorithms, then we’re going to be playing whack-a-mole here and will increasingly need people who can answer the “should” question. And we’re going to make that huge transition probably before we’re all ready for it.
Eric Boyd: There’s obviously a lot of interest in some corners over exactly that: How do we make AI available to people and do it in a responsible way, especially as we know that not everyone’s going to follow the same rules that we’re willing to follow? At Microsoft, we think a lot about that, and so we published our set of responsible AI principles on fairness and transparency and security, and are really just trying to hold ourselves accountable. We’ve also published our responsible AI standard — the set of practices that we follow internally for how we make sure that we think about all the impacts that could happen with a particular product. And then we think about how to mitigate those impacts and try to release them and use them in a really responsible way.
We feel like that’s our obligation — that we need to make sure that we’re driving that as much as we can. But there’s also, of course, going to be a place where there has to be some government regulation when it comes to what we should or shouldn’t do as societies. If you leave that up to corporations, corporations will make different decisions. That has to become a government decision. And so we’ve seen that, and we’ve even advocated that in some places. Face recognition for law enforcement, for example, is something that we don’t offer, but we also think it really needs to be regulated by the government because we can’t stop everyone from doing something like that.
Shervin Khodabandeh: That’s great. Eric, we have a segment where we ask our guests five questions. Just answer with the first thing that comes into your mind. The first is, what was your proudest AI moment?
Eric Boyd: I mean, it’s hard not to be really proud of the ChatGPT work, which is just seeing the light of day. That’s most top of mind, but just the potential impact of that, I think, definitely … that would be my quick answer on that.
Shervin Khodabandeh: You just touched on bias and transparency, but what worries you about AI other than that?
Eric Boyd: I mean, that’s probably the thing that most worries me: making sure that it’s used responsibly, and making sure that the benefits of it really do accrue to everyone. It’s something that we have an obligation [to do]. We’re creating this stuff, and if we do it in an uninformed way, then we’re responsible for that. And so we take that quite seriously and then want to make sure that we do a good job of that. It’s something that we see the industry as a whole really embracing. Really, all the customers that we talk to, that’s one of the things they’re always asking for help about too. And so I feel good about it, but that’s definitely the thing that we’re very much on guard about.
Shervin Khodabandeh: What’s your favorite activity that does not involve technology?
Eric Boyd: I’m a pretty avid cyclist, and so basically anything outdoors. It’s now cold and rainy here in Seattle, and so we’re starting to think about ski season. That would be the other thing.
Shervin Khodabandeh: What was the first career that you wanted? What did you want to be when you grew up?
Eric Boyd: I wanted to be a pilot. My grandfather was in the Air Force Academy and taught there. Actually, I applied to the Air Force Academy, could have gone there, but my vision wasn’t perfect and so they don’t let you fly if your vision isn’t perfect. Top Gun, the first one, had an impact on me. And so it’s kind of fun to see the second one coming back around.
Shervin Khodabandeh: What’s your greatest wish for artificial intelligence in the future?
Eric Boyd: I’m really struck by the way artificial intelligence has the potential to change all the things that we do in a positive way. And so we’ve started to see that, where AI starts to penetrate a lot of different fields and areas that we might not have thought about — you know, financial predictions all the way to inventory management at a retail store. We’re seeing AI have a big impact on that. But I think we’re really poised for it to just rewrite the ways that we interact with so much in our world in what I think is going to be a really positive way. That’s the thing that I’m pushing most for and trying to sort of [say], “Hey, how do we help make that happen?” Because I think that’s going to be really great.
Shervin Khodabandeh: That’s great. Thank you.
Sam Ransbotham: This has been fascinating. We really enjoyed you taking the time to talk with us. I think the things you’re mentioning about getting these sorts of tools into the hands of … not just the top technology companies but everyone are really going to be explosive, and I think people will enjoy hearing about that. Thank you for taking the time to talk with us today.
Eric Boyd: Always glad to talk with you, and thanks for having me on.
Sam Ransbotham: Thanks for tuning in. Please join us next time, when Shervin and I meet Michelle McCrackin, who’s helping Delta Air Lines teach front-line employees about analytics and AI.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.