Rebecca Finlay, CEO of Partnership on AI (PAI), believes that artificial intelligence poses risks — and that organizations should learn from one another and help others avoid the same hazards by disclosing the mistakes they’ve made in implementing the technology.
In this episode of the Me, Myself, and AI podcast, Rebecca discusses the nonprofit’s work supporting the responsible use of AI, including how it’s incorporating global perspectives into its AI governance efforts. She also addresses the complexities of integrating AI into the workforce and the misleading narrative around the inevitability of AI taking over humans’ jobs. She advocates instead for a proactive approach to adopting the technology, where organizations, policy makers, and workers collaborate to ensure that AI enhances jobs rather than eliminating them.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Shervin Khodabandeh: Stay tuned after today’s episode to hear Sam and me break down the key points made by our guest.
Sam Ransbotham: We can all certainly learn from companies’ successes with AI. But what about from their failures? On today’s episode, we speak with one leader who encourages organizations to share the bad along with the good, in the hopes that we can all learn together.
Rebecca Finlay: I’m Rebecca Finlay from Partnership on AI, and you are listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities, and really transform the way organizations operate.
Hi, everyone. Thanks for joining us today. Sam and I are very happy to be speaking with Rebecca Finlay, CEO of Partnership on AI. The organization is a nonprofit that brings together a community of more than 100 partners to create tools, recommendations, and other resources to ensure we’re all building and deploying AI solutions ethically. Rebecca, this is super exciting and important work, and we’d love to [speak] to you about it. Thanks for joining the show.
Rebecca Finlay: Thank you so much for having me. I’ve been looking forward to this conversation.
Shervin Khodabandeh: Wonderful. So let’s get started. Tell us more about the organization’s mission and purpose.
Rebecca Finlay: The Partnership on AI was formed in 2016 with the belief that we needed to bring diverse perspectives together in order to address the ethical and responsible challenges that come with the development of artificial intelligence, and also to realize the opportunities to truly ensure that the innovation of AI benefits people and communities. And so, with that belief in mind, a group of companies and civil society advocates and researchers came together to chart out a mission to build a global community that has now come together for many years, focused on ensuring that we’re developing AI that works for people, that works for workers, that drives innovation that is sustainable and responsible [and] privacy-protecting, and really enhancing equity and justice and shared prosperity.
Shervin Khodabandeh: Maybe give us an example, or some examples, of the companies and the type of research.
Rebecca Finlay: The very first investment that was made into the Partnership on AI was by the six large technology companies, so that’s Amazon and Apple, Microsoft, Facebook — now Meta — Google/DeepMind, and IBM. And it was really at that moment when this new version of AI, or what was new then — you know, deep learning, pre-generative AI, that wave of AI — was really starting to be deployed in internet search mechanisms, in mapping mechanisms and recommendation engines, and the realization that there were some important ethical questions that needed to be answered. And so that brought together a whole group of other private sector companies, but also organizations like the ACLU and research institutes at Berkeley and Stanford and Harvard, and internationally as well — so organizations like the Alan Turing Institute in the U.K. and beyond.
And so that group came together, and now we have a number of different working groups that are really focusing both on the impact of that predictive AI, but, even more importantly, the potential impact of generative AI foundation and frontier models.
Sam Ransbotham: What are some examples of some progress that you feel like the partnership has made? What are some specifics here?
Rebecca Finlay: Particularly in this area, it’s clear that we need to think about it through what we would call a sociotechnical lens. Yes, there are technical standards, like watermarking or standards like C2PA, that are thinking about how you clearly track the authenticity of a piece of media through the cycle. But you also need to think about, what are the social systems and structures in place? So one of the efforts that we developed and now have 18 organizations signed up and evolving with us is the framework for responsible development of synthetic media. And that is really looking across the value chain. What are the responsibilities of creators, developers, deployers — that’s platforms and otherwise — when it comes to thinking about how to disclose appropriately to make sure that whoever comes in contact with the media that is developed is aware that it is AI-generated in some way, and also to make sure that the media that is being developed is not being maliciously used or being used in any way to harm? And we have a whole series of lists and information about what those harms are and why we need to be protecting people from them. So that’s a really important effort.
And, of course, the question is, how is it being used? And so one piece of this work that we’ve been doing is to ensure that the companies and organizations and media agencies that have signed up to support this work are really being transparent about how they are using this to respond to real-world circumstances. And so we make that available. And the goal is, yes, both to be accountable, in terms of the framework itself, but also to try to create case studies that other organizations can use and learn from as well.
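[Editor’s note: For readers curious about what “tracking the authenticity of a piece of media through the cycle” can look like in practice, here is a deliberately simplified sketch of the content-provenance idea behind standards such as C2PA. It is not the C2PA specification or PAI’s framework: the key handling and field names are illustrative assumptions, and real systems embed certificate-based signatures in the media file rather than using a shared secret and a side-car dictionary.]

```python
# Toy illustration of content provenance: bind signed claims (creator, AI involvement)
# to a fingerprint of the media bytes so a downstream platform can check them.
import hashlib, hmac, json

SECRET_KEY = b"demo-signing-key"  # hypothetical key; real systems use PKI, not a shared secret

def create_manifest(media_bytes: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind provenance claims to a hash of the media content."""
    claims = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the claims are untampered and describe the media we actually received."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, manifest["signature"])
    content_ok = hashlib.sha256(media_bytes).hexdigest() == manifest["claims"]["content_sha256"]
    return signature_ok and content_ok

image = b"...synthetic image bytes..."                # placeholder media content
manifest = create_manifest(image, creator="example-studio", ai_generated=True)
print(verify_manifest(image, manifest))               # True: claims intact and matching
print(verify_manifest(image + b"edit", manifest))     # False: content no longer matches the claim
```

The point the sketch tries to capture is that the claim “this is AI-generated” travels with a verifiable fingerprint of the content, so whoever displays the media can check both that the claim is intact and that it refers to the exact bytes being shown.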
Sam Ransbotham: It makes a lot of sense. I guess what I was particularly interested in is the aspect of deployment there. So, you know, in my mind, it’s really hard to imagine the FAANG companies [Facebook (now Meta), Amazon, Apple, Netflix, and Google (now Alphabet)] that you mentioned that were part of your origin story. They have enough economic incentives not to, let’s say … let’s pick deepfakes as an example. They have a lot of incentives not to create this sort of media, and so I guess I’m less worried about them being involved in the creation, but you also mentioned deployment, which I think is really where they have a huge role. And it seems like this is a case where maybe the people that the partnership is focusing on are probably what we would call the people who are likely to behave well in the first place. From that generative standpoint, how do you reach the people who are, let’s say, not inclined or not incentivized to behave well?
Rebecca Finlay: Great question. None of the work that we’re doing at the Partnership on AI in any way should stop appropriate regulation. I’ve always been a supporter of governments attending to and being aware of and acting upon harms and ensuring that citizens are protected. And so regulation is a key part of thinking about an innovative accountability ecosystem. But, at the same time, we do think it’s really valuable to be helping those organizations that do want to be good and responsible actors to know what “good” looks like.
It’s both helpful in terms of companies, in terms of the work they’re doing. It’s also helpful, I think, to ensure that civil society and academics have a seat at the table in saying what “good” looks like. And then we also are finding that it’s very helpful for policy makers. So helping them to better understand the details of the emerging technology and where regulation is appropriate and useful is part of the work we do as well.
Shervin Khodabandeh: What are the partnership’s views on [AI’s] impact on [the] workforce?
Rebecca Finlay: This, I think, is one of the most fundamental areas that we all need to be thinking about. And it’s funny that it’s the one area in the whole AI development conversation that has this sort of air of inevitability to it, you know? It’s sort of like “The robots are coming; they’re taking our jobs. We need to look at [universal basic income]; we need to think about all these other mechanisms as well.” And so we’ve really rejected that inevitability and said, “No, there are choices that firms and employers and labor organizations and policy makers can make to ensure that AI is developed to augment workers and to ensure that workers’ voices are at the table when thinking about developing this technology.”
And this is a very complex question. Yeah, it’s a question of education and reskilling, but it’s also a question of, how are we measuring and evaluating these systems to ensure that we’re focusing on development that works for people? We issued a set of guidelines — we call them the Shared Prosperity Guidelines — which really were trying to get very specific. They were based on a series of interviews and some research that we’d done with workers themselves to better understand when and how AI deployed into their workplaces was beneficial and when it wasn’t. And so —
Shervin Khodabandeh: Beneficial to who?
Rebecca Finlay: Beneficial to the workers. There are all sorts of times when workers realize the value in having AI systems working with them — that is, so that they can be more creative, so that they can make more decisions, so they can be more innovative in the way in which they approach their work. When it’s not working for them, of course, is when they feel as if they’re being driven to rote tasks or repetitive tasks or being surveilled in some way through the system. So [it’s about] taking those insights and saying, “OK, so how do we make choices, clear choices, when we’re deploying these systems, to ensure that we’re doing it in a way that works for workers?”
And so I think … if you are an employer and thinking about “How do you even begin to wrestle with these questions of how to use generative AI in the workforce?” you need to be thinking about it in a very experimental way rather than thinking, “All right, it’s going to save me all these costs. Let’s deploy it 100% in this direction,” thinking about it as a way to sort of pilot new technologies, hear from your workers. One of the things we know is that a lot of AI systems fail when they’re deployed because we haven’t thought through what it actually means to put it into a workforce setting [or] what it actually means to put it into a risk management setting.
Shervin Khodabandeh: Yeah. I love that, particularly the lens of, it clearly can give a fair amount of efficiency, but if that’s the only lens, then what happens is, you’re missing the bigger opportunity. But I also think, in that bigger opportunity, of how AI and humans [can] sort of grow the size of the pie together. I think AI is not going to replace an entire worker, but it will replace tasks. So, “Now, 30% of my tasks have been replaced. I have to do something with that 30%.”
Sam Ransbotham: I’ve got no shortage of things to do with my 30%.
Shervin Khodabandeh: You can’t fire 30% of me, but if you have not just the cost lens, and you have much more of a growth and productivity and profitability and innovation lens, then the sum of all those 30 percents can go into creating all those new opportunities.
I was wondering if there is any research or thoughts that the partnership has on, what are some of these opportunities that are going to create jobs versus replace jobs? There’s a lens here, which is, “We’ve got to be ethical, we’ve got to make sure that harm is not done, and we also have to have a longer-term perspective of not just replacing [workers] with pure blind automation.” So that’s just sort of more like protecting. But I wonder if the partnership has any points of view on, how do you expand the art of the possible through the creation of new roles and new jobs?
Rebecca Finlay: Well, 100%. I mean, I think the way you’ve just described it, it’s not a trade-off. It’s not like responsibility and safety are a counterweight to innovation and you have to constantly be choosing between innovation and benefits. We know that in order to be innovative, in order to be opening up new markets, in order to be thinking about new beneficial outcomes, you need to be thinking about how you are doing this safely and responsibly as well. The whole opportunity for generative AI to become much better — because today, it still has real challenges, whether it’s hallucinations or other ways in which it is deployed — just hasn’t really gotten to where it needs to be. But we should be thinking about, once it’s there, how it can be deployed to really deal with some of the biggest global challenges of our time.
That’s why I’m at the Partnership on AI: because I believe that AI does have that transformative potential to really support important breakthroughs, whether it’s in health care or, really, the big questions in front of us around our environment and about sustainability. We’re already seeing this in the predictive AI world, where we’re starting to see it just becoming integrated into the scientific process across all sorts of disciplines.
I do think, getting back to this question of the trade-off between responsibility and innovation, that one of the things that I hear from companies right now is, they feel alone as they’re trying to disentangle the risks of deploying these technologies and the benefits to their productivity and the innovation and how they serve their customers as well. And so one of the reasons why I think the work that we do at PAI is important is, I want to say there is a community of organizations that are wrestling with exactly these same questions, that are trying in real time to figure out, what does it mean to deploy this responsibly in your workforce? What does it mean to think about the safety of these systems and how they’re operating, whether that’s auditing oversight or disclosure or otherwise? How do you experiment, and what is best practice? And so I think more and more, if we can let companies and organizations know that there are communities who are actively working on these questions, where you can get some insights and, really, in real time, develop what will become best practice, that that’s a good thing for them to know.
Shervin Khodabandeh: Yeah. It sounds like education and collaboration and sort of sharing all these things is key because, absent that, it’s really easy for many people, including many executives, to think of AI as “It’s a tool; it’s going to give me 12% productivity. I’m going to need 12% less people, and let’s figure out how to do it,” which is a valid way of thinking if you’re not exposed to everything else that’s happening. And this is really interesting: You mentioned that so many of the participants in this partnership are actually big tech firms that are shaping the very thing we’re talking about.
Rebecca Finlay: Yeah. I think it also goes to show that it’s all very new. So many of the questions that we’re dealing with, whether you’re a large technology company or a small startup [that] wants to develop systems based on the release of these foundation models or otherwise, so many of these questions are being sorted out in real time. Now, that doesn’t mean that we shouldn’t be looking to other sectors, and I know you both have experience in sectors where we should be able to learn better about “What does it mean to be safe and responsible as we deploy these AI systems?” as well. But I do think that, even for policy makers, both keeping up to speed on the latest developments and the pace of the technology, but also understanding the nuances of the technology, it’s a tricky time. And so finding places where we can learn together is really crucial.
Sam Ransbotham: Learning together makes a lot of sense because I think one of your key missions is about [the] collective good and the idea, as you just said, that we are learning as we go, and this is a technology where you can’t look at the back of the book and see the right answers.
But we tend to learn better from things that don’t go well. The news media — they pick up examples of extreme wonders and extreme terrors. There’s very little of this nuance that you just mentioned. How do we elevate, and how do you get people to share, this nuance in the good experiences and the bad experiences? How do you get them to learn for this collective good?
Rebecca Finlay: Yeah. That is just such a fundamental question. And, of course, from your experience and from other sectors, we know that building a culture of safety means building a culture of transparency and disclosure.
Sam Ransbotham: Exactly.
Rebecca Finlay: Right?
Sam Ransbotham: Which we don’t see much of.
Rebecca Finlay: Right. We were really happy, a number of years ago, to work with some researchers to develop an incident-reporting mechanism, which is now thriving. And you’ll see, if you’re looking at any of the emerging frameworks around foundation models or frontier models, whether it’s the G7 work or work coming out of the OECD or elsewhere, this question of incident reporting is becoming very, very clear. And how do we create ways so that we better understand, once these systems are deployed, what is actually happening and how we can all learn from them? So I think that’s one piece of it [that] is going to be crucial.
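[Editor’s note: To make the incident-reporting idea concrete, here is a minimal sketch of what a structured AI incident report might capture. The field names and values are illustrative assumptions, not a published schema from PAI, the OECD, the G7, or any incident database.]

```python
# Illustrative structure for an AI incident report; all fields are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIIncidentReport:
    system_name: str                 # the deployed model or product involved
    deployer: str                    # organization operating the system
    date_observed: date
    harm_description: str            # what went wrong, in plain language
    affected_groups: list[str] = field(default_factory=list)
    severity: str = "unknown"        # e.g., "low", "moderate", "severe"
    mitigation: str = ""             # what was done in response

report = AIIncidentReport(
    system_name="customer-support-chatbot",
    deployer="Example Corp",
    date_observed=date(2024, 3, 1),
    harm_description="Chatbot gave customers incorrect refund-policy information.",
    affected_groups=["customers seeking refunds"],
    severity="moderate",
    mitigation="Responses now cite the source policy page; the incident was shared publicly.",
)
print(json.dumps(asdict(report), default=str, indent=2))
```

Even a lightweight shared record like this is what would let organizations compare deployments and learn from one another’s failures rather than from press coverage alone.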
Shervin Khodabandeh: When I think about the past 25 years that I’ve been in this field, it’s all, typically, a one-sided story from the perspective of the technology developer or proponent, which then forces a very negative sort of dialectic on the other side, from the group or organization or people who are trying to rein it back in, which gives this very polarized way of dealing with this. But we are at a precipice or an inflection point where the speed with which we learn and adapt is really critical, which means we need to share things that aren’t working.
Just a couple of days ago, a YouTube video came on my [feed] that says “how we messed up this customer’s order,” and the whole point was about how that company screwed up and what mistakes they made, and then how they corrected it. And that’s all they do: They basically post all these videos about how they’ve messed up something and how they respond to it. And they get a lot of views because they just literally celebrate their mistakes and learn from it, which is not something you see often.
Sam Ransbotham: Maybe I should start a YouTube channel with all the screwups I do in class.
Shervin Khodabandeh: You’d get a lot of views, for sure.
Rebecca Finlay: You know, I’ve been thinking a lot about this question of openness lately, because you’ll both know that there was a debate last year between open models and open-source models and closed models, and which were safer. And so, in the executive order, for example, over the past year, there was a real interest in finding out more about open source and what the marginal risks are associated with those types of models being deployed. But for me, this question of openness relates directly to this point I think you’re making about sharing and how do we build a culture of open sharing into the AI ecosystem. And so I think, first and foremost, it has to start in the classroom. It has to start in the research community itself.
As you know, in the AI research community, we don’t have the same culture of ensuring that everything is published openly. We have a lot of research that’s now happening behind closed doors. We don’t have the same sort of journal and editorial and oversight perspective that we do in some of the other fields of science. We have a lot of things that are published via conferences or published directly to arXiv. So, what do we need to do? What does publicly funded research need to do at that very, very early stage in order to incent a culture of true scientific openness and scrutiny?
And then, yes, the next piece of it has to be, what is the level of transparency and disclosure when these systems are being deployed out into the world? We put together a set of guidance all the way from pre-deployment R&D to post-deployment monitoring. What does it mean, at 22 places along the development and deployment ecosystem, to be consciously disclosing and attending to risks and ensuring that guardrails are in place? We think about the disclosures that are required in the financial services sector, for example, or elsewhere. What does that mean, to have that type of disclosure regime in place for some of these very, very large models?
And then I think the last piece of it is, how are we making sure that we are bringing the public and citizens into this conversation about the way in which these tools are developed and deployed? So, what does “openness” mean when we think about citizen engagement in this process as well [and] being really part of the tech development process to ensure that their voices are heard, about how they want the technology to work for them and not on them? So I do think, as we sort of build our skills and capacity around the deployment of this technology, we need to be thinking about openness, sharing, and disclosure in order to ensure that it really does work for us.
Sam Ransbotham: You mentioned public funding, but I think that’s part of the thing too. That’s one thing that interests me about the partnership: So much of basic science used to come out of universities, and it used to come out of [the National Institutes of Health] or NSF [National Science Foundation] funding, but that’s not who’s really on your board. That’s now the FAANG companies that started the partnership. That’s why it seems particularly important that maybe the partnership has a role here that might not have existed with prior types of technologies, because the costs are staggering and universities are not really able to participate in the research budgets of … You know, OpenAI has close to $10 billion of funding from Microsoft, but the NSF had about a $9.9 billion budget in 2023. So that one project — it would wipe out the NSF budget entirely.
So it’s not clear exactly how that plays out. And, because we talk about openness, I think the community’s been very open about sharing algorithms; those algorithms are very widely dispersed. But openness really depends on data, too.
Shervin Khodabandeh: I also feel like it’s really openness on deployment and …
Sam Ransbotham: Like where it’s being used? Is that what you mean?
Shervin Khodabandeh: How it’s being used, where it is messing things up. It’s beyond the algorithm, right? It’s the data that gets fed into it.
Sam Ransbotham: Right.
Shervin Khodabandeh: It’s all the decisions that get made. It’s the human role. Just in general, Sam, your comment around sharing and, Rebecca, your point around transparency — it really resonates with me. Because I just feel like, generally speaking, when it comes to technology, it’s sort of an alpha game of like “This is the best one. It’s better than the other one. Its accuracy is more; its errors [are] less.” Like, everything is always so great and is better than everything else. But I do feel like there is something here, and maybe it’s also more on the evolution of us as a society, as to just accepting and acknowledging that these systems make mistakes. And if you’re not hearing about it, it’s because somebody’s hiding it rather than it’s not [happening].
And so I think the more some of these big players begin to openly talk about these things, so that they’re not the only one …
Sam Ransbotham: That’s a great example.
Shervin Khodabandeh: I mean, if many of these larger players in this world really exhibit the same kind of openness that we’re talking about here — “Look; we made a mistake. This is how we’re fixing it. This is what happened” — I actually think that would demystify a lot. I think that will increase, a lot more, the public trust and will allow a real dialogue because I actually feel like this conversation is very much polarized at all levels. It’s very polarized for those who understand AI and those who don’t. Within those who do understand AI, it’s very polarized between the proponents and the folks against it.
Sam Ransbotham: Fearmongering.
Shervin Khodabandeh: Yeah. I feel like we need to break that polarization and bring these two sides together.
Sam Ransbotham: So maybe just to shift that slightly, Rebecca, we’ve been talking a lot about a Western-centric world, but so much of this is going to affect the globe. What does the partnership think about and how does it try to work with those outside of San Jose [in] the rest of the world?
Rebecca Finlay: Absolutely. And, of course, we know today that many workers outside of the Global North are very much consumers of the technology that is being developed in the Global North as well. So we need to make sure that they are part of the global governance conversation. I was really heartened to see, just this September around the UN General Assembly, that there was a really big initiative to think about bringing many, many, many of the voices that are not around the table at the G7 or many of the other discussions, whether it’s the safety institutes that are being developed in the different countries, to bring those countries and those voices into the conversation about what is needed from a global governance perspective.
We had the High-Level Advisory Body [on Artificial Intelligence] to the secretary-general at the UN release their report really starting to think about, what does it mean? There’s a lot of work that we’ve been doing to try to hear from organizations across the globe in, how is AI currently working within your populations? How would you like it to be working? And that’s both from companies who are developing and startups who are developing technologies through to many academics who are developing all sorts of new technologies — thinking about the way, for example, to develop data sets with languages that are not prevalent in those that are being developed out of the West. So there’s really so much interesting work that’s happening, and thinking about how to get those voices into this conversation is core to what we do at PAI as well.
Shervin Khodabandeh: So now we have a segment [that’s] called Five Questions, where I’ll ask you a series of rapid-fire questions. Just tell us the first thing that comes to your mind. What do you see as the biggest opportunity for AI right now?
Rebecca Finlay: The biggest opportunity for AI is for companies to start experimenting with it, to see how it can truly drive innovation and potential in the work that they’re doing. Start experimenting in a low-risk, high-value way as soon as possible. The more you use the technology, the more you understand what it can do for you and what it can’t.
Shervin Khodabandeh: What is the biggest misconception about AI?
Rebecca Finlay: That it is a technical system that sits outside of humans and culture. I always say, AI is not good, and it’s not bad, but it’s also not neutral. It is a product of the choices we make.
Shervin Khodabandeh: Mm-hmm. Very good. What was the first career you wanted? What did you want to be?
Rebecca Finlay: I think when I was younger, I wanted to be a journalist. I wanted to tell the stories of people and societies, and I’m very happy that, as part of my job, I get to do that today.
Shervin Khodabandeh: When is there too much AI?
Rebecca Finlay: I think there’s too much AI when we see people deferring to AI systems or overtrusting AI systems. You know, one of the things that we know is that we tend to overtrust something that happens through computers or machines.
Shervin Khodabandeh: What is the one thing you wish AI could do right now that it cannot currently?
Rebecca Finlay: Oh my goodness. Where do I get started?
Sam Ransbotham: You have to pick one.
Rebecca Finlay: You know, my favorite use of AI right now is my bird app. I don’t know if you’ve used the Merlin bird app, but it’s a great app for bird-watchers because you can take a very fuzzy picture of a bird, and it will give you a nice match in its system. Or you can take a recording of a bird’s song, and it will tell you what the bird is. So I guess what I would love AI to do is to help me see the birds, help me find the birds in the trees.
Sam Ransbotham: It’s like Pokémon GO for birds, I guess.
Rebecca Finlay: I would really love that. I’ve been enjoying so much learning more and more about birds. But I need to find them.
Shervin Khodabandeh: Rebecca, it’s been wonderful speaking with you. Thank you.
Rebecca Finlay: It has been my pleasure entirely. Thanks so much for having me.
Sam Ransbotham: Shervin, Rebecca, I thought, has a really interesting perspective. This idea of, we’re all in the same boat, we’re all developing this technology, none of us has any experience with it, but it affects all of us. That’s a great perspective, and it makes me optimistic and worried at the same time.
Shervin Khodabandeh: Well, I agree with you. I also think that you had a very good point on [the] importance of transparency and sharing. Like the question you asked about, how do we get people to share more about the things that don’t work? We hear all the positives. I actually think those two points come together: “Look, we’re all in this together; it will help us all if we are transparent and collaborative about it.” And in many ways, I mean, AI has gotten to what it is mainly because of the open-source nature of it, right?
Sam Ransbotham: Absolutely. That’s huge.
Shervin Khodabandeh: But when we say “open source,” we don’t just mean “Let’s share the algorithms”; in this new paradigm that we’re entering, it also means “Share lessons from deployment and things that don’t work.” And, for me, the YouTube thing I was referring to was really eye-opening.
Sam Ransbotham: I liked that.
Shervin Khodabandeh: Because usually when you see a tagline that’s like “how we screwed up this customer’s order,” you’re like “Why does this have so many views?” and you’re looking for something chaotic. And they literally are showing all the mistakes they made and who made those mistakes, and it doesn’t matter the mistakes you make; it matters how you correct them and what you do in correcting them. I actually think we have entered, really, a phase where we need to, as a society and as a community of technologists and innovators, share a lot more about things that don’t work and why they don’t work. If we’re truly going to be open source, we should be open source about that, too.
Sam Ransbotham: Yeah, that’s a great expansion. If I hand back an exam to people, they skim right over all the things that they did correctly and focus on the ones they did wrong, because that’s where we have the opportunity for learning. And actually, to make it even sort of more of a machine learning example … the fundamental idea behind boosted trees and boosting algorithms is that you pay a lot more attention to the places where your model makes an error than where the model gets it right.
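[Editor’s note: Sam’s boosting analogy can be made concrete with a short sketch. The loop below is an illustrative AdaBoost-style procedure using scikit-learn decision stumps: after each round, the sample weights on misclassified examples are increased, so the next weak learner concentrates on the cases the ensemble is getting wrong. It is a teaching sketch, not a production implementation; scikit-learn’s AdaBoostClassifier provides the real thing.]

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification problem; the AdaBoost convention uses labels in {-1, +1}.
X, y = make_classification(n_samples=500, random_state=0)
y = np.where(y == 0, -1, 1)

n = len(y)
weights = np.full(n, 1.0 / n)   # start with uniform sample weights
learners, alphas = [], []

for _ in range(10):             # 10 boosting rounds
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=weights)
    pred = stump.predict(X)

    err = np.sum(weights * (pred != y)) / np.sum(weights)
    err = np.clip(err, 1e-10, 1 - 1e-10)     # guard against degenerate rounds
    alpha = 0.5 * np.log((1 - err) / err)    # how much say this learner gets

    # The key step Sam describes: misclassified points get heavier weights,
    # correctly classified points get lighter ones.
    weights *= np.exp(-alpha * y * pred)
    weights /= weights.sum()

    learners.append(stump)
    alphas.append(alpha)

# Final prediction is a weighted vote of the weak learners.
ensemble = np.sign(sum(a * l.predict(X) for a, l in zip(alphas, learners)))
print("training accuracy:", (ensemble == y).mean())
```

On a toy problem like this, the weighted vote of 10 depth-1 stumps typically fits the training set far better than any single stump; the reweighting is what forces later learners to attend to earlier mistakes.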
Shervin Khodabandeh: That’s right.
Sam Ransbotham: Shervin, I think we’ve got another example here where we’re getting into a little more depth behind what it means to go beyond the platitude. You know, we’ve got these ideas that no one disagrees with of doing the right thing; don’t do the wrong thing. But like Jeremy Kahn said back when we were talking with him, adaptability? Sure, be adaptable, but how do you be adaptable? Well, today we were talking about open. How do you be open? I don’t think we have the right answer to this, but it’s an interesting thing to think about.
Shervin Khodabandeh: We need to celebrate transparency around mistakes and how we correct them.
Sam Ransbotham: One thing I liked that she also said was about this idea of sociotechnical. I think that a lot of people view this as a technical problem [that] has a technical solution. For example, we have copyright infringement, and so watermarking is a technical answer for that.
Shervin Khodabandeh: That’s right.
Sam Ransbotham: And, you know, I’m not against watermarking; I think there’s a lot of potential for that. But it’s not just going to be solved by a technical solution. These are things that are operating within societies, within cultures. And that’s a theme that keeps coming through. We did the report a couple of years ago about the intersection between culture and the technology of artificial intelligence. It seems like there’s a growing recognition that it’s more than just a new set of algorithms.
Shervin Khodabandeh: That’s right. That’s right. I mean, it is, when I think about any technology that’s widely available to people, whether it’s cars or whatever it is, it is a question of responsibility of an individual too, right? But I think that the cultural aspect of this, which can start from within corporations … I mean, we did talk about how AI makes individuals and then teams and then organizations more proud and happy, etc., right?
I do really think that the point around the intersection of society and culture with technology is going to be really, really key here rather than just a bunch of regulations and bunch of technology artifacts that sort of correct mistakes or prevent mistakes. It’s how we choose to use something, right? When I get behind [the wheel of] a vehicle, it doesn’t matter whether there is a speed limit. I mean, it does matter, but it doesn’t matter whether I’m being followed or there is a cop or whatever. There is something about being responsible, and it’s ingrained in us as human beings.
Sam Ransbotham: And it transcends regulation itself.
Shervin Khodabandeh: Yeah. It’s like, look, you can’t be reckless because you’re going to endanger your own life, other people’s lives. It is something that we’ve now begun to accept, as a society, that just goes with being responsible with a tool. And Rebecca said it very nicely: It’s not good or bad; it just is. And it’s how you use it that’s going to make a difference.
Sam Ransbotham: If I step back and I think back to some folks we’ve had — we’ve had Mozilla, for example; we’ve had the Partnership on AI; we’ve had Amnesty International — these are not some of the traditional companies using artificial intelligence. And, you know, I think some of these organizations may fail. Some of their initiatives might not end up working well. But I think the fact that they’re trying, and they’re pushing toward these things can help that collective good, even if they don’t quite reach the goals that they set for themselves.
Even if they come close, I think it’s an important force, and I’m glad that we’ve had some of these guests on here to share that.
Thanks for listening. Our next episode is a bonus episode, where I speak with Oxford’s Carl Frey and LinkedIn’s Karin Kimbrough at a recent conference on jobs in the age of artificial intelligence.
Also, if you have a question for us about AI strategy, implementation, use, Shervin’s favorite flavor of ice cream, or anything else, we have an email address: smr-podcast@mit.edu. We’ll include that email in the show notes. Send us your name, where you’re from, and what question you have, and we’ll dedicate an episode to airing some of those questions and the best answers we can come up with. Thanks.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.