Artificial Intelligence and Business Strategy
In collaboration with BCG
ExxonMobil is an energy company that’s existed since 1870, well before artificial intelligence. So, what does an AI manager at ExxonMobil do? In the latest episode of the Me, Myself, and AI podcast, hosts Sam Ransbotham and Shervin Khodabandeh interview Sarah Karthigan, AI operations manager for IT, to find out.
Sarah leads a data science team tasked with making use of large volumes of data, with the goal of offering reliable and affordable energy to a variety of populations. A major focus of Sarah’s efforts has been around self-healing, a method for internal process improvement. Listen in to learn how her group secures buy-in for various technology initiatives and works to continually improve human-machine collaboration for the organization.
Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.
Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.
Shervin Khodabandeh: An all-you-can-eat sushi buffet and artificial intelligence — how are they related? Find out today when we talk with Sarah Karthigan of ExxonMobil.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities across the organization and really transform the way organizations operate.
Sam Ransbotham: Today we’re talking with Sarah Karthigan. She’s the artificial intelligence for IT operations manager at ExxonMobil. Sarah, thanks for joining us. Welcome.
Sarah Karthigan: Thanks for having me.
Sam Ransbotham: ExxonMobil is one of the world’s largest oil and gas companies, and it’s existed since the 1870s, long before artificial intelligence. Sarah, can you tell us about your current role at ExxonMobil?
Sarah Karthigan: I am currently responsible for leading the design and execution of self-healing strategies for IT operations, using artificial intelligence. Self-healing, at its core, is proactively monitoring, detecting, and remediating issues without human intervention.
Sam Ransbotham: How did you end up in that role?
Sarah Karthigan: My background is in electrical engineering, and I started at ExxonMobil as a technical lead. I then went down the management career path, but one of my jobs took me up to Clinton, New Jersey, to support the corporate strategic research function. So it is there I got exposed to data science, artificial intelligence, and machine learning. I was a part of several pilots where we were assessing artificial intelligence capabilities. This inspired me to go back to school, and I pursued my graduate certification at Harvard in data science. One thing led to another, and I came back to Houston to take on my data science manager role.
Sam Ransbotham: Maybe let’s start with an example of a project that — maybe self-healing, maybe one of these projects — [is] an example of a concrete way that you and your team have used artificial intelligence in a way that you couldn’t have done before artificial intelligence.
Sarah Karthigan: There are plenty of opportunities for artificial intelligence in the energy sector, but before I actually give you some examples, I think it’s worthwhile to just understand the scale of operations in the energy sector. So, starting with the basics here, energy is constantly evolving, and when you think about energy, it underpins every area of modern life, right? So when you think about mobility or economic prosperity or social progress, access to energy underpins all of that. And at its core, what we do here at ExxonMobil is ensure that we are able to offer reliable, affordable energy to the masses. So the scale of energy itself is quite unimaginable, and the data that we work with is also massive. Big data is not new to the energy sector, so we deal with just huge volumes of data. Without artificial intelligence, without data science, or without machine learning, you can imagine the amount of effort that goes into just processing and analyzing that data. And with artificial intelligence, it is such a big, big, big advantage. The potential that AI carries with respect to just overall improving efficiency and cost effectiveness is huge.
We also use artificial intelligence for areas where we are able to automate manual tasks, thereby improving safety and productivity. And if we are able to get people [out] of harm’s way, that’s a huge application for artificial intelligence in the energy sector. Additionally, ExxonMobil is an energy company, but at its core, again, we are a technology company, and so we can use AI to help our scientists and engineers in their decision-making process. We are able to augment their decision-making, connect the dots, and help discover insights of value [for] them at a much faster pace, so there are plenty of applications.
My team and I, we have worked on several use cases. And again, when you think about big data, clearly you can think of potential applications of deep learning when it comes to image processing. Now whether that’s [at] the front end of the value chain — you know, you can start with seismic image processing to even leak and flare detection — so we can use artificial intelligence for just, again, plenty of use cases. So that’s one side of things. You can also use artificial intelligence — and we have used it for demand sensing, for dynamic pricing, for dynamic revenue management. Also, we have used it for trading. So there [are] just so many different applications that my team has been involved in.
Shervin Khodabandeh: Sarah, tell us a bit about self-healing. I think you mentioned building AI systems that can preempt issues or problems or errors or faults — I don’t want to put words in your mouth — without human intervention. Could you give us some examples of those?
Sarah Karthigan: It all starts with monitoring, right? How well can we monitor our systems, capture the right type of data, and then integrate data, which is probably sitting across silos today? It all begins with that: capturing the data, bringing it all together, and integrating it so you're able to have visibility across the different silos that we have in place. It starts with observability. And then, once you have the data in place, now we are talking about: How can we utilize the data? How can we analyze it? How can we teach a machine? How can we train a machine to extract insights out of that data, to look at patterns, to see what typically happens before an incident occurs? It is able to look for those patterns. It's able to understand the history and detect anomalies, and thereby it is able to prompt either an end user or — if you close the loop with automation altogether — kick off the necessary automations, so we are able to remediate the issue even before it becomes an issue. That is kind of the life cycle of self-healing.
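[Editor's note: The monitor–detect–remediate loop Sarah describes can be sketched in a few lines of Python. This is a minimal, hypothetical illustration — not ExxonMobil's actual system — and every function name below is an assumption made for the example.]

```python
from statistics import mean, stdev

def detect_anomaly(history, value, threshold=3.0):
    """Flag `value` as anomalous if it deviates more than `threshold`
    standard deviations from the recent history of normal readings."""
    if len(history) < 2:
        return False  # not enough history yet to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

def self_heal(metric_stream, remediate, window=20):
    """Monitor -> detect -> remediate: watch a stream of metric readings
    and call `remediate` whenever an anomaly is detected, closing the
    loop without human intervention."""
    history = []
    incidents = []
    for t, value in enumerate(metric_stream):
        if detect_anomaly(history[-window:], value):
            incidents.append(t)
            remediate(t, value)    # e.g., restart a service, roll back a config
        else:
            history.append(value)  # only learn from normal readings
    return incidents

# Example: steady latency readings with one spike that triggers remediation.
readings = [100, 102, 99, 101, 100, 103, 98, 500, 101, 100]
actions = []
incidents = self_heal(readings, lambda t, v: actions.append((t, v)))
```

In a real deployment the anomaly detector would be a trained model over many integrated signals rather than a z-score over one metric, but the shape of the loop — observe, detect, remediate, resume — is the same.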
Shervin Khodabandeh: Yeah, that’s very helpful. And tell us a bit about the number of use cases, if you will. How big is this group’s span of impact and work?
Sarah Karthigan: There are multiple groups within ExxonMobil, because, as you were saying, given the scale of the company, it’s not possible to just centralize all of the data science capability in just one group, so we do have data scientists. We have AI engineers — machine learning engineers — embedded into the different business functions so they are able to work very closely with the business. And the opportunities — there are many. We are working on a myriad of those use cases, and they only continue to grow.
Sam Ransbotham: Who initiates these projects? Are these things that your group comes up with, or [do] the business units bring them to you? What’s the working relationship there?
Sarah Karthigan: The nature of the AI project, as well as who initiates it, typically comes down to where a business line is in its AI adoption and utilization journey. If they are in the early stages, what you will see is typically they are looking at a few potential use cases. They are exploring a few enterprise-scale opportunities. That's where it kind of starts. But as they continue down that maturity curve, you will notice that now we're talking about systemic introduction of AI capabilities into core businesses. We're talking about true enterprise-scale opportunities, so we are able to drive data-driven decisions. And so, depending on where the business line is in its journey, that dictates the nature of the project as well as who initiates it. The more mature a business line is, the more it initiates the projects itself.
Sam Ransbotham: What’s an example of one that someone has initiated, or can you give us just a very specific “Before AI they were doing X, and then they came along and we said, ‘Hey, let’s use artificial intelligence and then we can do Y’?” What’s the difference? And can you give us some concretes around one of those?
Sarah Karthigan: I’ll start with a simple example. I touched on this earlier. ExxonMobil is a very data-rich company, so big data is not new to us. There’s data that is locked up in salt mines, so we have huge volumes of data. In the past, some of our geoscientists and geophysicists, they had to process a lot of unstructured data, pretty much manually. And they were the ones who were connecting the dots. These were the subject matter experts, so they were ingesting all of this unstructured data, and they were connecting the dots, and they were identifying the right place for us to go pursue.
But now, with the introduction of artificial intelligence, we were able to build an intelligent system that uses natural language processing to ingest huge volumes of data. And we're able to train that system to look for the right types of patterns and to help augment the decisions that a geoscientist or a geophysicist would make. So that is one example of how we use machine learning insights.
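[Editor's note: One of the simplest forms of the document-triage capability Sarah describes is ranking unstructured reports by relevance to an expert's query. The sketch below uses plain TF-IDF scoring — an illustrative stand-in, not the system ExxonMobil built; the function name and sample documents are invented for the example.]

```python
import math
from collections import Counter

def tf_idf_rank(query, documents):
    """Rank free-text documents by a TF-IDF score against a query,
    so an expert sees the most relevant reports first."""
    tokenized = [doc.lower().split() for doc in documents]
    n = len(tokenized)
    df = Counter()                 # document frequency of each term
    for tokens in tokenized:
        df.update(set(tokens))
    def score(tokens):
        tf = Counter(tokens)       # term frequency within one document
        return sum(
            tf[t] * math.log(n / df[t])
            for t in query.lower().split() if t in tf
        )
    scores = [score(tokens) for tokens in tokenized]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

docs = [
    "daily drilling report no anomalies observed",
    "seismic survey shows a promising sandstone reservoir",
    "maintenance log pump replaced on schedule",
]
order = tf_idf_rank("sandstone reservoir", docs)  # most relevant document first
```

Production systems would use trained language models rather than bag-of-words scoring, but the goal is the same: surface the handful of documents worth an expert's attention out of a huge corpus.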
Shervin Khodabandeh: I was going to ask you, Sarah — it seems like there is a large amount of human-AI collaboration that has to happen in this example that you gave, because we've got to imagine that a series of decisions that used to be performed by human experts and geologists and engineers is, over time, being augmented and maybe even entirely performed by AI. That must have gone through a pretty robust journey to get to a level where those experts are comfortable and actually seek out the machine rather than rely solely on their judgment.
So comment a bit about how that process happens and how you bring the experts and the geologists and the engineers and others from the old-school way to the new-school way. What does that feel like?
Sarah Karthigan: It is a journey, and it starts with, No. 1, understanding what is the appetite for new, emerging solutions with the end-user base, because this is not just a technology challenge; this is very much a cultural challenge. And then, of course, we make sure that we have advocates in the business before we start on any of these AI pilots, AI solutions … because ultimately, the end users need to be bought in. They shouldn’t be fighting the solution. They should very much be the ones who are adopting those solutions and who are helping propagate the changes that this would produce. We have seen that having a very robust management of change process is crucial for the adoption of an AI solution, for it to become a success.
And what we have also learned is, giving the end users an under-the-hood experience of what the tech actually does — what it brings — is extremely helpful. They are able to see that this is going to augment what they’re doing [and is] not going to replace them.
Sam Ransbotham: What is their reaction? When you give them this solution that does a lot of what they have been used to [doing] before, what is their reaction? How do they feel? What do they say?
Sarah Karthigan: They actually love it when they realize that the machine is actually helping them. And sometimes it is able to even lead them to areas that they may have not checked themselves. I have seen that the partnership goes really, really well once they understand the value that the new solution is able to bring to the table.
Shervin Khodabandeh: You led, actually, in your response to this question with several nontechnical factors first, right? So, “What’s your appetite, what’s the openness to change, and how badly do you want it?” Which is really quite insightful, because over the last 10 years, it’s just been indexed so much toward the technical side of things, and then the change management becomes an afterthought, and I was really energized that you actually led with the change management: “Before I do anything, before I write a single line of code, how badly do you want it?”
I want to follow on the appetite question. The first time I was offered sushi, my appetite for it was zero. But when somebody effectively forced me to try it, then it sort of became my [go-to] food. So how do you balance that act of not forcing the end user, but also helping them understand that what they think their appetite is before they try it is going to be different than what their appetite will be after they try it?
Sarah Karthigan: When I first founded the group, when I had my first set of data scientists, we were actually met with quite a lot of skepticism, to your point — so a lot of people thinking, “All of this is just hype. … Why are we doing this? We know what we are doing. We have done what we do very successfully. So why do we have to change it?”
So when we started out, it really came down to demonstrating the art of the possible. We were knocking [on] a lot of doors and asking people, “Hey, just give us your data. And you don’t have to even engage with us,” because folks were at that time a little bit skeptical about the amount of time it would require on their part, and they were not necessarily ready to offer that at the get-go. So we started out with, “Just give us your data, and let us come back to you with what we can discover on our own and see if that is of interest to you or not.”
Shervin Khodabandeh: And now you have many sushi restaurants?
Sarah Karthigan: Very much so.
Sam Ransbotham: An all-you-can-eat buffet. So let’s say that you’ve got these people somewhat convinced and interested, and then you start to put things into production. How do you keep them going? How do you keep them improving? How do you keep them continuously getting better? Do you have processes around that, and, if so, how is that organized?
Sarah Karthigan: I’ll tell you this much: It’s been an interesting learning experience. Because it’s one thing to go build out a model. It’s one thing to go ahead and create a prototype and have everything working. But it’s another thing altogether when you’re trying to operationalize it. After you operationalize AI solutions, what we have learned is, No. 1, [in order] to make sure that it is fully integrated into the business processes, there are several things that you have to be aware of and keep tabs on. We ensure, after a solution has been operationalized, that it is being monitored.
So that is extremely important. Now, we learned very quickly [that] you cannot monitor all the features of the model, so there are some features that you have to home in on — those with the potential to disrupt; not necessarily to break the model, but those with the greatest potential to impact the predictions. So we want to home in on those types of features and monitor them and see if concept drift is setting in, because once a model moves into production, it starts degrading. That's the reality. So we need to ensure that we are keeping our eyes on the model to make sure that the predictions are still accurate, that they are still useful. We also make sure that our models are being retrained with the latest and the greatest data.
We are looking into adopting a weighting mechanism so that more recent data is weighted [more] heavily in retraining a model than older data. And we’re also looking into continuous improvement, continuous training, and continuous learning methodologies for our models. So these are some things that we do once a solution has been productized.
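[Editor's note: The two ideas Sarah just described — watching a key feature for drift and weighting recent data more heavily at retraining time — can be sketched simply. This is a hypothetical illustration under assumed names, not ExxonMobil's monitoring stack; real systems use richer drift statistics than a mean shift.]

```python
import math

def mean_shift_score(train_values, live_values):
    """A simple drift signal: how far the live mean of a feature has
    moved from its training mean, in units of the training std dev."""
    n = len(train_values)
    mu = sum(train_values) / n
    var = sum((x - mu) ** 2 for x in train_values) / n
    sigma = math.sqrt(var) or 1.0  # avoid division by zero
    live_mu = sum(live_values) / len(live_values)
    return abs(live_mu - mu) / sigma

def recency_weights(n, half_life=30):
    """Exponential sample weights for retraining: the most recent of `n`
    samples gets weight 1.0, and a sample `half_life` steps older
    counts half as much."""
    return [0.5 ** ((n - 1 - i) / half_life) for i in range(n)]

train = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3]   # feature at training time
drifted = [12.0, 12.1, 11.9, 12.2]                 # same feature in production
stable_score = mean_shift_score(train, train)      # near zero: no drift
drift_score = mean_shift_score(train, drifted)     # large: time to retrain
w = recency_weights(61, half_life=30)              # pass as sample weights
```

The weights `w` could then be handed to any trainer that accepts per-sample weights, so newer observations dominate the retrained model without discarding older history outright.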
Sam Ransbotham: Within your organization — and that’s about how the models get better — how do you help the end users get better? You mentioned initially working with them to make sure that it’s not too much resistance to even consider trying a model — that’s Shervin even trying sushi in the first place — but how do you get them to appreciate the finer culinary aspects? I mean, maybe for all we know, Shervin’s stuck on the same piece of sushi that he started with years and years ago, but there [are] lots of other types out there. How are you growing that understanding in the user base?
Sarah Karthigan: We have several efforts in progress within the company where we are looking at upskilling our employees, making sure that we are able to train them on the latest and the greatest emerging technologies so they have enough of an understanding of what AI offers, what are the potential use cases we can consider. … So there’s a lot of training work that is happening.
Sam Ransbotham: What are you excited about? I mean, what’s new and what are we going to read about tomorrow that ExxonMobil is doing with artificial intelligence? What’s something you’re excited about, either a technology or a project?
Sarah Karthigan: What I’m really excited about, and what I hope you get to read about soon, is this self-healing pilot that we are gearing up to do. The self-healing pilot is looking at taking an application that is end-user facing and seeing how many of these self-healing wins we can realize. We have been investing our time in building out the foundation, the fabric that is important to really bring this whole solution together, so now we are very much excited about testing that out and putting the strategy into action.
Shervin Khodabandeh: Sarah, as you think about your own team — building and cultivating and expanding that team — two questions: What are you looking for in the candidates that you’re bringing in? What are some technical and nontechnical capabilities you’re looking for? That’s my first question. And No. 2 is, how do you keep them interested and excited in data science and AI, with everything that’s going on and all the other options out there for them?
Sarah Karthigan: Let me start with answering your second question. So how do we keep them interested? We keep them interested by exposing them to diverse use cases. You don’t have to leave the company to work on a finance problem. There are opportunities here within the company. And so just the myriad of use cases that a data scientist gets to work on, gets to solve, is what I have found keeps them excited and makes them want to continue their career here within the company. So that is our secret to retaining talent internally.
As far as what I look for in a candidate: I am quite keen on diversity. I don’t want a team that is an echo chamber. I specifically go seek out skills that are in adjacent areas. I have had data scientists on my team whose background is biostatistics. I have even had people with English and political science majors. Of course, now I am looking for people with data science skills, too — so perhaps they had an undergraduate degree in an adjacent area but then also studied data science. I go seek out those types of candidates because it’s extremely important to have very diverse viewpoints at the table when you are trying to solve a problem.
I am looking for someone who’s curious, who is very much interested in problem-solving. And, again, what excites them is challenging problems, and we’re talking about a scale that is truly unimaginable.
Shervin Khodabandeh: Sarah, you’ve been named a leader in tech. You’ve been named one of [the] 25 most influential women in energy, in tech. What do you think companies could do more of to ensure a more fair gender balance in data science roles, and what do you think data scientists out there — female data scientists that are just starting their careers — could be doing more of?
Sarah Karthigan: I would say it all starts with providing equal opportunities. I am here because I got the opportunity to demonstrate what I can do, what I’m capable of doing. Making sure that that window of opportunity is truly open for both women and men is crucial, so that’s where it all starts. For an aspiring data scientist, for girls in middle school, high school, who are even considering pursuing a STEM career, my encouragement would be, yes, absolutely, we need you.
Women bring a perspective that is so different and that is extremely needed in the work environment. And especially — we talked about responsible AI. It is important to have that type of a diverse perspective right from the get-go — right from building a strategy, all the way to execution. It should not be an afterthought. You shouldn’t try to slap on “Hey, let me go ahead and make sure I address diversity and inclusion at the end.” No; that’s not how it works. You start with that, and that is crucial. And women play a key role in making that happen.
Shervin Khodabandeh: And what do you think women in data science who are either just starting their careers or are in their academic training, what do you think they could be doing to seek out the right opportunities for themselves? What’s your advice for them?
Sarah Karthigan: I would say that ensure you have really good examples of either a capstone project or experiences with internships or co-op opportunities — whatever you want to call those experiences — with companies where you have dealt with real data. I think that absolutely augments your resume. And then, on top of that, once you have found that entry point into a company, just feel free to speak up and bring your solutions very vocally to the table. That’s what I would say.
Sam Ransbotham: Today we learned a lot about starting with the organizational aspects of an artificial intelligence change versus the technical aspects. [We] learned about leading with the idea of showing people what’s possible and what the potential can be from artificial intelligence. We learned about the many steps in the process of data that are fraught with peril but that organizations can overcome. And I really appreciate you taking the time to talk with us today, Sarah. Thanks for joining us.
Shervin Khodabandeh: Thank you, Sarah.
Sarah Karthigan: It’s been my pleasure. Thank you.
Sam Ransbotham: In our next episode, we’ll talk with Doug Hamilton about how Nasdaq uses AI to mitigate high-risk situations. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn, specifically for leaders like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.