Me, Myself, and AI Episode 805

Operational Safety With AI: Chevron’s Ellen Nielsen


Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with

BCG

Ellen Nielsen, Chevron’s first chief data officer, sees data as the common thread throughout a career that has spanned systems, digital data, procurement, and supply chain. In her current role, she applies what she’s learned to Chevron’s wide-ranging AI and machine learning initiatives, including the use of robots and computer vision to inspect tanks, digital twins to simulate operations, and sensors to monitor equipment in refineries.

On this episode of the Me, Myself, and AI podcast, Ellen shares examples of the integrated energy giant’s use cases for machine learning and generative AI, and she describes the company’s citizen development program, which puts safe, secured AI and machine learning tools in the hands of employees throughout Chevron.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: Digital twins? Generative AI for engineering? On today’s episode, find out how one petrochemical company systematically upskills its workforce to benefit from new tech like generative AI.

Ellen Nielsen: I’m Ellen Nielsen from Chevron, and you’re listening to Me, Myself, and AI.

Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.

Hi, everyone. Today, Sam and I are speaking with Ellen Nielsen, chief data officer at Chevron. Ellen, thanks for taking the time to talk to us. Welcome to the show.

Ellen Nielsen: Thank you for having me. I’m really excited to have a very cool conversation today.

Shervin Khodabandeh: Let’s get started. I would imagine most of our listeners — in fact, all of them — have heard about Chevron, but what they may not know is the extent to which AI is prevalent across all of Chevron’s value chain. So maybe tell us a little about your role and how AI is being used at Chevron.

Ellen Nielsen: Maybe talking about my role … it started three years ago. I was the first [chief] data officer within Chevron. That doesn’t mean that we [hadn’t already been dealing] with data [for] a long time, but the need to put more focus on the data was starting to emerge, and with that, I was tasked [with] evangelizing data-driven decisions, and that, of course, includes any kind of data science/analytics along the way. And it was very, very interesting to see it growing over time.

We use AI in many places. Some areas where we use robots — for example, in tank inspection today — you can imagine that was very cumbersome, having the human involved. Now we do this with robots. And we basically take the human beings out of these confined spaces, and that’s a combination of computer vision — taking images, comparing the images — and [making] predictions on what’s the status of this tank and of this equipment. Is it rusting? Does it need maintenance? Do we need to tackle it in a very predictive way so that it’s operating in a much more reliable and safe way in the future?

The other example is sensors in compressors or any kind of equipment. In the past we were, of course, installing them, but the prices for those sensors and the data collection dropped so dramatically. And I just saw recently … actually, it was a citizen development application that was created, because these sensors have to be installed, and when you install one, you basically scan a QR code, and with one click you can add the geospatial location to the sensor. Then you can see all the sensors you have installed in your facility on a map, so you actually see … what’s going on, whether each one is actually working, and which sensors have been inventoried there. So we have a combination here of computer vision, of citizen development, and then, of course, of using the sensors in a machine learning, AI-based way to come to predictions.
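The install-time workflow Ellen describes — scan a QR code, capture a geospatial fix with one click, and see every sensor on a facility map — can be pictured with a minimal sketch like the one below. The class names, record fields, sensor tag, and coordinates are illustrative assumptions, not Chevron’s actual application.

```python
# Minimal sketch (illustrative names): register a sensor at install time by
# pairing its QR-code payload with a GPS fix, then list every installed
# sensor for plotting on a facility map.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SensorRecord:
    sensor_id: str   # decoded from the QR code on the device
    latitude: float  # geospatial fix captured at install time
    longitude: float
    installed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class SensorInventory:
    def __init__(self) -> None:
        self._records: dict[str, SensorRecord] = {}

    def register(self, qr_payload: str, lat: float, lon: float) -> SensorRecord:
        """One click at install time: QR payload + GPS fix -> inventory row."""
        record = SensorRecord(sensor_id=qr_payload, latitude=lat, longitude=lon)
        self._records[record.sensor_id] = record
        return record

    def map_points(self) -> list[tuple[float, float, str]]:
        """(lat, lon, id) tuples, ready to hand to any mapping layer."""
        return [(r.latitude, r.longitude, r.sensor_id)
                for r in self._records.values()]


inventory = SensorInventory()
inventory.register("VIB-COMPRESSOR-0042", 29.7604, -95.3698)  # hypothetical tag
print(inventory.map_points())
```

In practice, the map output would feed a GIS layer rather than a print statement, but the shape of the data is the same.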

Shervin Khodabandeh: One of the things I know that you do quite well is digital twins. Maybe you can comment a little bit about that example.

Ellen Nielsen: Digital twins [are] one of many examples where we use that. What triggers [one] to do digital twins? One is, you can imagine that we have people out in the field, so we want to make their lives easier and safer. That means that the more data and information we can gather about our field assets and how to operate them, the better we serve the purpose of being safer and more reliable in our operations. That was one trigger.

The second trigger is that you collect a lot of information based on, let’s say, internet of things, [industrial] IoT devices, [sensing] … and that feeds into another pool of information where you can drive even predictive decisions in these assets. So with the digital twin, we want to basically serve both: We want to be safer and more reliable but also more predictive in what we do — that speaks to efficiency and doing the right thing at the right time.

Sam Ransbotham: Can you give us a specific example of a place where you’re using a digital twin? How does that help with safety? How does that help with efficiency?

Ellen Nielsen: If you take a digital twin and, let’s say, you basically “digital-twin” a facility, a refinery … So in a refinery, you can imagine there are lots of pipes, there is lots of equipment, there are compressors, there are generators — there are things very mechanically working, and people have to maintain those to get the products out.

When you look at the value chain — materials coming in, product going out — everything in between goes through the refinery. And if you have everything digital-twinned, you can plan better, you can operate better. You know when things are coming in. You can predict better how to get a better output. And that’s basically how we do it in refineries or facilities where we operate — really looking at the flow of information and the data-driven decisions.

We were always driving decisions with information, you know? In the past, information was more in the heads of the people who [were] very experienced, and sometimes augmented, of course, with equipment information, but it was maybe more manually collecting or putting things together, and with the digital twin, we have the information right at our fingertips to drive this.
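A toy sketch of the pattern Ellen outlines — a digital twin mirroring an asset’s live sensor feed and flagging drift before a failure — might look like the following. The equipment tag, operating range, and thresholds are invented for illustration; a production twin would model physics and much richer state, not just a rolling average.

```python
# Toy digital-twin sketch (invented tag and thresholds): mirror an asset's
# sensor feed and flag maintenance when the rolling average drifts out of
# its normal operating range.
from statistics import mean


class EquipmentTwin:
    def __init__(self, tag: str, normal_range: tuple[float, float], window: int = 10):
        self.tag = tag
        self.low, self.high = normal_range
        self.window = window
        self.readings: list[float] = []

    def ingest(self, value: float) -> None:
        """Feed one IIoT reading into the twin, keeping a rolling window."""
        self.readings.append(value)
        self.readings = self.readings[-self.window:]

    def needs_maintenance(self) -> bool:
        """True when the window is full and its average is out of range."""
        if len(self.readings) < self.window:
            return False
        return not (self.low <= mean(self.readings) <= self.high)


compressor = EquipmentTwin("C-101 discharge temp (degC)", normal_range=(60.0, 95.0))
for reading in [88, 90, 91, 93, 95, 97, 99, 101, 103, 105]:
    compressor.ingest(reading)

if compressor.needs_maintenance():
    print(f"Schedule inspection for {compressor.tag}")
```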

Sam Ransbotham: That’s a great example. I think longtime listeners will know that both Shervin and I are chemical engineers. But you may not know that — well, I’m no longer a chemical engineer, partly because I got so attracted by the idea of simulating chemical processes. We figured out that we didn’t have to build a little process to test something; we could build it on a computer and test it. And you know, that was a long time ago and really some ugly, ugly tools back in those days. I’m guessing you’re far more sophisticated — or I’m hoping you’re far more sophisticated — than that.

Ellen Nielsen: I think that’s an always evolving space, but I’m really excited about the opportunities. I can imagine, you know, when you have a catalyst or you had to test raw materials out, you had to plan it with the production head, you had to stop production, you had to real-time test it. That took away output.

And now you can simulate in a much more efficient way, with specifications at your hand, and not do it physically anymore. I was also in the world, in my past company, where we had to test things, even physically in labs, over and over again. And I think [those] times are [mostly] over. It’s becoming more simulated in a much better way.

Shervin Khodabandeh: Ellen, can you give us another example, maybe around exploration or extraction or something that also used to be quite experiential and expensive and dangerous without the data and AI?

Ellen Nielsen: I think we also have a great example the company actually posted [at the] end of last year. When you think about oil and gas, you think about how you get more out of a reservoir. You want to get the best out of a reservoir and do it in a very efficient, responsible way. And that takes collecting the data — you can imagine, if you didn’t have the computer power and the data at hand in a digital way, this is quite cumbersome. I cannot imagine how people did it in the past. Maybe they were printing things off and laying them on top of one another, and coming up with assumptions based [on] their experience — and of course they gained a lot of experience. Now we do this with machine learning algorithms. We understand what the rock composition is. We even created a “rockopedia” of the different rock conditions and compositions so that we can tap into this data every day when we need it.
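The “rockopedia” idea — a catalog of known rock compositions that fresh measurements can be matched against — can be illustrated with a nearest-neighbor sketch. The features, catalog entries, and matching rule here are assumptions for the example, not Chevron’s actual model.

```python
# Hedged "rockopedia" sketch: match a new measurement against a catalog of
# known compositions by nearest neighbor. Features, entries, and values are
# illustrative, not real well-log data; a real model would normalize the
# features so no single one dominates the distance.
import math

# feature vector: (porosity fraction, bulk density in g/cc, gamma ray in API units)
ROCKOPEDIA = {
    "sandstone": (0.25, 2.35, 45.0),
    "limestone": (0.10, 2.60, 25.0),
    "shale":     (0.05, 2.55, 120.0),
}


def classify_rock(sample: tuple[float, float, float]) -> str:
    """Return the catalog entry whose composition is closest to the sample."""
    return min(ROCKOPEDIA, key=lambda name: math.dist(sample, ROCKOPEDIA[name]))


print(classify_rock((0.22, 2.40, 50.0)))  # -> "sandstone"
```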

Shervin Khodabandeh: Yeah, and I think there’s a bigger theme that, with the advent of these technologies, the sky’s the limit, and so the question is, how else can you apply it, and what else can you do with it? And I think this brings me to a question around the mission and the purpose, because there is obviously a ton of data. There are obviously a lot of tools. And the use cases are driven by the mission and the things we want to do with all of that.

Ellen Nielsen: Yeah. I would link it actually, in Chevron, back to our strategy. We [pursue] higher returns and lower carbon, safely. And this is our guiding principle: Everything that we do should of course benefit the success of the company, the impact of the company, but also do it in a low-carbon way. We know the world [will look] different in a few decades. We look after methane, we look after greenhouse [gas] emissions, we look after our carbon footprint overall. So this is something that we always tackle. And data and AI play a role there but also play a role in how we operate and how we operate safely. Safety is a big component of Chevron’s value system. And when you think about the future and think about AI and robots and digital twins and all of that, there is technology out there where we can help our people do their work more safely and much more reliably, in better ways and in new ways in the future.

Shervin Khodabandeh: What’s interesting to me about Chevron — or any company that’s predominantly an engineering and science company — is when AI is put in production to augment some of the decisions and insights that workers and engineers and scientists are making. But as an engineer, as an operator of these plants, I may not quite agree with it. I don’t know whether this resonates: How do you get scientists and engineers comfortable using these tools?

Ellen Nielsen: Mm-hmm. I think it actually helps, because engineers have a very logical mindset and they know the science, and we have a lot of science people in the company. So when you talk about data science and the things behind it, we have many people very interested in learning data science, and we have also started to provide education. So, “Where do I start?” You start with learning: “Hey, I don’t understand this.” That’s a typical engineering mindset: “I don’t understand it; I want to understand it. I’m looking for ‘What does it tell me? How can it influence my solution?’”

We have [had] digital scholarship programs [for] a while. And actually we do this with MIT, where we have cohorts going for a year, and they are not coming out of one department; they are really coming out of the whole company, going through a design engineering master’s in one year, which is really a tough thing to do. But they are coming back understanding the new technology, understanding … how we can use it differently. And they are the first to go back into their normal environment and influence others, basically letting other people benefit from their knowledge and venture [into] different things that maybe they have not tried out before. So this is one thing to influence culture.

The second thing: In the data science space, we started to work with Rice University. We have a six-, seven-month program, also going across the company, that’s not only for IT people to learn what data science means, and they bring it back to their environment. So they are not leaving their role completely. They go in for six or seven months, and then they return, in the best position possible to influence the company: “Hey, what is possible?”

The last piece is maybe the broadest because we call it citizen development. We believe that many, many people in the company are getting things in their hands now with the evolution of AI. And we just saw gen AI is now in the hands of … everybody who wants it. And with this kind of citizen development overall, we want to bring the technology, which has become much easier, to many people so that they can use it. And, of course, they need data for this, and that’s why we provide the data in these systems — to be more self-sufficient. So I would say there is a three-pronged kind of approach to influence the culture and leadership, and we have really nice [use] cases in AI citizen development. We are also publicly talking about certain use cases we do. I think that’s the culture piece. It takes a while to get into every artery of the company, but I feel there is real excitement in the company right now to go down that road.

Shervin Khodabandeh: What I like about what you’re saying is that [you’re] actually doubling down on the predominantly engineering and scientific culture of the company and making this a cross-disciplinary collaboration between science and engineering and AI, versus any of these replacing each other. It’s an and, not an or.

Sam Ransbotham: Is there a specific example you have where someone has gone to one of these seven-month programs or the digital scholar program and brought back something that’s made some change, made a difference?

Ellen Nielsen: Yeah, definitely. We have many because we are, I think, two or three years into this, and, of course, people bring it back and solve several issues. We even see this sometimes with internships; after two or three weeks, [interns] recognized they could solve a planning issue [that] they were chewing on … and it was pretty complex, but with the new views and data and artificial intelligence, the outcomes were really stunning.

We actually have somebody influencing the planning of our field — field development — and creating a low-code environment, and this really changes the way we work.

Shervin Khodabandeh: In terms of making the company more productive, more efficient, ensuring it’s safe, ensuring that it does good for people and communities and environment and species in all different forms, what has been challenging? What’s hard?

Ellen Nielsen: I would say there definitely are some challenging parts. This is an early-stage technology, especially the gen AI. Things are moving very fast. So what is challenging [is], whatever you do today might be different in three months. The challenging part is, you cannot work in the same way you maybe worked in the past. You have to pivot faster. It’s not that you build a solution [once and you’re done]. I think a company told me they built a solution and that, six months later, if they [were to] build it again now, they would do it totally differently. So you have to watch when you — I call it maybe putting the eggs in a basket. You have to think about what’s the right timing for what kind of use case and figure this out, because you don’t want to lock yourself in when the technology is still in that kind of an evolution stage. This is something that we watch.

And then the second thing is, not everything in terms of security or handling data in the right way is solved yet in generative AI. That’s just … the technology’s not ready. There are no solutions yet. You can build a kind of sandbox or kind of a fenced environment, but you have to fence it by yourself. And I think the hyperscalers, like Microsoft and so on, are working on adopting those use cases into their normal landscape, where you can have an authorization process, where you have an [access] process, where you administer and govern this the right way. So this is, I would say, still missing.

I’m very hopeful that this gap will be closed very fast, but today, you have to pull together different technologies. If it’s a vector database — we’ll talk a little bit of tech language here — it’s not all ready to be used on a really wide scale very safely. And you have to imagine, in a corporation, there are rights in terms of what information can be shared, what should not be shared, and so on. And that’s something that we think is a challenge.

The third challenge I want to mention is the policy makers, you know. We follow this very closely with responsible AI. We are a member of the Responsible AI Institute and are watching very carefully what’s happening there. What kind of policies are coming around the corner? How do we incorporate them responsibly into our operations, into our productization of AI models? That’s, of course, an evolution. It’s not something you can buy and run. And, yeah, we’ll see how companies fill these gaps.

Shervin Khodabandeh: Ellen, can you comment on generative AI, and if and how it’s being used or planned to be used?

Ellen Nielsen: Yeah, absolutely. We [had been] following generative AI [for] two years or so already, maybe a little longer. We were not totally surprised by the development. Maybe you can say, “OK, when was ChatGPT coming?” That was maybe a surprise for everybody — that it was coming so fast. But we were watching this and already did some use cases in a kind of innovative sandbox environment to see what that would be. And when it came out, we said, “OK, this is new technology. We want to understand it. We [put] it into the hands of the people and use it, and then understand the telemetry of ‘What do we use it for, and how does it resonate?’”

In May/June, we decided to put a more dedicated team on those activities, and we have hundreds of use cases now in the pipeline, which we down-select to the most prominent ones and approach them. But technologywise, we are really, I would say, very much on top of what’s going on and have really super smart people working on it.

I can tell you my own use case. I use it for writing things down. You can talk about maybe writing your performance agreement with your supervisor or with your team. You check on presentations or documentation you have to do to really optimize the writing. I know that my team is using it because we are thinking in product development and product management and portfolio management. So in the past, they took much longer to write down their thinking, and I talked with one of my team members and she said, “You know, in the past, it took me maybe one or two weeks. Now it takes me one hour to get this done.” So there are lots of efficiencies in using, let’s say, ChatGPT in this space.

When we look into other examples, you can imagine we have knowledge databases. We have knowledge around system engineering and other information available within the company on a very broad scale. And in the past, if you wanted to know how this generator works, you had to basically type in search criteria, and then finally you found the document, and you had to read the document. Or that document was not enough; you needed another document. OK, you found the second document, then you completed, basically, your answer, and then you went back [and] executed on it.

We have created a chat system where you can [more easily] converse with this kind of information and figure it out much faster. So these are maybe two examples — one more of a daily thing, and one more related to how we work in the systems approach.
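The chat system Ellen describes follows the retrieval pattern now common for knowledge bases: embed documents as vectors, retrieve the passages closest to a question, then have a language model answer from them. Here is a minimal sketch; the bag-of-words “embedding” stands in for a real embedding model, the documents are toys, and ask_llm is a hypothetical placeholder, not a real API.

```python
# Retrieval sketch for a knowledge-base chat: embed documents, retrieve the
# passages closest to the question, then hand them to an LLM.
import math
import re
from collections import Counter

DOCS = {
    "gen-manual-1": "Start the generator by engaging the fuel valve before ignition.",
    "gen-manual-2": "Generator maintenance requires replacing the oil filter every 500 hours.",
    "pump-manual-1": "Prime the pump before starting to avoid cavitation damage.",
}


def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: just token counts."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the question; keep the top k."""
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(DOCS[d])), reverse=True)
    return [DOCS[d] for d in ranked[:k]]


question = "How do I start the generator?"
context = "\n".join(retrieve(question))
# A real system would now call a language model with the retrieved context,
# e.g. answer = ask_llm(question, context)  # ask_llm is hypothetical
print(context)
```

In a production setup, the retrieval step is exactly where the vector database Ellen mentions sits, along with the access controls she notes are still maturing.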

Sam Ransbotham: If I combine some of your ideas, I see some difficulties. Earlier on, you were talking about citizen developers and the idea of putting a lot of these tools in the hands of people. And then, later, you were talking about problems of security and policy that are not part of the infrastructure yet. Historically, security always follows features. We care about features first, and then we care about security. So we have the combination of a widespread proliferation of tools among citizen developers, low infrastructural guardrails or policies, and then concern about the inability to fast-follow. Those seem like they could smash together and create a lot of tension. How do you navigate that?

Ellen Nielsen: Yeah, I would say maybe we have to talk about AI in general and then generative AI. So when I talk about policy makers, this was more the generative AI perspective.

When you think about citizen development, we have models, or algorithms, in the box. We have proven [them]; we have secured [them]. They have followed a review process. We checked on them in terms of responsible AI. So they are ready to use for any citizen developer who wants to use them. They are secured and safe, and they are actually in our safe environment. So you can already start there and make it safe. But the new technology which is coming on the gen AI side, with these large language models and the data behind them [that] the large language models learn from, that’s maybe not ready yet to put into a citizen development perspective. So to make this very clear: When I talk about citizen development, everything is secured — the telemetry is there, the space is there — and we have ensured that we are doing the right thing. This is made available for everyone in the company.

And the other things, which are maybe not secure yet, we are not putting into the system. We are waiting. We just cannot afford to have unsecured things in our citizen development program.
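The gated “models in the box” approach Ellen describes — only algorithms that have cleared both security and responsible-AI review are exposed to citizen developers — reduces to a simple registry pattern. A minimal sketch; the review flags and model names are illustrative assumptions:

```python
# Sketch of a vetted catalog: citizen developers only ever see models that
# have cleared both security and responsible-AI review.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelEntry:
    name: str
    security_reviewed: bool
    responsible_ai_reviewed: bool


class CitizenDevCatalog:
    def __init__(self) -> None:
        self._entries: list[ModelEntry] = []

    def submit(self, entry: ModelEntry) -> None:
        """Anything can be submitted; only vetted entries are ever exposed."""
        self._entries.append(entry)

    def available_models(self) -> list[str]:
        """The citizen developer's view: fully reviewed models only."""
        return [e.name for e in self._entries
                if e.security_reviewed and e.responsible_ai_reviewed]


catalog = CitizenDevCatalog()
catalog.submit(ModelEntry("tank-corrosion-vision", True, True))
catalog.submit(ModelEntry("experimental-llm-search", True, False))  # still in review
print(catalog.available_models())  # -> ['tank-corrosion-vision']
```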

Sam Ransbotham: Yeah, that brings out a nice sort of differentiation: Citizen data scientists can’t just build anything. … There’s a curation process that goes on. And it sounds like you’re pretty active in that curation process, deciding what tools go to citizen developers and which tools you’re still investigating and protecting against. That makes sense.

Ellen Nielsen: Yeah, that’s it; exactly.

Sam Ransbotham: Chevron is obviously a giant petrochemical company out there worldwide. Everyone knows it. And you’re the chief data officer. How did you get there? Tell us a little bit about your history. How did you get to this role?

Ellen Nielsen: Yeah, I’m happy to be in this role. It’s a super exciting area that I’m passionate about. To follow the start of my career: I’m from Germany. I did a system engineering degree and then ventured out into digital data — later on, into procurement and supply chain. I think the big red thread throughout my whole career is the data part, but of course in different ways. One can say, when I ventured out into supply chain, you deal with a lot of the company’s money spent with third parties. How do you organize that? There’s a lot of data and strategic thinking about how you do that. And I would say I’m a learner. I’m a humble learner. I like to embrace new things and very diverse perspectives for the best of the company, and it’s maybe just by coincidence that I got into this role, because when I joined Chevron five years ago, I started in the procurement space because I have a procurement leg and a data-digital leg, I would call it.

We tackled data right away because the data was not sufficient to drive these decisions, and maybe the first two years proved me right in terms of “that’s possible.” I’m also a big believer that data and AI will be all around us. So this is an exciting space to be in and to learn and to see what’s coming next there.

So I’m just happy to be there. Actually, when I said to a former executive — not at Chevron — “I’m so lucky with all the opportunities I’ve had in my career,” he said, “Ellen, you are not lucky.” He sent me a book; [the idea is that] you basically condition your path, so you’re open to things even when you think they’re not on your direct trajectory but they’re really enhancing your skills and how you connect the dots. I like connecting the dots, and that’s why I’m enjoying this role.

Sam Ransbotham: That’s a great story.

Shervin Khodabandeh: OK, so these are a series of rapid-fire questions we ask. Just tell us the first thing that comes to your mind.

Ellen Nielsen: It’s kind of a speed-dating question, maybe. OK.

Shervin Khodabandeh: What do you see as the biggest opportunity for AI right now?

Ellen Nielsen: Health care.

Shervin Khodabandeh: What is the biggest misconception about AI?

Ellen Nielsen: Replacing human beings.

Shervin Khodabandeh: What was the first career you wanted? What did you want to be when you grew up?

Ellen Nielsen: I didn’t want to sit [at] a desk. I failed.

Shervin Khodabandeh: AI is being used in our daily lives a lot. When is there too much AI?

Ellen Nielsen: I would say too much AI would be, if it guides me in the wrong direction and influences me in a way which is not based on the real facts.

Shervin Khodabandeh: I already have too much AI in my car because I cannot open the garage, because it recognizes where I am and which thing it has to open, and if it doesn’t work, I can’t get in.

Ellen Nielsen: I enjoy this. We have a pretty smart home here with all kinds of voice recognition electronics, garage door opener, sprinklers, starters, and whatever. But, I would say, it helps to be more efficient, and if the network is down, that’s really hard now, you know?

Shervin Khodabandeh: That’s right. So last question: What is the one thing you wish AI could do right now that it can’t?

Ellen Nielsen: Hmm; cure cancer.

Shervin Khodabandeh: Very good.

Sam Ransbotham: It seems like there’s a headline every week that this new AI thing is going to solve cancer, and then you look back and none of these seem to pan out. I’m not saying we should quit trying, but it’s always the example, and it seems like it never quite gets there.

Shervin Khodabandeh: But it’s a little bit of a stochastic process, too, right? I mean, if you have enough trials at it, right? We’re for sure trying a lot more things because of AI and our ability to experiment.

Ellen Nielsen: Can I answer it maybe slightly differently? I think the other thing AI maybe cannot do [yet], which would be great, [is] really help us with the climate transition — the climate questions we have on this planet. I think it helps here and there, but it would be fantastic if it could help more.

Shervin Khodabandeh: Yep.

Sam Ransbotham: At the same time, though, I don’t think we can abdicate and just hope the machines solve the problems that we’ve created either. I think it’s going to take both of us working together on that. It’s OK; that’s part of the hope.

Is there anything about artificial intelligence you’re excited about? What’s the next thing coming that you’re most excited about right now?

Ellen Nielsen: Hmm, good question. I think we want to improve our lives, and I think where I live right now, we are very privileged. We already have AI [access] in many ways, you know? We just talked about it — in our smart homes and our cars, etc. — but that’s not the case for everybody in the world. It would be great if those advances and those benefits were [more broadly] available.

Shervin Khodabandeh: Yeah, you didn’t ask me, Sam, but I totally agree. I mean, when you think about education alone and the impact that it can have on underprivileged communities and nations — they don’t need to have a school setup anymore. You could just do so much and help so many people learn and develop and build skills that normally would rely on infrastructure and physical people and teachers and all that.

Sam Ransbotham: You’d think I’d be threatened by that, but I’m not a bit. I mean, I think that’s our biggest opportunity. We have so many people that … I mean, we just cannot get them all through education programs, and the education programs we have are not particularly optimized or fast. And if we could solve that problem and get better resources out of our brains, then that would be a huge win.

Ellen Nielsen: Hey, Sam, can I ask you a question? I know I’m turning this around now, but there were some recent articles about the shelf life of knowledge decreasing — that maybe what you learn today is worth four or five years and then it’s kind of obsolete. So how do you think this will evolve in the education system?

Sam Ransbotham: That’s huge, because I think about that. I mean, I teach a class in machine learning and AI, and I am acutely aware that unless they’re graduating the semester that I teach them, everything that I’m … you know, the specifics that we’re teaching them are likely to be quite ephemeral. We’ve seen how rapidly this evolves. I think that pushes us to step back and be higher level. If we slip into teaching a tool — teaching how to click “File,” how to click “New,” how to click “Open,” how to click “Save” — those are very low-level skills. And when we think about what kinds of things we should be teaching, my university is a liberal arts university, and I think that’s a big deal: teaching technical skills within a world of liberal arts. We had the sexiest job of the 21st century being data science. [Regarding] the next one, [it’s] not clear to me that data science is involved. And it’s not that data science isn’t important; it’s just rapidly becoming commoditized.

And so then we have things like philosophy and ethics, which become more important — as the cost of data science drops, these things become more important.

Shervin Khodabandeh: Linguistics.

Sam Ransbotham: Linguistics, yeah. There you go.

Shervin Khodabandeh: Large language models, right. Wonderful. Ellen, thank you so much. This has been so insightful, and we thank you for making the time.

Ellen Nielsen: Yeah, thank you.

Sam Ransbotham: Thanks for tuning in. On our next episode, Shervin and I venture into the use of AI in outer space with Vandi Verma, chief engineer of [Mars] Perseverance robotic operations and deputy manager at NASA’s Jet Propulsion Laboratory. Please join us.

Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.

