Me, Myself, and AI Episode 205

Prototypes, Pilots, and Polymers: Cooper Standard’s Chris Couch

Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with BCG

Chris Couch has a unique role, serving as senior vice president and CTO of automotive supplier Cooper Standard as well as CEO of Liveline Technologies, a startup born from the CS Open Innovation initiative. Both organizations use AI to manufacture products the average consumer likely never thinks twice about, such as brake fluid and polymer seals for car doors.

In Season 2, Episode 5, of the Me, Myself, and AI podcast, we talk with Chris about open innovation, automating rote processes without displacing human workers, and attracting talent by fostering a startup culture.

For more on how humans and machines can collaborate successfully, read the 2020 Artificial Intelligence and Business Strategy report, “Expanding AI’s Impact With Organizational Learning.”

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: Things like brake fluid and chemical manufacturing may not seem like gee-whiz artificial intelligence, but we all may be benefiting from AI already and just not know it. Today, we’re talking with Chris Couch, senior vice president and chief technology officer of Cooper Standard, about how we’re all benefiting indirectly from artificial intelligence every day.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: Today we’re talking with Chris Couch. Chris is senior vice president and chief technology officer of Cooper Standard. Chris, thanks for taking the time to talk with us. Welcome.

Chris Couch: You bet. Thank you very much.

Sam Ransbotham: Why don’t we start by learning a little bit about your role at Cooper Standard. What do you do now?

Chris Couch: I am the CTO of Cooper Standard. We’re a tier-one global automotive supplier. I’m also the founder and CEO of an AI startup called Liveline Technologies that has come out of some work that we did as R&D within Cooper Standard. We provide components in the spaces of vehicle sealing and enclosures, as well as fluid handling, whether it’s brake fluid or coolant — all the fluid systems in the vehicle. We also invest in material science technologies that we believe can have an impact beyond automotive.

Many of our products may [not be] visible to the average consumer. In fact, for some of our products, like the ones moving fuel around your vehicle, we hope nobody ever has to worry about them, but they’re critically important to the driving experience and to having a safe and reliable vehicle. For example, we developed a brand-new category of polymer that we call Fortrex. Fortrex provides a much better seal around the doors and windows in your vehicle. Why is that important? It’s especially important as we move into an electrified-vehicle world. As engine and transmission noises decrease — because there’s no more gasoline engine — other sources of noise become more prevalent, and the largest of those is the noise coming in due to wind around your doors and windows. By providing an enhanced sealing package for those, we believe we’ve got the right products to serve an electrifying world.

Sam Ransbotham: How is artificial intelligence involved in that development of the polymer?

Chris Couch: We spent a lot of time and money coming up with advanced polymer formulations. A lot of it historically has been trial and error; that’s what industrial chemists often do. We used AI to develop a system that advises our chemists on the next set of recipes to try as they iterate toward a final solution. We’ve found dramatic reductions, in many cases, with that approach; dramatic means reducing those R&D loops [by] 70% or 80%.
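
[Editor’s note: Chris doesn’t describe the mechanics of the recipe adviser, and the Cooper Standard/Uncountable system isn’t public. Tools like this are often built as an iterative optimization loop: fit a surrogate model to the formulations tested so far, then suggest the untested candidates most likely to beat the current best. The Python sketch below is hypothetical; the ingredients, quality metric, and Gaussian-process surrogate are illustrative assumptions, not the actual system.]

```python
# Hypothetical "recipe adviser" loop: fit a surrogate model to the formulations
# tested so far, then rank untested candidates by expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Each row: fractions of three hypothetical ingredients; y: measured seal quality (made up).
X_tested = np.array([[0.2, 0.5, 0.3],
                     [0.4, 0.4, 0.2],
                     [0.1, 0.6, 0.3],
                     [0.3, 0.3, 0.4]])
y_tested = np.array([0.61, 0.74, 0.58, 0.69])

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_tested, y_tested)

# Candidate recipes the chemists could mix next (random mixtures summing to 1).
rng = np.random.default_rng(0)
candidates = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=500)

mu, sigma = surrogate.predict(candidates, return_std=True)
improvement = mu - y_tested.max()
z = improvement / np.maximum(sigma, 1e-9)
expected_improvement = improvement * norm.cdf(z) + sigma * norm.pdf(z)

# Suggest the top three candidates for the next lab iteration.
for idx in np.argsort(expected_improvement)[::-1][:3]:
    print(candidates[idx].round(3), expected_improvement[idx].round(4))
```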

Sam Ransbotham: That’s interesting. But before we talk more about Cooper Standard’s success with AI, can you tell us a bit more about your own background and career path?

Chris Couch: I think the best way to describe myself is a lifelong manufacturing addict, first of all. As a kid, I took apart everything in the house [and] probably got hit with wall current more than once — explains a lot about me today, I suppose. I was the kid that built their first car out of a kit. I was a hard-core mechanical engineer with a focus on manufacturing and controls in school. My side projects include things like building autonomous drones that fly at altitude. I’m just a manufacturing nerd. That has really served me well in my career.

I spent the first third of my working life in a Japanese company; I worked for Toyota and went and joined them in Japan. I spent a dozen years with them designing and building and ultimately being involved in plant operations. I spent the next third of my career running a P&L for an automotive supplier on the business side, mostly based out of Asia, so I have a business bent as well, which may color a lot of what I say today. Then, the last third or so of my career has been in CTO gigs. I’m in my second one here. I’m again at an automotive supplier, but we get our fingers into all kinds of interesting tech domains, just given what’s happening in the world today, whether it’s material science or AI. So here I am. I didn’t really expect to be doing my second job here if you had asked me two years ago, but it’s certainly been a lot of fun, and we’re excited about delivering some impact with these technologies.

Shervin Khodabandeh: Chris, tell us about the open innovation at Cooper Standard. What is that all about?

Chris Couch: You know, as we looked around at our tech portfolio a couple of years ago when I joined the company, I was, first of all, overwhelmed by the different domains that we really had to compete in. I mentioned materials science earlier, but there are also different aspects of manufacturing technology and product design — the whole topic of analytics and AI that we’re going to talk about today — and I was very convinced that there was no way that we could do it all ourselves. Cooper Standard isn’t a small company — we’re just shy of $3 billion in revenue, but we’re not the biggest. Open innovation was really an attempt to reach out and build a pipeline to draw ideas and technology, and maybe even talent, from the outside world. Through that, we engage with universities and with consortia around the world. We engage heavily with startup companies and use that as a source of ideas. In fact, our first proper AI project, if you will, really came through that open-innovation pipeline.

We partnered up with a brand-new startup that was called Uncountable, out of the Bay Area, and they helped us develop a system that would serve effectively as an adviser for our chemists that make new formulations for materials that we use all the time. That wound up being a great accelerator for our R&D process, cutting iterations out of those design-and-test loops, if you will. That was one of those big “aha!” moments — that there is a huge potential to accelerate ourselves in many domains. We can’t do it all ourselves, and so how do we really build that external pipeline? We now call it CS Open Innovation, but that was the impetus.

Shervin Khodabandeh: Sounds like a unique way of bringing together folks with different backgrounds and different talents and getting them all [to] work together. What did you find was the secret sauce of making that happen?

Chris Couch: I think whether it’s AI, whether it’s materials science, whether it’s other domains, my answer is the same: It really is all about the ability to focus. The reason that we, [like] many other companies, have put in place innovation pipelines and stage-gate processes that govern innovation is focus: How do we quickly narrow down where we’re going to allocate our precious R&D dollars, and how do we govern those correctly? So we think like a startup: We make the minimal investment to answer the next most important question and either kill things quickly or take them to fruition.

Shervin Khodabandeh: And a fair amount of fail fast, and test and learn, and sort of go big behind things that are working and shut down things that aren’t, right? Did I hear that correctly?

Chris Couch: Exactly, and that’s not unique. I think that there’s nothing special about AI-based projects, right? We sort of think in the same way and very quickly try to motivate those with a clear-eyed view of ROI. Frankly, one of the things that I think we’ve seen over the years when it comes to analytics and AI — especially coupled with manufacturing and Industry 4.0 — is that ROI has sometimes been hard to come by. [There are] a lot of creative ideas, a lot of interesting things to do with data, but the question is, how does it translate to the bottom line? And if that story can’t be told, even as a hypothesis that we’re going to prove through the innovation project, then it’s hard to justify working on it.

Sam Ransbotham: Just to push back a little bit, though: It seems like the opposite risk is that if you get too focused on ROI, where are you going to get something weird and big and unusual?

Chris Couch: Absolutely.

Sam Ransbotham: How are you balancing that tension between focusing on ROI and also trying not to miss out on [opportunities], or trying not to be too incremental?

Chris Couch: I think the stage-gate mentality is useful here. I think in the early stage, we look at a lot of crazy stuff. We have crazy ideas that come in through open innovation. We have crazy ideas from our own teams, and that’s fantastic. We don’t hesitate to look at them and maybe even spend a little pocket money to chase them down to some degree. The question then is, what are we going to invest in to try to productize? That’s really the next gate, if you will.

So absolutely, the exploration is important. We certainly do some of that. I hesitate to say it, almost, but it’s about having some space to play with ideas and technologies; then, when it’s time to go productize, you have to be clear-eyed on what you’re going to get out of it.

Sam Ransbotham: That seems like something that might differ for an AI approach. I mean, you said, “Well, AI’s no different,” just a second ago, but … I guess I wonder if there is something different about these new technologies that may require a little more freedom up front to do something weird than perhaps some others.

Chris Couch: I think that’s fair. In our experience, I think one of the differences with AI is that you probably have less familiarity with the tools and the applications among the general technical population. If you’re talking to design engineers or manufacturing process engineers, they may have read some things and maybe seen an interesting demo somewhere but may not be so versed in the nuts and bolts of how it works, much less the nuts and bolts of what it would take to scale that at an enterprise level. Because getting models running in a Jupyter Notebook off of a CSV file on your hard drive is a whole different story from production on a global scale.

Sam Ransbotham: Exactly.

Chris Couch: I think just that lack of exposure to the technologies makes it a bit different. If we’re talking about traditional robotics, or maybe simpler types of IoT concepts, plenty of engineers have a good clue and maybe have used some things in their career, but much less so when it comes to AI. That is a difference, I would agree. The good news is, I am very convinced that one of the wonderful things about AI is that it is cheap to pilot. I was just sort of making up a silly example about Jupyter Notebooks and CSV files, but that’s a great way to explore some concepts, and the cost of that is very close to zero, other than acquiring the knowledge to do it. Even then, I think that we’ve proven over and over in our internal teams that even the knowledge acquisition is reasonably priced, if you will.
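
[Editor’s note: To show how close to zero that piloting cost can be, here is roughly what a “Jupyter Notebook and a CSV file” exploration looks like in practice. The file name, column names, and model choice are hypothetical, not Cooper Standard’s data.]

```python
# Hypothetical near-zero-cost pilot: a model trained in a notebook against a CSV
# export of line data. File name, columns, and model choice are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("line_sensor_export.csv")  # hypothetical export from the line historian
features = ["zone1_temp_c", "zone2_temp_c", "screw_rpm", "line_speed_m_min"]
target = "profile_thickness_mm"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("MAE on held-out runs:", mean_absolute_error(y_test, model.predict(X_test)))
```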

Shervin Khodabandeh: Chris, I want to build on that point you made, that AI is relatively inexpensive to pilot. I agree with that, because we see, of course, a proliferation of proofs of concept and different teams trying different approaches, different ideas. But it also seems to be a fact that AI is quite hard to scale.

Chris Couch: Right.

Shervin Khodabandeh: I want to sort of get your reactions to, “Something’s easy to pilot, quite hard to scale; the real meaningful ROI will come after you scale it.” How do you make that transition? How do you take things that are really easy to pilot and get excitement around, and then actually embed them into business processes and ways of working, which is much harder down the line? How do you envision that transition working?

Chris Couch: Right. Yeah, it’s a great question, and it’s definitely not easy and maybe not for the faint of heart, right? Because sometimes it does take a leap of faith and, ultimately, the ability to scale. The best I can say [is] from our experience with Liveline: We did some very early prototyping; we thought we understood the data science aspect, but that was only the beginning. That was nearly two years ago. Only in the past few months have we begun to go to a global scale-out. The only insight I have there is, as you prototype, as you pilot, you’ve just got to try to be as judicious as you can about selecting use cases that are realistic and that everybody can get their heads around and connect the dots to the ROI at the end of the day.

Sam Ransbotham: How are you getting people … once you’ve got these solutions in place, what about the adoption within the organization? How are you getting people to work on teams that used to have human partners and now have machine partners?

Chris Couch: With Liveline … the basic concept of Liveline is to automate the creation of automation for complex manufacturing environments. We’re using machine learning techniques to design the control policies that we deploy onto the lines to control machine parameters in real time. We think this is very useful for attacking a diverse range of processes that have been too complex or too costly to automate otherwise. Our early successes have been in continuous-flow manufacturing processes, chemical conversion, and polymer extrusion, and we think there’s broad applicability to areas like oil and gas, wire and cable, etc.

One of my fears when we first got into plants to do live production trials was that the plant personnel might view this as sort of a threat. We’re automating; that sometimes has negative connotations in terms of impacts on people’s jobs and so forth. But there are a couple of things I think [that] really gained us some traction, and the reception has been quite warm. In fact, the plants are pulling very hard now to roll this out.

My attitude is to really democratize the information and what’s happening with the tool. For example, we spent quite some effort to make sure that operators in the plant environment had screens where they can see data streams in real time that they couldn’t before. Sometimes they were data streams that we had created for the sake of the machine learning. We give them visibility into it. We give them visibility, if they want, into the decisions that the system is making. We also give them the ability to turn it off — [hit] the big red button if they’re not comfortable with what HAL 9000’s doing on their production line. Also, we give them the ability to bias it. If they feel, based on their experience, that the system is making parts that are a little, let’s just say too thin or too thick, they can bias it down a little bit.

I think that sort of exposure and opening up the black box, at least in the plant environment, is very critical to people buying in and believing in what’s going on. One of our learnings at Liveline with that was the enhanced feedback that we get. We have received several very influential and useful ideas from people that were really doing nothing more than looking at data streams and watching the system making decisions. They asked good questions back to us and gave us good insights; [they] suggested new types of data that we could be tagging that might be useful once they really began to get a little intuition about what we were trying to do with the data science. I think that sort of democratization, if you will, of the system and opening up and exposing the guts as it does its thing has been, at least in this case, one of the success factors.
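
[Editor’s note: The sketch below illustrates, in hypothetical Python, the kind of operator-facing layer Chris describes around a learned control policy: live visibility, a manual bias, and a “big red button” that hands the line back to its usual settings. The policy, variable names, and setpoints are placeholders for illustration, not Liveline’s actual implementation.]

```python
# Hypothetical operator-facing layer around a learned control policy: visibility,
# a manual bias, and a "big red button" that falls back to manual setpoints.
from dataclasses import dataclass


@dataclass
class OperatorControls:
    enabled: bool = True             # the big red button: False = system disengaged
    thickness_bias_mm: float = 0.0   # operator nudge toward thicker (+) or thinner (-) parts


def learned_policy(sensor_readings: dict) -> dict:
    """Placeholder for the ML-derived control policy (not the real Liveline model)."""
    return {"die_gap_mm": 1.20 - 0.002 * (sensor_readings["melt_temp_c"] - 190.0)}


def next_setpoints(sensor_readings: dict, controls: OperatorControls,
                   manual_setpoints: dict) -> dict:
    if not controls.enabled:
        return manual_setpoints      # run the line the way the crew always has
    setpoints = learned_policy(sensor_readings)
    setpoints["die_gap_mm"] += controls.thickness_bias_mm  # apply the operator's bias
    return setpoints


# One tick of the control loop, with made-up values.
controls = OperatorControls(enabled=True, thickness_bias_mm=0.01)
print(next_setpoints({"melt_temp_c": 195.0}, controls, {"die_gap_mm": 1.18}))
```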

Shervin Khodabandeh: That’s a great example. It covers, Sam —

Sam Ransbotham: Exactly.

Shervin Khodabandeh: Sam, I feel like it covers a lot of what we’ve talked about in our report, in terms of different modes of human-AI interaction. No black box. Allowing the human to override or bias, but also — I was going to ask you, Chris, [but] you hit the point before I got a chance to ask you, which is the feedback loop. I guess my follow-on question is, how has that feedback loop been working in terms of maybe skeptics having become more sort of AI-friendly, or more trust having been formed between humans and AI? Any anecdotes you can comment on that?

Chris Couch: Absolutely. I’ll give you a great anecdote from one of our plants in the southern U.S. In fact, this was the plant where we did our final pilot for Liveline before we made the decision as a company to go do a global rollout. We first had the line running in what we call automatic mode. Gosh, I think it was about Q3 of last year. One of the criteria for the pilot was that we would do some A-versus-B runs. [The] concept’s very simple: “For these four hours, we’re going to run with the system engaged in automatic mode. For these four hours, we’re going to turn it off, and you all can run the plant like you always do. Then, over a series of days and weeks, we’ll add up the statistics about scrap rates and quality and unplanned line stops, and we will quantify exactly what the value was.”

We came to the first review point a few weeks into that, and as I sat with the team, they sort of pulled up chairs and looked at their shoes and said, “Hey, we have an issue. We don’t have the B data with the system off.” I said, “Why is that?” They said, “Because once the plant turned it on, they refused to turn it off again. They don’t want to run with the system disengaged anymore, because the impact was so significant and helped them operate the lines so much better that they don’t want to run with it off.” That was very consistent with the type of reaction we saw in other pilots in Canada and at our tech center in Michigan.
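
[Editor’s note: A hypothetical sketch of the A-versus-B bookkeeping the pilot called for: tag each production window by whether the system was engaged, then compare scrap and unplanned-stop statistics across the two modes. The columns and numbers are made up for illustration.]

```python
# Hypothetical A-versus-B bookkeeping: tag each production window by mode, then
# compare scrap and unplanned-stop statistics. Data are made up.
import pandas as pd

windows = pd.DataFrame({
    "mode": ["auto", "manual", "auto", "manual", "auto", "manual"],
    "scrap_rate_pct": [1.1, 2.4, 0.9, 2.1, 1.3, 2.6],
    "unplanned_stops": [0, 2, 1, 1, 0, 3],
})

summary = windows.groupby("mode").agg(
    mean_scrap_pct=("scrap_rate_pct", "mean"),
    total_unplanned_stops=("unplanned_stops", "sum"),
    windows_run=("scrap_rate_pct", "count"),
)
print(summary)
```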

Shervin Khodabandeh: That’s great.

Chris Couch: Yeah, that sort of feedback is very reassuring, but again, I think that from the get-go, having a philosophy of really just opening up and showing people what’s going on, letting them look at data, be participants in problem-solving and tuning and enhancement, really sets the stage for that emotional connection and commitment to the project.

Sam Ransbotham: Those seem like some very different ways of getting feedback to a system. And then the other one you mentioned was the idea of suggesting new tags or new data to come back in.

Chris Couch: Right.

Sam Ransbotham: I can see, for example, adjusting the bias being a real-time sort of feedback, and clearly, pressing the red button would happen immediately, I hope. That’s the point of a red button.

Chris Couch: That’s right.

Sam Ransbotham: How do the processes work for some of these non-immediate feedback [loops]? What do you do with the suggestions for new data and tags? Is there a process around those?

Shervin Khodabandeh: This is, by the way — sorry to interrupt your response — this is Sam’s and, to some extent, my chemical engineering background coming out.

Chris Couch: And you can think of it this way: At least for Cooper Standard, the majority of our lines are chemical processing lines. We’re taking different types of compounds, and we’re extruding them and, in the case of thermosets, putting them through oven stages — 200 meters of process, and a lot of it is chemistry as it goes. Yeah, so you guys are in your sweet spot.

Sam Ransbotham: What’s the process for new data tags? How do you formalize that process in something that’s less real time?

Chris Couch: I’ll give you a real example. We were doing a pilot maybe a year ago, and we had a process engineer who’s not a machine learning expert [who] was watching the system run, was looking at the data, was looking at the analysis that the machine learning had generated and how predictable the outcomes from the line were. At that stage, we weren’t getting the results that we wanted, and we were seeing variation in output in the real world that we weren’t picking up and predicting in the silicon world.

As he was watching the line, he said, “Look, I have a theory. I have a theory that there’s something going on with one of the raw materials we’re feeding the line. My theory is that that material is more susceptible to the history of temperature and humidity that it’s experienced as it was shipped to the plant. Why don’t we throw some data-logging devices on those pallets as we ship the material around the country, so we can look at that time and temperature history, integrate it into the analytics, and see if it would help us be more predictive?”

Lo and behold, that was actually helpful. That’s a real-life example of a non-AI expert interacting with the system and using their human judgment to suggest ways to improve it, even though they can’t write AI code. Once we had exposed them to enough of what’s going on that they were able to get some human intuition about what’s happening, they were able to participate in the process. That’s a very powerful thing.
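
[Editor’s note: A hypothetical sketch of how the pallet-logger idea could be folded into the analytics: summarize each pallet’s temperature and humidity history into features and join them onto the production runs that consumed that material. File names, columns, and thresholds are illustrative assumptions, not the actual Cooper Standard data pipeline.]

```python
# Hypothetical feature engineering for the pallet-logger idea: summarize each
# pallet's shipping history and join it onto the runs that used that material.
import pandas as pd

loggers = pd.read_csv("pallet_loggers.csv", parse_dates=["timestamp"])
# expected columns: pallet_id, timestamp, temp_c, humidity_pct

history_features = loggers.groupby("pallet_id").agg(
    transit_hours=("timestamp", lambda t: (t.max() - t.min()).total_seconds() / 3600),
    max_temp_c=("temp_c", "max"),
    mean_humidity_pct=("humidity_pct", "mean"),
    readings_above_30c=("temp_c", lambda s: int((s > 30).sum())),
).reset_index()

runs = pd.read_csv("production_runs.csv")  # expected columns: run_id, pallet_id, scrap_rate, ...
training_table = runs.merge(history_features, on="pallet_id", how="left")
print(training_table.head())
```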

Shervin Khodabandeh: Chris, I want to ask you about talent. You’ve been talking about a lot of innovation, a lot of cool ideas — different groups, internal and external, coming together to really experiment, try new things, and make a game-changing impact. What do you think it takes to get the right talent, motivate them, keep them excited, and sort of get that virtuous cycle of excitement and energy and innovation going?

Chris Couch: It’s a great question. I think the answer may be a little different depending on what sort of technical talent you’re talking about. The way that we would think about a manufacturing process engineer or controls engineer may be a bit different [from] how we think about folks with different skills in the world of AI, and sometimes the talent is in different places in the country. I’m not sure there’s a one-size-fits-all answer. In general, when we find people that we would like to bring into the company, I think if we can show them that the sustained commitment to innovation and doing cool stuff is real, that helps a lot. I think being able to prove to people that you’re willing to stay the course in what you’re investing in is part of the story.

Then the second thing that I think is important is just the culture. Having people believe that, in addition to investments in resource availability, we’re just serious about being innovative. We’re just serious about doing things better. We’re serious about winning through technology, from the boardroom to the shop floor. If that culture is real, people know it, and if it’s not real and you’re faking it, I think people know it. You can’t earn that in a quarter; you’ve got to earn it over a couple or several years. I like to think that we’ve done a pretty good job with that, but that’s really key in my mind.

Sam Ransbotham: Chris, many thanks for taking the time to talk with us today. You brought out some quite interesting points. Thanks for taking the time.

Shervin Khodabandeh: Yeah, Chris, thank you so much.

Chris Couch: You’re more than welcome. Hopefully, you can tell I’m excited about AI. I’m excited about what it can do in manufacturing as well as other industries. I think it’s going to be a fun future and [am] looking forward to [helping to] build it.

Sam Ransbotham: Shervin, Chris covered quite a few key points. What struck you as particularly noteworthy?

Shervin Khodabandeh: I thought that was really, really insightful. Obviously, they’ve done a ton with AI and a lot of innovation and cool ideas, and they’ve put many of those into production. I felt a lot of the key steps in getting value from AI that we’ve been talking about [were] echoed in what he talked about: the notion of experimentation and test and learn; the culture and importance of allowing folks to try ideas, fail fast, and then move on; the notion of focusing on a few things to scale — focusing on a lot to sort of test and prototype, but a few to scale and invest behind. I thought that was really interesting.

Sam Ransbotham: I thought Chris also had a nice blend of both excitement and patience. I mean, [he’s] clearly excited about some of the things they’re doing, but at the same time, some of the initiatives were taking two years or so to come to fruition. That has to be hard to balance — being excited about something and then also waiting two years for it to come out. I thought that was a nice blend.

Shervin Khodabandeh: Also, to that point, the importance of focus. I mean, once you’ve picked it and you’ve decided that this is the right thing to do, and you’re sort of seeing it progressing toward that, realizing that it’s not time to give up now — you just have to mobilize and double down on it.

The one thing that really struck me a lot was the importance of culture and how he said, “From [the] boardroom all the way to middle management, they have to believe that we’re behind it, and we’re investing, and this is not just a fad.” That has to be sort of permeating across the entire organization to keep talent really excited and interested.

Sam Ransbotham: And it went down even into the people who were using the systems. I thought that was a beautiful example of people who may have been so busy trying to just get their job done that they couldn’t step back and think a little bit. He gave a great example of how that freedom of having the machine do some of the work let the human do things that humans are good at. He covered almost all of the steps in our prior report about the different ways of people working with machines. We didn’t prompt him for that.

Shervin Khodabandeh: The other thing I really liked was his view on talent. I asked him, “What does it take to recruit and cultivate and retain good talent?” He said, “It’s not a one-size-fits-all [approach].” There’s that recognition that not all talent is cut from the same cloth: Different people with different skill sets have different sensibilities, and they’re looking for different things. But the common theme is that the people who go there want a continuous focus on and commitment to innovation, and they want to see that; maybe that’s the common thread. Then, data scientists and technologists and chemists and engineers might have different career paths and career aspirations, but they all share in that common striving for innovation.

Sam Ransbotham: I don’t think he mentioned it, but Chris is a Techstars mentor, and I’m sure that some of that background also influences the way he thinks about different people and different ideas and how that talent can come together.

Shervin Khodabandeh: Yup, that’s right. He didn’t mention it, but that’s true.

Sam Ransbotham: Thanks for joining us today. Next time, we’ll talk with Huiming Qu about how The Home Depot continues to build its AI capabilities. Please join us.

Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.
