
Brian Yutko

Boeing

United States

Brian Yutko is vice president and chief engineer for sustainability and future mobility at Boeing. He has also served as chief technologist for Boeing NeXt and chief strategy officer for Boeing subsidiary Aurora Flight Sciences, where he held multiple roles before its acquisition. Previously, he was a research engineer and postdoctoral associate at MIT, focusing on aircraft design and optimization, and a mechanical design engineer at NASA’s Kennedy Space Center. He holds a doctorate and a master’s degree in aeronautics and astronautics from MIT and a bachelor’s degree from Pennsylvania State University.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree. “RAI will largely manifest as part of the existing safety processes within aerospace. This will be iterative between technology and nontechnology stakeholder inputs.”
Statement: Mature RAI programs minimize AI system failures.
Response: Neither agree nor disagree. “RAI programs may eliminate some types of unintended behaviors from certain learned models, but overall success or failure at automating a function within a system will be determined by other factors.”
Statement: RAI constrains AI-related innovation.
Response: Disagree. “RAI is a basic requirement for AI-related innovations in the aerospace industry. Performing poorly on RAI attributes could impact safety, which is an inviolable requirement for building aerospace systems. So whenever we develop an unpiloted system to be used undersea, in the air, or in space, we start by considering what steps we need to take to introduce all systems, including AI, safely. Aerospace is an unforgiving domain with zero tolerance for mistakes.

AI is involved in how we design, manufacture, operate, and analyze data in our autonomous aircraft, spacecraft, and submarines. We need to be thoughtful about every interaction humans have with the automation throughout each of these applications. The innovations that matter will ultimately be the ones that pass the rigorous safety and regulatory processes we have in place.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Agree. “Given the complexity of rapidly evolving AI-driven innovations, companies should consider their potential impact on corporate responsibility efforts, especially in terms of possible environmental, ethical, and economic impacts. As an example, autonomous aircraft and spacecraft can go places that might be unsafe for humans; they might also be able to use airspace more efficiently. But how, where, and why these systems are used can introduce complexities that should have some level of oversight tied to clearly stated principles.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Neither agree nor disagree. “The ethical questions arising from automation are important to consider at the most senior levels of every company that uses it, but management should be tailored to the risk of the specific business or technology application. Some applications need significantly more senior leadership oversight than others. As an example, computer vision that is used to spot debris in composite part manufacturing likely doesn’t pose significant ethical issues. So if this is the only focus of a business, it may not require senior oversight. But defense systems may need frequent ethics inputs, intervention, and established guiding principles. It’s most effective to have experts who are well versed both in the application subject area and in the ethical principles for designing automated systems in general. Companies or applications that carry a significantly higher risk of ethical dilemmas are likely to benefit from a direct line to the CEO or the board, in the same way that safety is managed in safety-critical domains such as aviation.

I lead teams that build technologies for autonomous transportation. In this domain, the most commonly discussed ethical conundrum is the trolley problem. In this thought experiment, a human subject is faced with a choice: A trolley is rolling down a track, and in the absence of any corrective action, it will harm a handful of people. If the subject chooses to flip a switch, the trolley will change tracks and harm one person instead of many. What’s our unfortunate subject to do? Many person-years have been spent debating this question and its ethical implications for how we design robots that may face the same dilemma. In autonomous aviation, I believe the solution to the trolley problem is a simple one: Design a system that does not encounter a trolley problem in the first place.”