Responsible AI / Panelist

Slawek Kierner

Intuitive

United States

Slawek Kierner is senior vice president of data, platforms, and machine learning at Intuitive. Kierner previously served as senior vice president and chief data and analytics officer at Humana and as chief data and analytics officer for the Microsoft Business Applications Group, and led digital marketing operations and information systems for Procter & Gamble’s European business. He also served as a board member and CIO for P&G’s Central Europe division.

Learn more about Slawek Kierner’s approach to AI on the Me, Myself, and AI podcast.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Strongly disagree
“Recent geopolitical events have increased the sensitivity of executives toward diversity and ethics, while the successful industry transformations driven by AI have made AI a strategic topic. RAI is at the intersection of both and hence makes it onto the boardroom agenda, where it is seen as much more than a technology issue.”

Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree
“Mature responsible AI programs work on many levels to increase the robustness of solutions that include AI components. This starts with RAI’s influence on the culture of data science and engineering teams, continues with executive oversight and clear accountability for every step of the process, and, on the technical side, ensures that algorithms, as well as the data used for training and predictions, are audited and monitored for drift or abnormal behavior. All of these steps greatly increase the robustness of the whole AI DevOps process and minimize the risk of AI system failures.”

Statement: RAI constrains AI-related innovation.
Response: Strongly disagree
“A comprehensive responsible AI program engages executive leadership in the oversight of AI efforts and provides a platform for education, discussion, and ideation on opportunities to use augmented intelligence to accelerate strategy execution. This is important because, in enterprises with a mature machine learning program, key opportunities lie in the change management of processes that can be enhanced with AI. Therefore, I have seen RAI act as an accelerator of AI adoption simply by ensuring that AI is targeted at appropriate use cases: fair, ethical, and free of unintended bias.”

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Neither agree nor disagree
“Every company likely needs to find its own answer depending on its industry and its state of maturity in AI and ESG. Most often, though, with AI maturity evolving rapidly, offering significant opportunity, and requiring specific skills, AI needs multifunctional, diverse, and capable governance, with dedicated focus and visibility at the highest levels of management. Hence, dedicated steering is likely more appropriate for now, while over time we may align the management of its specific risks with other oversight areas, such as corporate social responsibility.”

Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“Augmented intelligence has significant potential to accelerate the transformation of health care by improving health outcomes through novel therapies, drug discovery, robotic automation, and new approaches to care, and by significantly lowering administrative costs. Progress in such a delicate space, one that touches human life, depends heavily on patients and clinicians accepting AI technology, and that requires them to trust that its use is responsible. Therefore, direct engagement and oversight by the top leadership group is fundamental: it creates trust while also helping to sponsor AI adoption programs and accelerate the path to value.”