
Nitzan Mekel-Bobrov

eBay

United States

Nitzan Mekel-Bobrov is chief AI officer at eBay. He leads the company’s vision and strategy for transforming how it delivers value to sellers and buyers around the globe through AI-led experiences, such as semantic recommenders, reasoning systems, visual understanding, and immersive visual experiences. Mekel-Bobrov has led the AI organizations at some of the largest brands in health care, financial services, and e-commerce, spanning AI science, engineering, and product development. He holds a doctorate in computational genomics and a master’s degree in computer science from the University of Chicago.

Learn more about Nitzan Mekel-Bobrov’s approach to AI on the Me, Myself, and AI podcast.

Voting History

Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Agree
“Similar to data governance or information security frameworks, an effective RAI program should account for both internally developed AI solutions and third-party AI tools, broadly defined as any software solution that includes AI models or enables the development or execution of AI models. Third-party AI tools, including open-source models, vendor platforms, and commercial APIs, have become an essential part of virtually every organization’s AI strategy in one form or another, so much so that it is often difficult to disentangle the internal components from the external ones. Consequently, an RAI program needs to include policies on the use of third-party tools, evaluation criteria, and the necessary guardrails. The most scalable approach is usually to apply the same policies that govern internal solutions to third-party tools, but rather than insist on the same practices being applied to ensure that these policies are followed, the desired outcomes defined in the policies should be measured in a consistent fashion, regardless of the tool’s provenance.”
Statement: Executives usually think of RAI as a technology issue.
Response: Strongly agree
“Executives usually understand that the use of AI has implications beyond technology, particularly relating to legal, risk, and compliance considerations. This is particularly true in more regulated industries. The challenge, however, is that RAI as a solution framework for addressing these considerations is usually seen as purely a technology issue. In other words, there is a pervasive misconception that technology can solve all the concerns about the potential misuse of AI. In reality, technology is only part of the solution. This is precisely why a mature RAI framework includes many additional components, with key examples being processes, accountability and governance, and a corporate culture that embeds RAI practices into the normal way of doing business.”
Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree
“AI systems inherently take over decision-making that has historically been in the hands of human actors. This shift in where decision-making occurs results in two key challenges when trying to prevent failures: (1) AI systems are often the product of many different components, inputs, and outputs, resulting in a fragmentation of decision-making across many different agents; and (2) AI algorithms are increasingly opaque and difficult to interpret, rendering both the forward-looking prediction and backward-looking reconstruction of a decision extremely challenging. Consequently, failure estimation and prevention are difficult to set up. Moreover, when failures occur, it is difficult to ascertain the precise reason and pinpoint the point of failure.

A mature responsible AI framework accounts for these challenges by incorporating into the system development life cycle end-to-end tracking and measurement, governance over the role played by each component and integration point in the final decision output, and explainability/transparency analytics. In doing so, the responsible AI framework provides the enabling capabilities for preventing AI system failures ahead of their decision-making.”
Statement: RAI constrains AI-related innovation.
Response: Disagree
“Organizations should promote a culture that empowers individuals to raise concerns over AI systems in a way that doesn’t stifle innovation, ensuring that any constraints that are set evolve AI in a beneficial way. Measures like clear success criteria, incentives, and training are all critical requirements. By establishing responsible, transparent governance structures and accountability, organizations can have the confidence and trust in their AI technologies to further their innovation efforts.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree
“AI is no longer something that happens in an isolated research lab; it’s become part of standard business-as-usual operations and therefore needs to be tied directly to a company’s overall corporate citizenship. Corporate social responsibility is a general concept and is already well suited to incorporate responsible AI efforts as well. In fact, many of the core ideas behind responsible AI, such as bias prevention, transparency, and fairness, are already aligned with the fundamental principles of corporate social responsibility, so it should already feel natural for an organization to tie in its AI efforts.”