Responsible AI / Panelist

Suhas Manangi


United States

Suhas Manangi is the product head of the AI/ML Defense Platform team at Airbnb, where he leads work on accelerating the use of AI and machine learning to fight fraud and abuse and ensure trust and safety on its online marketplace. Before joining Airbnb, he spent many years working with trust and safety teams at Amazon, Lyft, and Microsoft. Manangi is also active in the product management community, helping product managers adopt AI and machine learning in their products.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree
“There is not enough awareness for executives to have an opinion on responsible AI. Because AI is a complex technology, and is often seen as a magic solution to all problems, there is an underlying assumption that AI can easily be built to be responsible. There are good intentions, but checks and balances are needed to increase awareness.”
Statement: RAI constrains AI-related innovation.
Response: Strongly disagree
“We humans are biased in so many ways, based on our experiences and motives. Our world is plagued with discrimination, unconscious bias, racism, nepotism, tribalism, and more. When raw data generated from a real world with these real problems is used to produce an efficient AI, that AI acts as a mirror that reflects and amplifies societal problems.

AI is good at optimizing for an objective function, and often we choose objective functions such as revenue or profit. If we are not conscious of what data we feed into AI, we may end up using features such as ‘type of credit card used’ or ‘credit score’ that indirectly represent age, gender, race, or income, amplifying societal inequalities even further.

Can we build an equitable world by building equitable AI? And isn’t that the innovation, and the only innovation, we want? If yes, then responsible AI helps drive that innovation rather than constraining it.”