Artificial Intelligence Disclosures Are Key to Customer Trust
A panel of experts weighs in on whether organizations should disclose how their products use AI.
In collaboration with BCG

For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us gain insights into how responsible artificial intelligence (RAI) is being implemented in organizations worldwide. This year, we’re examining organizational capacity to address AI-related risks. In our previous article, we asked our experts about organizational readiness for the first comprehensive AI law on the books: the European Union’s AI Act. This month, we asked them to react to the following provocation: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Our experts are overwhelmingly in favor of mandatory disclosures, with 84% either agreeing or strongly agreeing with the statement. At the same time, they have a wide array of views on implementing such disclosures. Below, we share insights from our panelists and draw on our own RAI experience to offer recommendations for how organizations might approach AI-related disclosures, whether they’re mandated by law or voluntarily provided to promote consumer trust and confidence.
AI Disclosures Foster Trust in Line With Core RAI Principles
Our experts point out that disclosures promote transparency, which, as Aboitiz Data Innovation’s David R. Hardoon contends, is “the foundation of an effective RAI framework.” Mark Surman of the Mozilla Foundation observes, “Disclosures are the most basic form of transparency” required in all facets of life. Universidad Católica del Uruguay’s Carolina Aguerre argues that just as “many products and services are labeled for sustainability, climate, fair trade, or nutritional value,” AI products and services should come with transparent disclosures. Dataiku’s Triveni Gandhi agrees that “disclosing the use of AI to customers is a cornerstone of transparency in an ever-evolving landscape” with broad precedent.
Our experts also believe that companies have an ethical obligation to be transparent about their use of AI. H&M Group’s Linda Leopold contends that transparency is “an ethical obligation toward customers, enabling informed decisions and enhancing trust.” National University of Singapore’s Simon Chesterman agrees that the purpose of transparency is “to enable informed decision-making and risk assessment.” As such, ForHumanity’s Ryan Carrier argues that AI users “have a right to know the risks associated with the use of a tool, just like the side effects of a drug.” Armed with information, he adds, users of a large language model can seek to mitigate risks, such as “potential hallucinations and made-up sources,” by checking the outputs before using them.
Strongly agree
“Companies should disclose their use of AI in their products and offerings to customers for transparency, trust, informed consent, accountability, ethical considerations, and consumer protection. Disclosures help companies comply with laws and regulations, ensure ethical practices, and encourage investment in robust, ethical, and reliable AI systems.”
Tshilidzi Marwala
United Nations University
Beyond any ethical obligation, disclosures help promote trust and build confidence among customers, investors, and employees. Ellen Nielsen, formerly of Chevron, says, “Transparency is paramount to maintaining consumer trust” and argues that “companies should be required to disclose the use of AI, providing consumers with the necessary information to make informed decisions and understand decisions made by AI.” And OdiseIA’s Richard Benjamins asserts that there are “business reasons for voluntary disclosures,” even where they’re not required by law. For investors “looking for RAI governance practices in companies they want to invest in,” customers considering RAI behavior in buying decisions, and employees who value RAI, such transparency can be a key factor in selecting and staying with a company, he says.
In addition to fostering trust with market participants, transparency from disclosures contributes to societal trust and confidence in AI. As Surman observes, “Transparency is key to building public confidence in AI and giving people agency over how they interact with automated systems.” RAI Institute’s Jeff Easley adds that “requiring disclosures … will serve to hold companies accountable for the ethical use of AI, [which] can encourage responsible AI development and deployment, mitigating risks such as bias, discrimination, and unintended consequences.”
Implementing Effective Disclosures Is Not Without Challenges
While most of our experts endorse disclosures, they also acknowledge that implementing them may not be easy. Nasdaq’s Douglas Hamilton observes, “While transparency is generally desirable … mandating such disclosure poses challenges.” For example, Hamilton says, “no good definition exists today or has ever really existed that differentiates AI from software or other decision-making systems, so clarifying when disclosure is required may be difficult.” Relatedly, MIT’s Sanjay Sarma says he’s “generally hesitant to mandate declarations if the specific harms that are intended to be addressed are not explicitly listed.” Beyond these definitional challenges, Automation Anywhere’s Yan Chow cautions that disclosures “can spill competitive secrets,” and experts like Carrier note that “disclosures need not cover intellectual property and/or trade secrets.”
Disagree
“Mandatory AI disclosures would impede innovation and overburden businesses, especially smaller ones. Rapidly evolving AI technology means that requirements could quickly become outdated, leading to high compliance costs and legal risks. Such disclosures would clutter interfaces and create unnecessary confusion, potentially overwhelming average consumers who lack the technical understanding to interpret this information meaningfully. Mandating AI disclosures … could lead to notification fatigue, diminishing the impact of more critical disclosures.”
Amit Shah
Instalily.ai
While Franziska Weindauer of TÜV AI.Lab believes that “adding a disclaimer should be feasible,” other experts say poor-quality disclosures could undermine transparency and accountability. Chow cautions, “AI can be hard to explain,” and EasyJet’s Ben Dias contends, “The biggest challenge companies will face is in how to explain, in customer-friendly language, what type of AI they are using and for what.” Johann Laux of the Oxford Internet Institute says, “Disclosures should be made in plain English, not hidden away in terms and conditions.” GovLab’s Stefaan Verhulst agrees that “disclosures should be user-friendly and visually accessible to ensure comprehension.”
Neither agree nor disagree
“As AI is a general-purpose technology, it will inevitably become integral to many products in the future. Requiring companies to disclose the use of AI (and details) for every single product is impractical. Although organizations could provide such information easily, it might overwhelm customers, and, much like privacy statements, such disclosures would be seldom read and often ignored, especially as AI becomes ubiquitous.”
Rainer Hoffmann
EnBW
Locus Robotics’ Gina Chung cautions that “as AI becomes more pervasive, detailing every application may become impractical.” Stanford CodeX fellow Riyanka Roy Choudhury agrees that “universal disclosure may not always be practical, as AI is now integral to many workflows, and not all applications necessitate disclosure.” For example, Harvard Business School’s Katia Walsh says that disclosures may not be necessary when companies “use AI for secondary research, to look up contact information, or prepare for a sales or client meeting … the same way they do not have to disclose the use of Google, a social network, or a mobile phone.” Similarly, Harvard University researcher Laura Haaber Ihle says, “If a company is using AI to prioritize its internal emails, point employees to free parking spaces, or produce slides, then such disclosure would seem unnecessary, [but if it is using AI] to profile and categorize customers and make decisions about them, then they should be made aware.”
Disclosures Should Be Required in Certain Contexts
Most experts agree that companies should disclose when customers are interacting with AI and when AI is used in consequential decisions. Laux argues, “Companies should most certainly have to disclose when a consumer is interacting with an AI system,” and Walsh agrees that “customers should know that they are interacting with a technology instead of a human.” Leopold adds, “Customers should know when … an AI system is making important decisions that affect them.” Easley emphasizes that “informed consent [through disclosures] is crucial, particularly in sectors like health care, finance, and hiring, where AI can have significant impacts on individuals’ lives.” Finally, Dias says companies should disclose “how a human can be contacted to challenge any decisions or actions taken by the AI.”
Many experts also believe that companies should be required to make data-related disclosures about AI. Dias contends, “The main concern most customers will have is around how their personal data will be used by the AI and, in particular, whether the AI will use their personal data to train itself.” Nielsen agrees that “consumers have a right to know how data is being used and manipulated.” Carrier argues that “data sets and data management and governance practices should be disclosed.” Verhulst remarks, “As a best practice, companies should not only disclose the use of AI in their operations but also detail how they will manage and protect the data generated and collected by these AI applications.” The Wharton School’s Kartik Hosanagar adds that “disclosures should cover the training data for models” and explains that even where “not required by regulations, it is important from a customer trust standpoint to disclose AI use and the kinds of data used to train AI and its purpose.”
Recommendations
In sum, for organizations seeking to make AI-related disclosures, we recommend the following:
1. Consider core RAI principles. Disclosures are a key means of giving effect to core RAI principles such as transparency and accountability. When making AI-related disclosures, companies should consider how those disclosures promote these principles in practice and adapt them to fit the company’s context.
2. Make disclosures easy to understand. How organizations implement disclosures matters: To promote transparency and accountability, disclosures need to be as easy to understand and as user-friendly as possible. Beyond including disclosures in legal documents (such as terms of service or privacy policies), consider engaging cross-functional teams and stakeholders, including UX/UI designers, to design and implement more effective disclosures that appear in the right place at the right time.
3. Go beyond legal requirements. Even as laws begin to mandate AI-related disclosures, companies should consider going further than those laws require. Companies have an ethical responsibility to make disclosures that promote trust and confidence in AI, particularly where AI is being used to make decisions of consequence to individuals. This includes establishing clear internal policies specifying when disclosures are necessary, what they should include, and how they will be made available to users (see the sketch following this list).
4. Publish details of RAI practices. Moving beyond product-level disclosures, companies should publish their AI code of conduct or details of their responsible AI program. These documents should specify when and how the company will disclose the use of AI in its products and offerings, and what those disclosures will include. This is a key step in building trust with customers, investors, and employees.
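To make the internal policies described in recommendation 3 concrete, here is a minimal, hypothetical sketch in Python. Every name in it (AIUseCase, disclosure_required, and the attribute fields) is our own illustrative assumption rather than any panelist’s framework; the sketch simply encodes, as rules, the contexts our experts flagged: customers interacting with AI, consequential decisions, personal data used for training, and purely internal productivity uses.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """Attributes of a single AI feature; field names are illustrative assumptions."""

    customer_facing_interaction: bool  # e.g., a chatbot the customer converses with
    consequential_decision: bool       # e.g., credit, hiring, or health care decisions
    trains_on_personal_data: bool      # customer data used to train or fine-tune models
    internal_productivity_only: bool   # e.g., drafting slides or triaging internal email


def disclosure_required(use_case: AIUseCase) -> bool:
    """Return True when this hypothetical policy mandates a customer-facing disclosure."""
    # Purely internal productivity uses (per Walsh and Ihle) need no customer disclosure.
    if use_case.internal_productivity_only and not (
        use_case.customer_facing_interaction
        or use_case.consequential_decision
        or use_case.trains_on_personal_data
    ):
        return False
    # Disclose when customers interact with AI (Laux, Walsh), when AI makes
    # consequential decisions about them (Leopold, Easley), or when personal
    # data is used for training (Dias, Carrier, Hosanagar).
    return (
        use_case.customer_facing_interaction
        or use_case.consequential_decision
        or use_case.trains_on_personal_data
    )


# Example: a customer-service chatbot whose transcripts also feed model training.
chatbot = AIUseCase(
    customer_facing_interaction=True,
    consequential_decision=False,
    trains_on_personal_data=True,
    internal_productivity_only=False,
)
assert disclosure_required(chatbot)
```

In practice, such an encoding would sit alongside, not replace, human review: edge cases like Ihle’s customer-profiling example still call for judgment about what a given disclosure should say and where it should appear.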