Paula Goldman is the first chief ethical and humane use officer at Salesforce, where she leads efforts to create a framework to build and deploy ethical technology that optimizes social benefit. Previously, she served as vice president, global lead, for the Tech and Society Solutions Lab at Omidyar Network, where she also created and led its global efforts to build the impact investing movement through its portfolio, partnerships, and thought leadership. She has a doctorate from Harvard, a master's degree from Princeton, and a bachelor's degree from the University of California, Berkeley.
Learn more about Paula Goldman’s approach to AI in the Me, Myself, and AI podcast.
|Most RAI programs are unprepared to address the risks of new generative AI tools. Neither agree nor disagree||
“Companies, including Salesforce, have been working to advance ethical AI for a while now. For example, we’ve operationalized responsible AI guidelines, incorporated ethical AI guardrails in our external-facing acceptable-use policies, and launched an AI ethics maturity model to encourage customers to build and deploy AI responsibly.
But generative AI introduces new risks with higher stakes in the context of a global business. What makes this technology so unique is that we’re moving from classification and prediction of data to content creation — often using vast amounts of data to train foundation models. It’s unclear how ready companies are for these fast-moving advances in tech, but there are teams actively working in this area. At Salesforce, we’ve developed more specific guidelines to steer the responsible development of generative AI at the company and beyond.”
|RAI programs effectively address the risks of third-party AI tools. Strongly agree||
“Today, it’s not enough to simply deliver the technological capabilities of AI. Our organizations must prioritize responsible innovation, both in our own technology and the tools we leverage from partners.
This is especially important now, as businesses race to announce new partnerships and invest in startups to help them bring generative AI to market. We all have a responsibility to guide how this transformative technology can and should be used, and to ensure that the tools we bring to market are safe, accurate, and ethical for all.”
|Executives usually think of RAI as a technology issue. Agree||“Tech ethics is as much about changing culture as it is about technology. While it’s important to have a dedicated team to drive and measure the responsible design, development, and use of technology, responsible AI can be achieved only once it is owned by everyone in the organization. At Salesforce, we’re working to help every employee understand and bring this perspective into their work, no matter where they sit in the company.”|
|Mature RAI programs minimize AI system failures. Strongly agree||“Some of AI’s greatest failures to date have been the product of bias, whether it’s recruiting tools that favor men over women or facial recognition programs that misidentify people of color. Embedding ethics and inclusion in the design and delivery of AI not only helps to mitigate bias — it also helps to increase the accuracy and relevancy of our models, and to increase their performance in all kinds of situations once deployed.”|
|RAI constrains AI-related innovation. Strongly disagree||
“AI has the power to transform the way we live and work in profound ways, but we’ve also seen it exacerbate social inequities. As we look to deliver on the technological potential of AI, we have an important responsibility to ensure that AI is ethical, safe, and inclusive for all.
We believe investing in principles and guardrails can unleash the positive potential of AI by mitigating harms and building trust. New practices like consequence scanning and Salesforce’s Build With Intention program are already bringing diverse perspectives and innovative new ideas to this space. These investments in responsible AI not only deliver better products; they help set new industry standards and create a shared vision for an innovative and ethical future of AI.”
|Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Disagree||“Ethical AI is an essential component of business strategy, especially for tech companies. Our technology ethics team sits at the heart of our product strategy and operations so we can shape how products are built and used, drive excellence and innovation, and deepen trust with customers and stakeholders. Alignment to CSR goals is no doubt helpful, but the goal of technology ethics is bigger and drives at business fundamentals.”|
|Responsible AI should be a part of the top management agenda. Strongly agree||
“Artificial intelligence is already reshaping our society and world. In the workplace, AI is powerful; it can augment and extend the capabilities of employees, enhance human decisions, and increase productivity. For businesses, it’s a remarkable asset, making companies smarter and driving better decisions and outcomes. And AI benefits society in countless ways, from automating dangerous tasks to the faster detection of cancer. In the years to come, it has the power to transform every trade, profession, and company.
But while there are tangible benefits from AI, it is important to consider the potential negative impacts on people’s lives from the technology’s role in significant decisions such as loan approvals or criminal investigations. Harnessing AI responsibly requires collective focus and action from tech leaders, in partnership with civil society and government, to ensure that it’s being built and used in a way that is responsible, accountable, transparent, empowering, and inclusive. As we navigate increasing complexity and the unknowns of an AI-powered future, establishing a clear ethical framework isn’t optional; it’s vital to the technology’s future.”