Trust in technology is being eroded by concerns about data security — and if tech leaders want that trust back, they need to think about their products’ privacy implications.
At Google’s annual developer conference, Google I/O 2018, the company’s CEO, Sundar Pichai, proudly demonstrated Google Duplex, a new artificial intelligence voice technology, making a remarkably human-sounding reservation over the phone. The problem was that the actual human on the other end of the line did not know she was interacting with a bot. Only after Google faced backlash over concerns about this kind of deception did the company agree to release Duplex with disclosure built in.
As software continues to “eat the world,” the potential for privacy and ethics violations increases. It’s clear that technology executives and managers need to recognize the industry-wide factors that have contributed to the current fractured state of customer trust and move toward a framework that puts users first.
First, let’s examine some of the factors that have contributed to the current status quo.
Believing Moore’s law for too long. Intel cofounder Gordon Moore famously predicted that computing power, measured by the number of transistors on a chip, would double roughly every two years, leading to exponential growth in the field. Moore’s law persisted throughout the hardware and software age, and only recently have we begun to consider its demise. With such a focus on growth and velocity of innovation, many technologists have found themselves ill-prepared to consider the impact of their technology.
Favoring the individual company over the collective users. In the tragedy of the commons, individual rationality and collective rationality are at odds with one another. The same conundrum exists today in tech — companies capture and use customers’ personal information but fail to show concern about the overall damage they cause through their individual actions.
Companies have acted in favor of increasing market share, but in the process have eroded the confidence and trust of customers. This was quite clear as we watched Facebook’s Mark Zuckerberg grilled by Congress over the consequences of the Cambridge Analytica scandal involving Facebook user data.
Leading with tech first, questions second (or not at all). The rise in artificial and augmented intelligence has led to a proliferation of technologies that create, mimic, and facilitate conversation. This means designers are now introducing empathy, personality, and creativity to machine-human interaction in ways that affect user experience. The relationship a machine has with (and to) a user becomes a new competitive advantage.
Everyday objects are now becoming smart objects with the ability to interact with humans. What are the guidelines for structuring these conversations? Google has raised the question of whether users should be informed that they are interacting with a computer. What ethical rules should be in play when it comes to using these products, whether it’s a voice assistant, a TV, or even a car?
Companies that excel in addressing these questions to gain the trust of users will be given the opportunity to offer new products and services to those users. The key ingredient here — and this cannot be stated too often — is trust.
Moving From a “Can We?” to a “Should We?” Framework
Technology and business experts must do a better job of anticipating challenges before making decisions, asking key user-centered questions before launching new products. The following questions about a technology’s impact must be systematically addressed before it is brought to market:
- Will this technology result in overall good?
- What might be some unintended consequences of this technology?
- What are the social and ethical impacts of the technology?
- Will this technology augment human intellect, disrupt it, or substitute for it?
- How could this technology be used negatively against users?
Technologists won’t be able to answer these questions by themselves — which brings us to the most important question all executives need to ask: What leadership structures do we need to have in place to guide the future evolution of the technology while controlling for unintended negative consequences?
We argue that the answer to this last question needs to be more than simply “we need more engineers.” Instead, it is important for leaders to embrace the following six principles and ensure they are introduced at every level of the organization.
- Assume responsibility. Companies need to assume ethical and legal responsibility for the impact of their technology on society. The burden of proof should be on companies to provide reasonable assurances that they have scrutinized the impact that their products would have.
- Offer transparency. It is important that individuals have the ability to access information about any technology they use. Companies should provide frequent impact disclosures on all developing technology, including answers to the questions about their impact. Companies working on the cutting edge of AI should be subject to external review.
- Give users the right to be forgotten. If customers would like to leave a product or system, they should be able to do so easily, with one click. This would apply to user accounts or personal and transactional data stored by a company. With the European Union’s General Data Protection Regulation having taken effect in May 2018, this is now a legal requirement for companies doing business in Europe, not an option.
- Anticipate technology adoption challenges. Questions about a technology’s impact should not be addressed only after the technology has been developed or in the case of public backlash. Concerns of intended and unintended impact need to be addressed during the engineering process and embedded in the development of a technology. Ethical considerations can no longer be an afterthought.
- Conduct experiments. Companies must seek empirical evidence to determine how people react to new technology or to changes in existing technology. When introducing technology-enabled product features, companies should conduct statistical experiments to determine whether users like the changes. For example, if Facebook decides to provide automatic updates on news feeds, it must first conduct multiple tests with a subset of users and then release that data to the public.
- Assemble a team of diverse thinkers. Tech firms must integrate individuals with expertise outside of business and technology into decision-making points across organizations. New skill sets are required when, for example, companies trying to develop conversational commerce technologies seek to design a user experience that is more accessible and humane. Linguists, scriptwriters, human development specialists, sociologists, physicians, scientists, psychologists, and ethicists can help to evaluate the quality of interaction and appropriateness of responses, how machines make users feel, and how technology could impact society. Technology projects power, and how that power should be used is not a technological question but an ethical, social, and political one.
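To make the "conduct experiments" principle above concrete: one common approach is a controlled A/B test, in which a product change is shown to a random subset of users and the difference in response rates is checked for statistical significance. The sketch below is illustrative only — the function name, sample sizes, and engagement numbers are all hypothetical, and it uses a simple two-sample proportion test rather than any particular company's methodology.

```python
# Hypothetical A/B test: did a product change move an engagement metric?
# Implements a standard two-sample proportion z-test using only the stdlib.
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled rate under the null hypothesis that both groups behave alike.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical rollout: 4.0% engagement in control vs. 4.6% in the variant.
z, p = two_proportion_z_test(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # ship only if the lift is significant
```

In practice, teams would pre-register the metric and sample size before the experiment, run it on a small fraction of users, and only roll the change out broadly if the observed effect is both statistically and practically significant — which is the discipline the principle above is asking for.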
In summary, it’s time to stop thinking of Moore’s law as if it were a natural law. Humanizing technology should be a core capability of companies for both ethical and competitive reasons. By striking a balance between technological innovation and concerns for users, organizations can achieve a new competitive advantage — one that legacy companies may, in fact, be better poised to gain as many digital natives face rebuilding customer trust as their next challenge.