AI, Robots, and Ethics in the Age of COVID-19

Before COVID-19, most people had some degree of apprehension about robots and artificial intelligence. Though their beliefs may have been initially shaped by dystopian depictions of the technology in science fiction, their discomfort was reinforced by legitimate concerns. Some of AI’s business applications were indeed leading to the loss of jobs, the reinforcement of biases, and infringements on data privacy.

Those worries appear to have been set aside since the onset of the pandemic as AI-infused technologies have been employed to mitigate the spread of the virus. We’ve seen an acceleration of the use of robotics to do the jobs of humans who have been ordered to stay at home or who have been redeployed within the workplace. Labor-replacing robots, for example, are taking over floor cleaning in grocery stores and sorting at recycling centers. AI is also fostering an increased reliance on chatbots for customer service at companies such as PayPal and on machine-driven content monitoring on platforms such as YouTube. Robotic telepresence platforms are providing students in Japan with an “in-person” college graduation experience. Robots are even serving as noisy fans in otherwise empty stadiums during baseball games in Taiwan. On the data side, AI is already showing potential in early efforts to monitor infection rates and trace contacts.

No doubt, more of us are overlooking our former uneasiness about robots and AI when the technology’s perceived value outweighs its anticipated downsides. But there are dangers to this newfound embrace of AI and robots. With robots taking over more and more job functions so that humans can keep a safe distance from one another, it’s important to consider what’s next as we grasp for some semblance of normalcy. What will happen when humans want their former jobs back? And what will we do if tracking for safety’s sake becomes too invasive or seems too creepy, yet is already an entrenched practice?

A New Normal Comes Racing In

Even after a vaccine for COVID-19 is developed (we hope) and the pandemic retreats, it’s hard to imagine life returning to how it was at the start of 2020. Our experiences in the coming months will make it quite easy to normalize automation as a part of our daily lives. Companies that have adopted robots during the crisis may conclude that a significant percentage of their human employees are no longer needed. Consumers who will have spent more time than ever interacting with robots might become accustomed to that type of interaction. When you get used to having food delivered by a robot, you might eventually not even notice the disappearance of a job that was once held by a human. In fact, some people might want to maintain social distancing even when it is no longer strictly needed.

We, as a society, have so far not questioned what types of functions these robots are replacing, because during this pandemic the technology is serving an important role. If these machines help preserve our health and well-being, then our trust in them will increase.

As the time we spend with people outside of our closest personal and work-related social networks diminishes, our bonds to our local communities might start to weaken. With that, our concerns about the consequences of robots and AI may decrease. In addition to losing sight of the scale of job loss driven by the use of robots and AI, we may hastily overlook the forms of bias embedded within AI and the invasiveness of the technology that will be used to track the coronavirus’s spread.

Bias and Privacy Issues Have to Matter

There are several critical considerations to weigh as we become more reliant on AI and robots during the pandemic.

First, as society’s adoption of and comfort with these technologies increase, organizations need to be mindful that the opportunities for bias that we know exist in AI remain a concern. For example, the potential of AI algorithms to assist with health care decision-making is vast, in part because they can be trained on large data sets. AI may be called upon to help with triage decisions in an intensive care unit, such as who gets access to a ventilator, decisions that can have life-and-death ramifications. Given that heart disease is often misdiagnosed in women and that Black patients are frequently undertreated for pain, we know that many forms of bias underlie data sets and can interfere with data quality and how data is analyzed. These problems predate the advent of AI, but they could become more widely encoded into the fabric of the health care system if they are not corrected before AI becomes widespread.

Second, privacy concerns with respect to data collection and data accuracy are a growing problem, and organizations need to pay special attention to this issue. Vast data collection may be necessary for curtailing the spread of disease: Companies around the world are proposing phone-based apps that track individuals’ contact with those diagnosed with or recovering from the virus. Google and Apple, for instance, are partnering on an opt-in app for individuals to self-disclose their COVID-19 diagnosis. One might make a compelling argument that this is justified until the pandemic ends. Yet, once the precedent for this type of surveillance is established, how do you remove that power from governments, companies, and others? Are sunset clauses going to be built into organizations’ data collection and use plans?

The secondary uses of these vast troves of tracking data will undoubtedly entice organizations to hold on to them, especially given the financial profits that could be made from the data. Take the case of the app from Google and Apple. What happens when members of the public demand to get their data back, or when EU data protection and privacy rules require the disposal of the data once it is no longer needed? Cases of abuse stemming from covert data collection and sharing are already well documented. Organizations involved in data collection and analysis, and those charged with their oversight, need to address these issues now rather than later, when individuals will be less forgiving if their data is appropriated for other uses or used in ethically dubious ways.

It is certainly tempting to cast aside norms, regulations, and other protections, such as those around data privacy, in an emergency, when doing so may be what is needed in the short term to protect people and save lives. Yet we must not fail to prepare for what comes after this global emergency. This includes developing retraining strategies to help those whose jobs have been disrupted by the pandemic (already well over 30 million people in the U.S. alone) as they try to return to the workforce, given that some of those jobs are highly susceptible to replacement by automation. We also need to rethink the harmful biases that might have cropped up in the AI applications we’ve adopted: biases these systems will have learned from our adaptive behaviors and modeled through their interactions with us. And although we are living in an unprecedented situation, we need to proactively plan for protections around the adoption of robots and AI. Otherwise, a crisis of another form may be looming.
