As algorithms become increasingly embedded in business — and in life — researchers ask a series of provocative questions meant to spark discussion about their use in research, policy and practice.
The growing importance of algorithms to business and society is a little-discussed feature of our increasingly digital world. Algorithms underpin NSA surveillance, online search engines, corporate security, modern matchmaking and many other activities in both the private and public sectors. They can be a source of competitive advantage (think Google), play an operational role or drive marketing. Just what are algorithms, how are they used, and what happens when influential algorithms go wrong?
Wikipedia and Merriam-Webster define an algorithm as a step-by-step procedure for calculations, data processing and automated reasoning, expressed as a finite list of instructions that result in an outcome.
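That definition is abstract, so a classic concrete instance may help. Euclid's method for finding the greatest common divisor of two integers (chosen here purely as an illustration, not something discussed in the paper) fits the definition exactly: a finite list of instructions, each one well-defined, that always terminates with an outcome.

```python
def gcd(a, b):
    """Euclid's algorithm: a step-by-step procedure that takes two
    positive integers and terminates with their greatest common divisor."""
    while b != 0:        # each pass through the loop is one discrete step
        a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
    return a             # when b reaches 0, a holds the result

print(gcd(48, 18))  # prints 6
```

Every algorithm, from this ten-line example to the systems ranking search results, shares that same basic character: inputs in, a finite sequence of rules, an outcome out.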
In a recent paper, Governing Algorithms: A Provocation Piece, three New York University researchers — Solon Barocas, Sophie Hood and Malte Ziewitz — explore how algorithms are influencing research, policy and practice by raising more than three dozen thought-provoking questions, including:
- How do algorithms change existing networks of accountability? By portraying [algorithms] as autonomous decision-makers, their operators can defer accountability. Where should accountability lie when an algorithm goes awry, and how could this accountability be engineered?
- Should algorithms be subject to more or less scrutiny in different contexts, such as high-frequency trading, predictive policing, retail marketing, political campaigns, and medical diagnosis?
The issue of algorithm-related accountability is less esoteric than you might think. In a 2012 Limn article, Cornell University researcher Tarleton Gillespie explores the controversy over Twitter Trends and the “algorithmic ‘censorship’ of #occupywallstreet.”
As Gillespie explains, Twitter Trends is a simple list of 10 terms provided by Twitter on its homepage. The algorithm behind Trends digests the 250 million tweets posted every day and surfaces the most vigorously discussed terms, either globally or for a user’s chosen country or city.
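The core idea Gillespie describes, surfacing terms whose current volume spikes above their usual level, can be sketched in a few lines. To be clear, Twitter's actual Trends algorithm is proprietary and weighs many more signals (velocity, novelty, geography); the function and data below are hypothetical, a minimal "spike versus baseline" toy:

```python
from collections import Counter

def trending(current_tweets, baseline_counts, top_n=10):
    """Toy trend detector: rank terms by how far their frequency in the
    current window exceeds a historical baseline count. This is an
    illustration only, not Twitter's actual (proprietary) algorithm."""
    current = Counter(
        term for tweet in current_tweets for term in tweet.lower().split()
    )
    scores = {
        term: count / (baseline_counts.get(term, 0) + 1)  # novelty ratio
        for term, count in current.items()
    }
    # highest-scoring terms are those spiking most relative to their baseline
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# hypothetical sample data: "protest" is new, "coffee" is always common
tweets = ["protest downtown now", "protest at the park", "coffee time"]
baseline = {"coffee": 50, "time": 40, "the": 100}
print(trending(tweets, baseline, top_n=3))
```

Note the design choice baked into even this toy: a perennially popular term ("coffee") never trends, no matter how often it appears. That is exactly the kind of "distinct and motivated choice" Gillespie points to, and one plausible reason a steadily discussed term like #occupywallstreet could fail to trend despite high volume.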
The issue with Occupy Wall Street, according to Gillespie, was that “even as the protests were gaining strength and media coverage, and talk of the movement on Twitter was surging, the term was not ‘Trending.’ Even in cities where protests were occurring and tweets spiked, the term didn’t trend.” This omission led some to suggest that Twitter was deliberately dropping the term from its Trending list.
In the article, Gillespie writes:
Much like taking over the privately owned Zuccotti Park in Manhattan in order to stage a public protest, more and more of our online public discourse is taking place on private communication platforms like Twitter. These providers offer complex algorithms to manage, curate, and organize these massive networks. But there is a tension between what we understand these algorithms to be, what we need them to be, and what they in fact are.
We do not have a sufficient vocabulary for assessing the intervention of these algorithms. We’re not adept at appreciating what it takes to design a tool like Trends — one that appears to effortlessly identify what’s going on, yet also makes distinct and motivated choices. We don’t have a language for the unexpected associations algorithms make, beyond the intention (or even comprehension) of their designers. Most importantly, we have not fully recognized how these algorithms attempt to produce representations of the wants or concerns of the public, and as such, run into the classic problem of political representation: who claims to know the mind of the public, and how do they claim to know it?
Many questions, it seems, remain to be answered as algorithms take on, in the words of the NYU researchers, “mythological proportions” in business and in life.