With Great Platforms Comes Great Responsibility

The openness provided by Facebook, Twitter, Google, and other leading digital platforms is working against them and their users. Everyone – including the companies that created these platforms – needs to find ways to fight against their malicious use.

As the probe into Russian interference in the 2016 U.S. presidential election continues, Americans already have a first look at the breadth and depth of the campaign against our democracy, and at just how effectively Russian influencers wielded our most popular technology and social media platforms against us. Reports of how they leased U.S.-based virtualized computing infrastructure and services to rapidly scale their operations and mask the true geographic origin of their attacks, and of how ruthlessly they exploited social media platforms, show that the convergence of digital platforms with politics has taken a new and alarming turn. So what is the proper response from the companies that design and deliver these powerful platforms? And how should we, the people of the United States, react and change?

We’ve seen how platform technologies enable companies built on intellectual property and network relationships to fundamentally alter our economy and disrupt industries. Innovative, asset-light companies have scaled at a dizzying pace, with some approaching $1 trillion in market capitalization, a valuation that was unimaginable for any single company only a decade ago. Just as we underestimated the value and impact that these companies, including Facebook Inc. and Alphabet Inc., would create, we have also been slow to realize how their platforms enable individuals and organizations to dramatically change our social and political discourse. The U.S. Department of Justice’s recent 37-page indictment against Russian nationals and companies details the activities of the Internet Research Agency LLC, a “Russian organization engaged in operations to interfere with elections and political processes,” with the goal of manipulating social media platforms in order to “spread distrust toward the candidates and the political system in general.”

The activities described in the indictment call into question the strategy and capabilities of leading platform companies to monitor and moderate the use of their products and to enforce defined acceptable uses. A traditional company can relatively easily understand and limit who will use its product or service, and in what ways. But for companies like Amazon, Microsoft, Facebook, Twitter, and Alphabet, the use cases for their platforms are as varied as their user bases of hundreds of millions, if not billions, of people. This is what makes platforms so valuable, and also so dangerous: by their very nature, they use openness and extensibility to push against boundaries and restrictions on what they can be used for.

This openness is both a tremendous strength and a critical weakness. We must find ways to combat the continued malicious use of platforms — and by “we,” we mean some combination of users, regulators, and leaders of these platform companies. Since most of our legislators seem dramatically out of touch with technology, any change will likely come down to user demand and goodwill from platform leaders.

Here’s what we think executives like Mark Zuckerberg and Jeff Bezos should do to demonstrate their willingness to own and correct problems created by their platforms:

1. Adopt a universal end-user license agreement (EULA) akin to “Don’t be evil.” Just as Google sought in the early 2000s to draw a clear line between search results and promoted content, platform companies should develop a strategy for identifying the boundaries of acceptable use of their platforms beyond what is concretely illegal and for determining whether the platform is serving the greater good. Platforms must recognize their responsibility to define good and bad use, and to protect against the latter.

2. Enable stronger insights for flagging. Once the distinction between good and bad use is understood, companies should develop technology that gives leadership deeper insight into the platform’s user base and unobtrusively lets the community identify and understand how and when bad actors are executing an organized, sophisticated campaign of ill intent. Artificial intelligence will clearly be part of the solution, because the size and complexity of these platforms make human observation and intervention impossible at the scale needed. Using AI to gather and analyze data, interpret the results, and recommend a course of action should be a central component of any plan to identify and curb usage that falls outside the boundaries of acceptable use (a minimal illustration of this kind of flagging follows this list).

3. Be open to regulation. Finally, companies should engage in an open dialogue about how these platforms are regulated. World-connecting platforms are becoming monopolies on a scale not previously seen, and in a world where everyone can reach everyone else through them, acknowledging the potential for abuse of that position is the most basic starting point.
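To make the second recommendation more concrete, here is a minimal, purely illustrative sketch of the kind of starting signal such flagging technology might use: clusters of accounts posting identical text within a short time window. Every name, threshold, and data format here (flag_coordinated_accounts, window, min_accounts, the post tuples) is hypothetical; a production system would combine many more signals and trained models rather than rely on a single rule.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # A post record is a (account_id, timestamp, text) tuple. Purely hypothetical;
    # in practice these records would come from a platform's internal event stream.

    def flag_coordinated_accounts(posts, window=timedelta(minutes=10), min_accounts=5):
        """Flag accounts that post identical text within a short time window.

        A toy heuristic only: real systems would combine many signals (shared
        infrastructure, ad targeting, follower graphs) and feed them to trained
        models rather than rely on one rule.
        """
        # Group posts by their exact text.
        by_text = defaultdict(list)
        for account_id, timestamp, text in posts:
            by_text[text].append((account_id, timestamp))

        flagged = set()
        for text, group in by_text.items():
            group.sort(key=lambda item: item[1])
            # Slide over the time-sorted posts; if enough distinct accounts repeat
            # the same text inside the window, treat it as possible coordination.
            for i, (_, start) in enumerate(group):
                accounts = {acc for acc, ts in group[i:] if ts - start <= window}
                if len(accounts) >= min_accounts:
                    flagged |= accounts
        return flagged

    # Example: five hypothetical accounts posting the same message within minutes.
    if __name__ == "__main__":
        base = datetime(2018, 2, 16, 12, 0)
        posts = [(f"account_{i}", base + timedelta(minutes=i), "Vote no on everything!")
                 for i in range(5)]
        print(flag_coordinated_accounts(posts))

A rule this simple would obviously produce false positives (fans quoting the same headline, for example), which is exactly why the article argues for AI that can weigh many signals at once before recommending action.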

As the indictment shows, the Internet Research Agency used the same tools that companies like Facebook and Twitter provide to marketers in order to measure the impact and efficacy of its campaigns to sway opinion and spread its messaging. Examining why and how we use these platforms becomes harder as we grow accustomed to their capabilities and their use becomes almost reflexive. We must all recognize our shared responsibility for helping platforms evolve in a constructive way. While everyone enjoys a laugh at their crazy uncle’s partisan political rant, the knowledge that he may be targeted and used as a pawn in a grander game of political deception is sobering.

Ultimately, the responsibility is on each of us to consider very carefully what we say and do on the platforms we use. That does not absolve the platform creators themselves, however. They too have a part to play in defining and reinforcing good use of very powerful tools. Together, hopefully, we can all not be evil.
