Three Big Points

Should the U.S. Government Slow Its Roll in Technology Regulation?

In his new column for MIT Sloan Management Review — “How Should the Biden Administration Approach Tech Regulation? With Great Care” — author Larry Downes cautions the incoming administration of President Joe Biden not to move too fast or too aggressively in regulating and litigating big tech. Pointing to the benefits of a history of relatively unfettered innovation in the United States and a lack of government expertise in emerging technology, among other factors, Downes argues that all stakeholders are better served by a less-is-more approach to regulation. MIT SMR editor in chief Paul Michelman is not entirely convinced. Downes and Michelman get into the thick of the regulation debate in this week’s episode of the Three Big Points podcast.

For Further Reading
Larry Downes is the author of five books on disruptive innovation’s economic, social, and legal impacts on business, including the New York Times bestseller Unleashing the Killer App (Harvard Business School Press, 1998), with Chunka Mui; Pivot to the Future (PublicAffairs, 2019), with Omar Abbosh and Paul Nunes; and The Laws of Disruption (Basic Books, 2009). You can learn more about his work at his website.



Paul Michelman: Before we get to the show, a quick programming note. We recorded this episode of Three Big Points before the events of Jan. 6 and the various actions taken by social media and major technology platforms that followed. Thank you. I hope you enjoy the show.

Paul Michelman: Look, folks, there’s only one way to curb the runaway powers of Amazon, Facebook, and Google. There’s only one way to prevent technologies like AI and self-driving cars and delivery drones, and the endless array of devices that monitor and record our every facial twitch, heartbeat, and degree of body temperature from taking over our lives, taking away our privacy and our rights as consumers, and otherwise robbing us of what it means to be human. The European Union knows the solution, the U.S. Congress knows the solution, and business leaders know it, too: We need regulation, and plenty of it. The incoming administration of U.S. President Joe Biden should join with governments across the world to step in, break up the monopolies, and implement controls that protect their citizens from the vast manipulations and abuses of big tech.

I’m Paul Michelman, editor in chief of MIT Sloan Management Review. And this is Three Big Points. So let’s get ready to regulate! Right, Larry Downes?

Larry Downes: Gosh, Paul, if I didn’t know better, I’d think that the “T” in MIT stood for something other than technology. I don’t agree. We’ve had a wonderful period of regulation-free innovation, which has generated tremendous value. I wouldn’t want to stop that train now, let alone derail it completely.

Paul Michelman: So, of course, Larry, I was being slightly over-the-top, but clearly, this is a discussion with many stakeholders and many points of view. So first, who are you in this conversation? What is your background? What are your credentials as a voice we should be paying heed to?

Larry Downes: Sure. So I’ve had a lot of careers in my life. I’ve worked as a strategist, a tech entrepreneur, an investor, a lawyer, and an academic. And over the last 10 years in particular, I’ve appointed myself a sort of ambassador between Silicon Valley and Capitol Hill. I’m trying to bridge the gap between innovation and public policy — a mission, I have to admit, that has largely failed.

Paul Michelman: You call yourself a self-appointed ambassador, but there’s more to it than that. You’re actually active in this conversation.

Larry Downes: Yes, that’s right. I actually commute between Silicon Valley and Washington, or I did until recently, about every two months. I testify regularly before Congress, and for the last several years, I’ve been working with the Center for Business and Public Policy at Georgetown’s business school.

Paul Michelman: So Larry, in your new article in MIT Sloan Management Review, you lay out your point of view on how we should approach regulating innovation and technology. Why don’t we walk through the key points in your argument?

Larry Downes: Sure. To start with, we should acknowledge the policy we’ve had, which has been remarkably stable for most of the last 20 years, at least, under both Republican and Democratic administrations. And that has been essentially to leave new tech, disruptive tech, alone: not to try to regulate it, not to try to intervene, not to try to shape it, not even to try to fund it publicly (mostly it’s been funded privately). And that’s worked remarkably well, at least from the standpoint of value creation — how much new value has been generated, the value of companies that have grown up under that regime, and, I think, a lot of the new products and services that we’ve had the pleasure of using over the last 20 years — particularly things like smartphones and obviously new things like drones and self-driving cars and new medical technologies as well.

Paul Michelman: And I think that’s all well and good, right? And it speaks to the importance of allowing innovators to innovate. But surely we can recognize that not all technology innovation has been good. Fair?

Larry Downes: Absolutely. We’re obviously all victims in one way or another of things that have gone wrong. And, you know, a lot of it is what economists refer to as negative externalities. They’re things that happen outside or indirectly from the technology and so aren’t the kinds of things that markets, as a general rule, are particularly good at correcting.

Paul Michelman: So let’s get specific, right? We’re coming out of an election cycle — arguably, [the] second presidential election cycle — where social media loomed incredibly large and powerful, where misinformation and, if we believe the wealth of reporting, foreign agents have been able to operate and manipulate the way we see the world. Surely regulation could have played a role in stemming some of that, yes?

Larry Downes: Well, maybe. Obviously, there’s plenty of regulation already; it’s largely self-regulation. We can certainly argue, and I don’t necessarily disagree, that it wasn’t completely successful or even largely successful. The problem is, what’s the alternative? In any legal discussion, you always want to ask yourself, what is the remedy that you think will solve the problem? And that’s where we immediately get into trouble. The First Amendment in this country places so many limits on the ability of regulators to intervene in any kind of discussion — but, particularly, a political discussion — that the moment you start proposing any kind of control or regulation or oversight of that speech, you run into constitutional problems. But you also run into practical problems. I mean, there are millions, maybe billions, of interactions going on at any time. What possible agency in our government would have the resources, the expertise, or even the ability to keep up with the discussion, let alone to try to moderate it or in some way steer it into healthier channels?

Paul Michelman: Well, I think a lack of resources or expertise seems like an addressable issue. If there’s a greater public good, shouldn’t we focus on building the expertise and developing the resources?

Larry Downes: Well, then the question becomes an institutional one. Yes, I agree with you. The question is, where do you want to build that expertise? Do you want to build it inside governments, which typically move very slowly and are, of course, themselves very much subject to political pressures? We’ve certainly seen that in the last four years. Or do you want to build it into the ecosystem itself? You want to make sure that there are incentives, and indeed penalties, for the market, for private providers, if they don’t build that kind of capacity themselves — hopefully with a more responsive, quicker, and more technology-based set of solutions.

Paul Michelman: How might that look, say, in the case of Facebook?

Larry Downes: So, you know, Facebook, as I say in the piece, has been congenitally mismanaged. I think from birth, it’s been a very poorly run company — probably not our best poster child for how this could be done. But they have been doing a lot, especially over the last few years, to build up automated technology, as well as human moderation, to try to find these bad pages, the hate speech, the foreign intervention, all of the stuff that’s truly garbage, and root it out. Now, you could say they haven’t done it fast enough or done a good enough job. But they certainly have filtered out quite a bit of bad stuff. We don’t know how much worse it could have been in this last election cycle if they hadn’t done what they’ve already done.

Paul Michelman: So let’s pull back and look at some of the higher-level, let’s call them philosophical, issues here. History does show, I think, that every major technological advancement has ultimately improved the human condition, which is an argument in favor of letting innovation blossom. But does that necessarily mean we have to accept the short-term costs of that long-term improvement? Carl Frey argued this point in his book The Technology Trap. The issue is especially resonant given the strong evidence that the value of new technology is not evenly distributed, especially in the early stages of a major transition, such as the one we’re in. Regulation doesn’t necessarily prevent innovation. It can help provide a pathway to best unleash its power for good, can it not?

Larry Downes: Right. And, you know, there’s a reason why Schumpeter described this as creative destruction. It is true that in the early stages of these kinds of innovations that we’re seeing — that transform industries, that transform life as we know it — it’s very messy at the beginning. There is a lot of fallout; there is a lot of uneven distribution of the value. The problem, however, with the regulatory solution is, again, what kind of a remedy can you propose? We’ve designed our governments to be deliberative and slow for very good reasons — that’s all the checks and balances in the U.S. system, in the Constitution — because we don’t want to regulate in the middle of a crisis. We don’t want to make decisions based on emotions. We want to slow the process down.

And in most situations that’s a good thing. It works very well. The problem is it doesn’t work well when things are changing quickly. And in some ways, what you’re really seeing here is kind of the struggle between traditional law and Moore’s law — Moore’s law being that wonderful principle that computing power continues to double every couple of years. That’s really what’s introducing so much chaos and uncertainty. And the problem is that traditional governments are extremely poor in responding to that level of change in terms of its pace, in terms of its uncertainty, and in terms of its trajectory. Now, that’s not to say that even in the short term, you can’t regulate. In fact, all these companies are regulated in ways, you know, that traditional companies are in terms of taxation, health, safety, and all the things that sit on top of any company, regardless of what kind of thing it’s doing. But it’s when you start to talk about specific regulation of specific products and services or specific technology that I think government is very poorly suited, especially in the short term, to do anything that won’t make things actually worse.

Paul Michelman: So I think there really kind of are two parts to this conversation, right? There is the role of regulation or lack thereof with respect to allowing companies to produce new innovations and potentially disruptive innovations. That’s one. The second part of the conversation is about how free these companies are to exploit the full value of their innovations over time, right? The monopoly part of this conversation. And, you know, is there not a reasonable argument that Facebook and Google and Amazon, to name three, but others too, pursue monopolistic business strategies by exploiting the size and wealth they’ve built on the back of their disruptive innovations and using their footprint to block new entrants, exploit business partners, copy and expand other companies’ strategies, and therefore restrict consumer options? There’s certainly a wellspring of government and media activity in support of that argument.

Larry Downes: And there’s certainly a lot of activity outside the U.S., particularly in the European Union, pursuing that theory. The problem is it does not jibe with the U.S. approach to unfair competition and antitrust, at least not over the last 50 years. We’ve had a standard, supported not only by statute but by case law as well, called the consumer welfare standard, which says, “We don’t define monopoly explicitly in terms of market share or any other numerical criteria. We define monopoly in terms of its impact on consumers.” Monopoly pricing — a company gaining control of a market and then suddenly raising prices — is the classic indication of an antitrust problem under that standard. But a lot of the products and services we’re talking about here from the companies you named aren’t charged for; at least we don’t pay monetarily to use things like YouTube and Facebook and Google Search. And on Amazon, prices actually go down — if you’re shopping on Amazon, things get cheaper. Under that sort of economics, U.S. antitrust law, at least as it’s been practiced for the last 50 years, just doesn’t recognize a harm.

Paul Michelman: Is there a B2B argument here? Is there an argument that upstart businesses can mount, basically saying that they cannot compete, even if there is no overt and obvious consumer harm?

Larry Downes: Yes, it’s a much more limited argument, and many have made it, certainly in the U.S. The Justice Department over the last four years, even going back into the Obama administration, tried to bring several cases like that, arguing that some of these tech companies were excluding competitors or were, in fact, conspiring to keep them out of the market. I have to say, not a single one of those cases has succeeded in the courts. There have been some settlements, monetary penalties and fines, and small changes to behavior have been agreed to, but by and large, the Justice Department just does not have antitrust law to work with that recognizes harm to competitors, as opposed to harm to consumers.

Paul Michelman: I think a counterargument here is that the speed with which the new big tech has kind of gone from nascent idea to worldwide juggernaut, the speed with which the major tech players have kind of invaded seemingly all facets of our lives, is unprecedented. And perhaps a historical view of regulation, particularly with respect to potential monopolistic business practices, doesn’t work here. At some point, don’t we have to recognize that certain elements of what we’re seeing today are really unprecedented and demand, perhaps, a point of view that is not purely historical in nature?

Larry Downes: Well, it’s funny — I have an article taped up on my desk from a few years ago, and it’s from the newspaper The Guardian, and the headline is “Will MySpace Ever Lose Its Monopoly?” And it’s not ironic. I mean, at the time this article was written —

Paul Michelman: Touché.

Larry Downes: At the time the article was written, MySpace was the dominant social media platform. Of course, it was backed by Rupert Murdoch and, you know, this was a company that had every reason to believe it was going to become a worldwide juggernaut, as you described. Look, it’s certainly possible that we have entered the “end of history” in terms of how tech and markets interact. But we should remember that Google had to beat out a lot of other search engines that at the time looked like they were the ones that were going to be with us forever. Now, of course, we don’t even remember their names. So it is possible. My belief, and it’s really almost an artifact of religion more than anything else, is that the way to discipline markets dominated by a few technology providers is with more technology — that, you know, sort of new generations of tech, new opportunities, new potential, will lead to new startups that will do things differently and that, in fact, will displace even the ones that today seem undisplaceable. I could be wrong about that, but in the meantime, it’s, I think, very dangerous to start tinkering with the machinery — unless you actually know how the machinery works and, as I said earlier, you’ve got a remedy that you think will do it in a way that won’t make things worse than they already are.

Paul Michelman: OK, Larry. So if what appears to be the prevailing wisdom of both U.S. political parties, and certainly of governments across the EU — that we need much more aggressive regulation, legislation, and litigation — is not the right approach, what is? Can you spell out what you think is the more appropriate, more nuanced approach we should be taking?

Larry Downes: Sure. It’s not new by any stretch. It really is baked into any kind of deliberative government, and particularly our constitutional system: We take things slowly when it comes to making regulation that applies to specific business practices or specific products or specific companies. So we would start by calming down the rhetoric. Congress loves to stand up and yell and shout about perceived problems with the technology, but a lot of this is rhetoric. We should collect actual evidence of actual harms, whether to consumers or to competitors or to the political process itself, analyze that evidence in a very traditional cost-benefit approach, and then start to look, as I said before, for what we can do. What kind of remedies can we design? What kind of regulators will have the capacity and the ability to fix those market failures that aren’t fixing themselves, and to do so in a way that won’t introduce costs and problems that are actually more expensive than doing nothing (which, of course, is always an alternative remedy)? If you can’t make it any better, then don’t do anything.

There’s an old joke about emergency room doctors on their first day; they’re told, “Don’t just do something, stand there.” Because until you actually know what’s going on, the likelihood is much greater that you’re going to do something bad than something good. I’m not saying that it’s not possible, but right now, you know, government, typically Congress — you’ve heard the hearings when they talk to social media companies or other tech companies, they don’t even understand the name of the company or the product. They have obviously no idea how it works or who’s using it and what they’re doing with it. That is not a good foundation for careful and deliberative analysis of problems and the design of effective solutions. So, you know, one short-term fix is, bring back something called the Office of Technology Assessment, which went out with the Contract With America for no other reason than it was just an easy line item to strike. That was an office that really helped Congress understand how technology worked. And in some sense, we’ve needed it more than ever during the period when it’s been gone. There have been some calls by some platforms to bring back the OTA; if nothing else, that would be a really good first step.

Paul Michelman: So let’s call that one “keep calm and learn.” So, what’s the second point in your approach?

Larry Downes: So the second point is to let the technology regulate itself wherever possible. And again, there’s a lot of bad press about self-regulation, or what some refer to as “soft law” — industry standards, trade associations, or things of this nature — setting best practices, determining when things aren’t working. But actually, there’s a long history of that working, even in this sector, even in high tech. We have a lot of examples, and again, they’re not perfect by any means; the incentives aren’t always best aligned, particularly when there are these externalities involved. Sometimes we don’t see them. Certainly, that’s true in terms of climate change. But wherever possible, let the technology do its own regulation and wait, as I said before, for new generations of disruptive innovation to displace today’s giants or at least to discipline them to behave in ways that are more effective.

Paul Michelman: All right, Larry. So point 2 is let the technology regulate where possible, which leads to point 3.

Larry Downes: You know, again, we’re not going to be completely supine here. We are going to intervene. We’re going to intervene when we have actual evidence that the market has failed and is not correcting itself, and that’s, of course, the time when regulators are best suited to intervene. Now, the problem here is that the first time we’re going to hear about this is when the kind of incumbent competitors go to the regulators and say, “Wow, this new thing just showed up,” whether it’s Uber, or blockchain, or cryptocurrencies if you’re in banking. And they’re going to say, “This new technology, it’s not regulated, it’s messing with my business. We want you, our traditional regulator, we want you to stop it. We want you to slow it down or ban it or regulate it the way you regulate us, because we don’t really know how to compete with that, and rather than figuring it out, we would rather have a regulatory solution.” So there’s a caveat here. We do want to intervene when markets have truly failed, but we don’t want to do it just because the incumbents are pressuring the regulators to do it.

Paul Michelman: And your fourth point.

Larry Downes: Again, it’s really the approach that we’ve embraced over the last hundred years with the administrative state — these expert regulatory agencies. We give them a charter, we give them a domain, and then we have a process by which, through careful evidence collection, through expert testimony, through public notice and comment, they come up with the rules. Now, again, it’s not perfect, because even though they’re largely insulated from the political process, there is always the temptation for the administration or for Congress, which share authority over these agencies, to tweak the system for political reasons. But the Constitution does protect them, in a large sense, from those kinds of interventions. And by and large, I think that has worked. It’s expensive, it’s slow — slower than we want it to be — and maybe it is becoming increasingly political, but it’s mostly worked. And certainly, I think it’s the basis for any kind of expansion of the regulation of disruptive innovation.

Paul Michelman: So we shouldn’t regulate until harm has been demonstrated. When we do regulate, it should be focused very specifically on the measurable harm. And then that brings us to number five: Even then, it should be temporary.

Larry Downes: Yeah, we have so many laws and regulations on the books. Literally, if you look at the Code of Federal Regulations — the codified embodiment of all the regulations from all the agencies — the number of pages has gone up exponentially over the last couple of decades. One of the problems is that we don’t set sunset provisions. Some regulations and laws do have them. A sunset provision doesn’t mean a law necessarily goes away; it says, “If you don’t renew this in two years or five years, it will sunset — it will go away.” That’s a technique I think is especially appropriate when we’re talking about innovative technologies. We’ve got laws on the books dealing with spam, dealing with computer security, that are so outdated, so obsolete, that they can cause unintended harm just by being there; they can be misused. Better to let them go away. And if we still haven’t solved the problem — which, in the case of spam, for example, we certainly haven’t — then we just start over. We need new laws to deal with the new reality: the new technology, the new way in which spam is delivered and created and abused. The old law just can’t handle it and never could. But rather than leave it on the books, potentially to pop up in unexpected and unhelpful ways, let’s set it to sunset and start over with a fresh slate.

Paul Michelman: Larry, I will say that I think you present a very cogent argument. I think plenty of listeners will not fully agree, as you know, right? I think you make some very good points. I’m not totally there. I think there probably is a role for somewhat more aggressive involvement by government, given the state of where we are, but you state your position very well, and you certainly have an incredible voice in this conversation, and I thank you. All right, Larry, three big points for business leaders on tech innovation and regulation for 2021. Number one.

Larry Downes: There are key technologies, including drones, autonomous vehicles, and blockchain, that are already being discussed and considered for regulation. How they ultimately are regulated could have a large impact on your industry and how it changes.

Paul Michelman: Number two.

Larry Downes: Number two is to reengage with trade groups and industry groups, and make sure that they’re following not just traditional competitors but new entrants and new technologies, so that, if those new technologies are going to affect your industry, you have a voice in how they are regulated.

Paul Michelman: And number three.

Larry Downes: Number three is I think we can learn from the pandemic. We’ve just had kind of a natural experiment in how tech can be used in a crisis. And a lot of things we learned, both good and bad, about that should have a big influence on how we want to regulate tech in the next decade.

Paul Michelman: And make sure to check out Larry’s new article, “How Should the Biden Administration Approach Tech Regulation? With Great Care,” in MIT Sloan Management Review. That’s all for this week’s Three Big Points. Remember, you can find us on Spotify, Apple Podcasts, Google Podcasts, Stitcher, TuneIn, and wherever fine podcasts are streamed. If you’d like to support our show, please post a rating or a review on whatever podcast platform you prefer. Three Big Points is produced by Mary Dooe. Music by Matt Reed. Marketing and audience development by Desiree Barry. Our coordinating producers are Michele DeFilippo and Mackenzie Wise.

