AI, corporate power and democracy
Last week, there was a controversy on Twitter about Amanda Askell, a philosopher working at Anthropic where she’s in charge of alignment, after the Wall Street Journal published a profile of her. Some people think she isn’t the right person for the job, on the grounds that she has unusual moral views and that it’s wrong for someone with such views to decide what moral principles should be hard-coded into models that may soon have a profound influence on society (this argument was often mixed with various ad hominem attacks on her). She responded by arguing that she tries not to inject her own views into her work on Claude. I don’t personally know Askell and I’m not familiar with the details of her work at Anthropic, but I’ve been following her on Twitter for several years and she seems perfectly decent, so I don’t want to join her detractors and argue that someone else should be in charge of alignment at Anthropic. However, I think that beyond her personal case, this controversy raises, if only indirectly, a serious concern about AI that I suspect will soon become a major political issue and that people should think about seriously. The problem is that, as some of the arguments made against Askell hinted at, there is a tension between the way in which AI companies seem poised to impact society and democratic principles.
The observation that large corporations are hard to square with democratic theory isn’t new. It was most famously made by Charles Lindblom in Politics and Markets, a book he published in 1977, which though largely forgotten today made a huge splash at the time.1 Lindblom points out that, although in practice elected officials have wide latitude to govern as they see fit, because ordinary people have neither the knowledge nor the time to attend to public affairs consistently and most of them don’t even have the desire to do so in the first place, in a democracy voters can still replace them periodically. This indirectly gives voters a degree of control over government, because it creates incentives for politicians not to stray too far from popular preferences. I think Lindblom may have overestimated that mechanism and failed to fully appreciate the degree to which politicians can ignore popular preferences even on politically salient issues, as long as there is a strong enough consensus among the elite, but he still understood that at best it gave people a very indirect way to control the government.
In theory, people have much tighter control over corporations, because as consumers they have veto power over their products, since they are not forced to buy them, whereas they are forced to abide by government decisions even when they disagree with them. If corporations ignore people’s preferences, so the argument goes, they will go out of business. Thus, while people can’t choose the leadership of corporations in the way they can choose political leaders, they still exercise a lot of power over them and the decisions they make through their ability not to buy their products. This fact is supposed to reconcile the freedom that corporations enjoy in a market economy with democratic principles. In a market economy, which Lindblom calls a market system, a large category of decisions with enormous consequences for the public is effectively turned over to businessmen and taken off the political agenda, as the law protects corporations from interference by the state in their decision-making.2 Lindblom says that in practice businessmen are a kind of public official, but unlike political leaders, people can’t vote for them. This would be democratically problematic if it meant that people had no control over their decisions, but as we have seen, they are supposed to control them through their power as consumers.
The problem is that, as Lindblom explains, this story doesn’t really hold up. His main argument is that, even in a perfectly competitive sector, there is no obvious way to infer a profit-maximizing strategy from the behavior of consumers unless you also wish uncertainty out of existence. Whether consumers keep buying a company’s products or stop doing so, their behavior doesn’t tell businessmen what they should do to make sure that consumers keep buying those products or start buying them again. For instance, when Steve Jobs decided to create the iPhone, there was no way to know that it would be successful. In general, from the fact that consumers have veto power over the products of corporations and that corporations have to maximize their profits if they want to stay in business, it doesn’t follow that businessmen face a single correct strategy.3 This means that, at the end of the day, the control that people indirectly have over businessmen’s decisions through their veto power over their products is very loose, and it leaves businessmen with even more discretion than political leaders have, despite the fact that as consumers people exercise their veto power continuously whereas as citizens they only do so every few years during elections. In particular, even in a world with no market imperfections, strategic choices would remain almost entirely at the discretion of businessmen.
But as we have seen, those decisions can have enormous consequences, not just for corporations, their employees and their customers, but also for the rest of society. For instance, when a company decides to abandon a product line and close the factories that made the products in question, it may economically destroy entire communities even though it may not actually improve profitability in the long run. To be clear, Lindblom understood that the protection from state interference that corporations enjoy in liberal democracies was one of the reasons why market economies have historically been good at delivering broad-based prosperity relative to other systems, so he was not trying to make the case for a planned economy.4 One may well argue that, because of the tension between corporate power and democratic principles, we should abandon the market system and switch to a system that gives people more control over corporate leaders and the decisions they make, but that conclusion doesn’t logically follow from Lindblom’s argument and he wasn’t making it. He was just pointing out that democratic institutions systematically discipline political leaders but not corporate leaders, even though both contribute to determining social outcomes.
In other words, even though collectively the leaders of large corporations have a degree of influence on society that is comparable to that of political leaders, and arguably even larger, their decisions are not subject to any kind of effective democratic control. There is a pragmatic justification for allowing businessmen to enjoy that kind of unfettered power, namely that, again, it’s one of the reasons why market economies are relatively efficient and can deliver broad-based prosperity in a way other systems can’t, but that justification has nothing to do with democratic principles and, as we have seen, it’s even at odds with them. People just don’t see that contradiction because they have largely internalized that this is how liberal democracy works. It doesn’t change the fact that, just like political leaders, corporate leaders make decisions that don’t just affect people through transactions they have freely agreed to, but affect entire communities and in many cases even society at large. This is why Lindblom famously concluded his book with the words: “The large private corporation fits oddly into democratic theory and vision. Indeed, it does not fit.”
Which brings me to AI companies. As we have just seen, the existence of a tension between democracy as people imagine it should work and large corporations isn’t new, but if AI companies are right about the impact their work is going to have on society, then in their case that tension is going to reach unprecedented heights. Just think about it for a second. They are promising that, within a few years, virtually every white-collar job will have been automated and that artificial super-intelligence will replace even the smartest and most knowledgeable humans. While manual workers may be safe at first, this won’t last, because artificial super-intelligence will come up with robots that can automate even physical tasks. Not only will labor cease to be a bottleneck, but humans will become economically obsolete. Now, you may disagree with that narrative and think it’s unrealistic, but that’s what AI companies themselves are saying. Moreover, even if the timeline for AI-induced disruption turns out to be more spread out than people at AI companies believe and artificial general intelligence is still more than a decade away, I think at this point there is little doubt that AI will result in massive social and economic disruption.5
For what it’s worth, I personally think that AI companies overestimate how fast adoption will be, not only because I’m still not sure that artificial general intelligence is really just a few years away, but also because I think they underestimate the social, cultural and political obstacles that will slow down adoption. For instance, a lot of people are paid to do a job not just because they have the necessary know-how, but also because they can be held legally responsible for what they do. This means that, even once it has become technically possible for AI to automate those jobs, actually moving toward full automation will probably require a pretty extensive overhaul of current law. AI may actually help with that, since sorting through current law and figuring out what changes are required is precisely the kind of thing I expect it to be pretty good at doing, but the bottleneck will be politics, and AI isn’t going to speed that up because it won’t magically prevent different people from having different interests. That’s one of the reasons why, even though I’m pretty bullish on AI, I still think there is a pretty high probability that it’s a bubble, in the sense that revenue will not grow fast enough to justify the enormous capital expenditures AI companies are making now.
But this won’t make the technology or its consequences disappear and, while I’m very uncertain about what the actual timeline will be, I have no doubt that it will result in massive social and economic disruption eventually. I also think that, although many people say that we’ve been through technological revolutions before and that they never made humans economically redundant, there are good reasons to think that “this time it will be different”, a phrase that is often used derisively but that may actually be true of AI. I think it’s quite possible that, down the line, AI will change human civilization in unprecedented ways, and I also think that it will probably do so relatively quickly by historical standards. It might be like going through the Neolithic revolution and the industrial revolution at the same time, within a few decades or perhaps even a few years on the fastest timelines. The very notion of what it means to be human may change. In fact, if we end up creating artificial super-intelligence, the human race may not even survive that development. I know it sounds preposterous to most people, but as Richard Chappell recently argued, it’s not so easy to escape the conclusion that we should take AI safety seriously. The same is true for the hypothesis that AI will make humans economically redundant.
It’s hard to see how one could dismiss that possibility unless one assumed that we’ll never create artificial general intelligence, which may be correct in the short to medium term, but I think is surely wrong in the long term and may well be wrong even in the short to medium term. Even if people in the field are wrong that we are just a few years away from artificial general intelligence, unless you think that it’s in principle impossible for a machine to exhibit human-like intelligence (which indeed seems to be what many people think, based on what I regard as deeply confused arguments), it will happen sooner or later. When it does, the questions that people ask right now, about how to ensure that AI is aligned and what this even means, how to deploy it so as to minimize economic transition costs, or even whether we should really pursue artificial general intelligence in the first place, will probably rank among the most important questions our species has ever had to answer. And the truth is that, along with a few politicians, they will for the most part be answered by a handful of people at AI companies.6
Now, if Lindblom was right that the power of large corporations to wreak economic devastation upon entire communities was inconsistent with democratic principles, then surely that’s even more true of the power of AI companies to make humans economically and intellectually redundant, and perhaps even to cause the extinction of the human race. Again, just as Lindblom wasn’t drawing the conclusion from his argument that we should replace the market system with a command economy, I’m not arguing that we should nationalize AI companies or even shut them down. I’m just pointing out that, if or when AI companies manage to create artificial general intelligence (which again may take much longer than most people in the industry assume), the gap between the mythology about how democracy is supposed to work and the unprecedented influence that a handful of people will have had on human civilization will be hard to ignore. In practice, the kind of inconsistency between the power of large corporations and democratic principles that Lindblom was talking about was not a serious threat to liberal democracy, because it took a lot of reflection to even notice it. But if virtually every white-collar job is automated within 2-3 decades because of AI and, with only a relatively short time lag, robots start coming for blue-collar jobs as well, it will probably be a different story.
The irony is that most anti-AI people, especially on the left, can’t really make that argument, because they spend most of their time trying to downplay the achievements of AI companies and the impact that AI will have on society. It’s hard to make the case that it’s unacceptable for a handful of people at AI companies, over whom nobody else has any control, to make decisions that may completely revolutionize society and may even bring about the end of the human race, if you insist that large language models are nothing more than “stochastic parrots” and that no machine will ever truly be intelligent or creative. As anyone who follows me on Twitter knows, I’m very pro-AI and I regularly criticize this kind of argument, but I can see where things are headed politically, and that’s precisely why I think it’s important that people who share my enthusiasm for AI start thinking seriously about how we’re going to deal with the disruption it will unleash, because otherwise anti-AI politics will become a very powerful force.7 However, this essay was not about that, except perhaps indirectly, but about how AI may well bring to the fore the tension between the power of large corporations and democratic principles that Lindblom was talking about.
I think that, contrary to the picture that democratic mythology paints, democracy doesn’t abolish the distinction between the people who govern and the people who are governed in any meaningful way, because that’s impossible.8 In other words, I think it’s less a mode of government than a mode of legitimation, just like the doctrine of divine right in 17th-century Europe or the idea, with its associated institutions, that the party was the avant-garde of the proletariat in the Soviet Union. But just as with those other modes of legitimation, for it to work, people still need to believe in the mythology to some extent. The point I was trying to make in this essay is that, if AI really causes massive social and economic disruption over a relatively short period of time, the incompatibility between the power of large corporations and democratic principles, which up until now was hard to grasp and easy to ignore, may become so obvious that people stop believing in the mythology on which democracy rests to bestow legitimacy on the government. It may be that, just as magic deserted the thrones and kings became men after the French Revolution, the epoch-making changes that a few people working at AI companies will unleash on the world in that scenario will destroy any illusions people may still harbor about the control that democracy gives them over the decisions that shape society.
Of course, if AI really makes humans economically and intellectually redundant eventually, the fact that AI companies will make the tension between corporate power and democratic principles harder to ignore will be neither the only reason why democracy may become unsustainable nor even the main one. Once we have developed artificial super-intelligence, even if we somehow manage to keep it under control, it will be very hard to resist the temptation to delegate a lot of decisions to it. In a way, something like that is already operating in liberal democracies today, because technocrats have been given a lot of power to make important decisions and various mechanisms have been created to insulate their decision-making from democratic pressure. But I think it’s fair to say that artificial super-intelligence would take that logic to a completely different level. We tend to think of liberal democracy as natural and assume it will be eternal, but nothing is eternal and, in general, both morality and political institutions are largely downstream from technology. Perhaps AI will do to democracy what gunpowder did to feudalism.
I’m only going to talk about one aspect of the book, which is what it’s mainly known for, but it’s a fantastic book on the interplay between democracy and markets that I strongly encourage you to read if you’re interested in democratic theory. It’s a pity that it has been reduced to a caricature of an anti-corporation argument, because not only is there nothing caricatural about Lindblom’s views on the influence of corporations in liberal democracies, but there is a lot more to the book than that.
Of course, even in a market economy, this protection is not absolute. Corporations are still subject to various legal regulations and, more generally, the decisions of businessmen are shaped by the political and institutional environment, but on the whole they enjoy a remarkable degree of freedom to make the decisions they want, especially relative to societies where similar legal protections don’t exist.
Lindblom also points out that, while consumers have a veto power over the products that corporations try to sell them, this power is not as absolute as it may seem because corporations can influence people’s preferences through advertising, but I think that argument is weaker and, perhaps more importantly, it’s irrelevant to the point I want to make in this essay.
That Lindblom’s argument on the tension between democracy and corporations didn’t imply that we should replace the market system was made explicit in “The Market as a Prison”, a short paper he published in 1982 that I also strongly recommend, especially if you have already read Politics and Markets but even if you haven’t. In this paper, he clarifies that his goal in Politics and Markets was merely to point out that tension, not to make a case against the market system.
To be clear, I’m not saying that any one of them will have a decisive influence, but that a small number of people will collectively determine how we answer those questions.
As I explained above, I’m skeptical that AI will make humans economically redundant across the board as quickly as people at AI companies think, but I have no doubt that it will cause serious disruptions in many sectors of the economy within the next few years. That will be enough to fuel a powerful anti-AI movement, especially since white-collar workers, who are better connected and have more influence than blue-collar workers, will be the most affected group at the beginning.
I plan to write a post to present my theory of democracy in more detail, but there are several things I have to finish before that, so I don’t know when I will have time.
