Capitalism relies on competition. In practice, however, this core principle is often violated, because ambitious capitalists will naturally seek to eliminate competition and secure a commanding market position from which they can keep would-be competitors at bay. Success, in this respect, can make you rich and establish your status as a visionary; but it can also make you feared and hated.
Hence, China – arguably one of the most successful market economies of the twenty-first century – has been waging war against its own tech giants, most notably by “disappearing” Alibaba Group co-founder Jack Ma from the public stage after he criticized Chinese financial regulators. At the same time, the Europeans, deeply worried that they lack a Big Tech sector of their own, have focused on enforcing competition (antitrust) policies to limit the power of giants like Google and Apple. And in the United States, Big Tech’s political allegiances (to both the “woke” left and the “red-pilled” right) have become focal points in the country’s corrosive culture wars.
It is only natural to worry about the market power and political influence of such massive – and massively important – corporations. These are companies that can single-handedly decide the fate of many small and even medium-size countries. Much of the debate about corporate influence is rather academic. But not so in Ukraine, where private-sector technology has played a decisive role on the battlefield over the past year.
Thanks to Elon Musk’s SpaceX Starlink satellite internet service, the Ukrainians have been able to communicate in real time, track Russian troop movements, and radically improve the precision of their strikes on enemy targets (thus saving precious ammunition). Without Starlink, Ukraine’s defense probably would have crumbled.
But given the capriciousness of would-be corporate dictators, such technological dependencies are inherently risky. Last October, Musk used his ownership of Twitter to stage a virtual “referendum” on a half-baked peace plan that would cede Crimea to Russia. When Ukrainian diplomats objected, he petulantly threatened to cut off Starlink (and for some time, access was indeed lost in contested areas).
Paradoxically, the new debate about corporate power comes at a time when competition between tech companies is intensifying. By its very nature, radical technological change introduces radical uncertainty, especially for existing corporations and business models. New, apparently transformational breakthroughs in artificial intelligence could render even the most powerful tech giants obsolete if they fail to keep pace with innovation. Until this year, the dominance of Alphabet’s Google search engine was unquestionable; now, the service is suddenly at risk of being overtaken by OpenAI/Microsoft’s ChatGPT. Facebook and Twitter used to be regarded as indispensable social-media platforms; now, they are quickly being eclipsed by newer ones, such as TikTok.
These developments should not come as a surprise. In the annals of business history, failure is far more common than lasting success. Remember Kodak? Its days were numbered when it failed to adapt to the arrival of digital photography. The oldest companies in the world are those with a niche in localized, nontechnical sectors that do not depend on passing fashions. Unless you occupy such a niche – like a Japanese sake producer or a Tuscan winemaker – you are not safe.
Faced with the abiding threat to their existence, large companies generally have two strategies at their disposal. The first is to block or frustrate further innovation by claiming that it will be dangerous and destabilizing. For example, in the twentieth century, big railroad companies lobbied aggressively against automakers’ demand for highways.
Today, the stakes are much higher, and the rhetoric is more overblown. Some leading figures in the tech world are warning that without stringent AI regulations, the latest innovations in the sector could bring about civilizational collapse. This was one of the messages of the widely circulated AI moratorium letter signed by AI researchers and tech icons like Musk, who was later revealed to have invested in a new startup that will compete with OpenAI.
According to this narrative, today’s rapid progress could lead to an artificial general intelligence that is so powerful and so unpredictable that humanity might unwittingly end up at its mercy. Science-fiction writers (and some philosophers) have long articulated such scenarios. If you task a superintelligence with protecting the environment, it might well decide that the obvious solution is to eliminate the source of the problem: humans.
Or perhaps an AI would simply pursue its assigned task so monomaniacally that it would be unstoppable, as in Goethe’s poem “The Sorcerer’s Apprentice.” Such arguments reflect the general mood of anxiety that is characteristic of any age of rapid change. The example of the nineteenth-century machine breakers, the Luddites, always has a certain romantic appeal.
The second option for an anxious tech elite is to seek government protection by conjuring up risks to national or economic security. Microsoft Vice Chair and President Brad Smith, for example, warns that since training AI systems requires such massive investments, there are really only a few institutions that can do it, and chief among them are Chinese ones like the Beijing Academy of Artificial Intelligence.
Both strategies involve formulating a narrative that can secure a political backstop against market competition. Companies that are inherently endangered – because they are engaged in high-stakes wagers with unknowable outcomes – will always call on the political process in big countries to protect them. Whether by adding to the regulatory burden on new entrants or creating barriers against foreign competitors, they want to preserve the status quo.
We should keep these natural tendencies in mind, especially now that the pandemic and rising geopolitical tensions have created a new impetus for technological innovation. As always, technological change will be deeply disruptive and generate new winners and losers. Many commentators (and interested parties) will inevitably fixate on the dangers. It is ironic, but hardly novel, that the new narrative of techno-pessimism is being promoted most loudly by those at the forefront of yesterday’s innovations.
Project Syndicate