AI is entering an era of corporate control


A new report on AI progress highlights how state-of-the-art systems are now the domain of Big Tech companies. It’s these firms that now get to decide how to balance risk and opportunity in this fast-moving field.

[Illustration of a cartoon brain with a computer chip imposed on top. Credit: Alex Castro / The Verge]

An annual report on AI progress has highlighted the increasing dominance of industry players over academia and government in deploying and safeguarding AI applications.

The 2023 AI Index — compiled by researchers from Stanford University as well as AI companies including Google, Anthropic, and Hugging Face — suggests that the world of AI is entering a new phase of development. Over the past year, a large number of AI tools have gone mainstream, from chatbots like ChatGPT to image-generating software like Midjourney. But decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players.

The AI Index states that, for many years, academia led the way in developing state-of-the-art AI systems, but industry has now firmly taken over. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” it says. This is mostly due to the increasingly large resource demands — in terms of data, staff, and computing power — required to create such applications.

In 2019, for example, OpenAI created GPT-2, an early large language model, or LLM — the same class of application used to power ChatGPT and Microsoft’s Bing chatbot. GPT-2 cost roughly $50,000 to train and contains 1.5 billion parameters (a metric that tracks a model’s size and relative sophistication). Skip forward to 2022 and Google created its own state-of-the-art LLM, called PaLM. This cost an estimated $8 million to train and contains 540 billion parameters, making it 360 times larger than GPT-2 and 160 times more expensive.
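Those multiples follow directly from the figures above. A quick back-of-the-envelope check (using the article's reported numbers; the training costs are third-party estimates, not official figures):

```python
# Scale comparison between GPT-2 (2019) and PaLM (2022),
# using the parameter counts and estimated training costs cited above.
gpt2_params = 1.5e9       # GPT-2: 1.5 billion parameters
gpt2_cost = 50_000        # ~$50,000 estimated training cost
palm_params = 540e9       # PaLM: 540 billion parameters
palm_cost = 8_000_000     # ~$8 million estimated training cost

size_ratio = palm_params / gpt2_params   # 360.0 — PaLM is 360x larger
cost_ratio = palm_cost / gpt2_cost       # 160.0 — and 160x more expensive
print(size_ratio, cost_ratio)
```

Note that cost grew much more slowly than parameter count here — a roughly 360-fold larger model for a roughly 160-fold higher price — reflecting efficiency gains in training hardware and methods over those three years.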

AI development’s increasing resource requirements firmly shift the balance of power toward corporate players. Many experts in the AI world worry that the incentives of the business world will also lead to dangerous outcomes as companies rush out products and sideline safety concerns in an effort to outmaneuver rivals. This is one reason many experts are currently calling for a slowdown or even a pause in AI development, as with the open letter signed last week by figures including Elon Musk.

The report’s authors note that as industry players take AI applications mainstream, the number of incidents of ethical misuse has also increased. (The AI, Algorithmic, and Automation Incidents and Controversies Repository, an index of these incidents, notes a 26-fold increase between 2012 and 2021.) Such incidents include fatalities involving Tesla’s self-driving software; the use of audio deepfakes in corporate scams; the creation of nonconsensual deepfake nudes; and numerous cases of mistaken arrests caused by faulty facial recognition software, which is often plagued by racial biases.

As AI tools become more widespread, it’s no surprise that the number of errors and malicious use cases would increase as well; by itself, it’s not indicative of a lack of proper safeguarding. But other pieces of evidence do suggest a connection, such as the trend for firms like Microsoft and Google to cut their AI safety and ethics teams.

The report does note that interest in AI regulation from legislators and policymakers is rising, though. An analysis of legislative records in 127 countries found that the number of passed bills containing the phrase “artificial intelligence” increased from just one in 2016 to 37 in 2022. In the US, scrutiny is also increasing at the state level, rising from five such bills proposed in 2015 to 60 AI-related bills proposed in 2022. Such increased interest could provide a counterweight to corporate self-regulation.

The AI Index report covers far more ground than this, though. You can read it in full here or see some selected highlights below:

Private investment in AI decreased for the first time in a decade. Global private investment in AI had been climbing for years but fell 26.7 percent from 2021, to $91.9 billion in 2022.
Training big AI models has environmental costs. A 2022 paper estimates that training a large AI language model called BLOOM emitted 25 times as much carbon as that of flying one passenger from New York to San Francisco and back. By comparison, OpenAI’s GPT-3 was estimated to have a carbon cost 20 times that of BLOOM.
AI can potentially help reduce emissions. In 2022, Google subsidiary DeepMind created an AI system called BCOOLER that reduced energy consumption by 12.7 percent in a three-month experiment in the company’s data centers by optimizing cooling procedures. (It’s not clear if Google ever adopted this system more widely.)
Chinese people are more hopeful about AI than Americans. An Ipsos survey in 2022 found that 78 percent of Chinese respondents agreed with the statement that “products and services using AI have more benefits than drawbacks” — the highest share of any country surveyed — followed by respondents in Saudi Arabia (76 percent) and India (71 percent). US respondents were among the least enthusiastic, with only 35 percent agreeing with the statement.
