Scholar Gary Marcus has become a kind of poet laureate of the shortcomings of artificial intelligence (AI), chronicling in numerous books and articles how the technology often comes up short and is less impressive than the average person believes.
When Marcus wrote about the technical shortcomings of AI five years ago in Rebooting AI, co-authored with Ernest Davis, his critique was still somewhat theoretical, pointing to the potential dangers of the technology.
In the years since, ChatGPT has taken the world by storm, giving Marcus a real-world canvas on which to reflect on the actual, present dangers of AI.
His latest work, Taming Silicon Valley, published last month by MIT Press, catalogs those real-world effects. The book should be required reading for anyone whose life will be touched, even in some small way, by AI, which is pretty much everyone today.
As always, Marcus writes in a strong voice that moves through the material in a breezy, commanding style, backed by a solid grasp of the science and technology. The payoff is often blunt, with Marcus predicting worse is to come from each anecdotal example he cites.
As he’s knee-deep in the Twitterverse (Xverse), Marcus makes the most of other voices, quoting scholars and concerned individuals who have themselves written about the dangers at length.
The first 50 pages of the book are a story of mishaps with AI, especially generative AIs, and the risks and harms they may — and, in some cases, already do — bring. This is familiar territory for anyone who follows Marcus’s Substack or his X feed, but it’s useful to have the context in one place.
The danger has nothing to do with super-intelligent AI, or “artificial general intelligence”. Marcus has long been a skeptic of present-day machine learning and deep learning achieving super-human capabilities. He pokes fun at notions that such an intelligence will annihilate humanity in one fell swoop.
“In the field, there is a lot of talk of p(doom), a mathematical notation, tongue-slightly-in-cheek, for the probability of machines annihilating all people,” writes Marcus in Taming Silicon Valley. “Personally, I doubt that we will see AI cause literal extinction.”
Instead, Marcus is concerned with the quotidian dangers of what not-so-smart machines are already doing to society via ChatGPT and other similar programs.
Marcus takes us on a tour of the 12 worst present dangers of AI. These dangers include large language models (LLMs) functioning as machines for mass-producing content, such as the problem of “scammy AI-generated book rewrites [that] are flooding Amazon,” according to a Wired magazine article he cites.
More serious are deepfaked voices impersonating supposedly kidnapped loved ones, a scam that has already played out multiple times. “We can expect it to happen a lot more,” he predicts of the extortionist scams.
Marcus spends many pages discussing intellectual property theft in the form of copyrighted works appropriated, without consent, by OpenAI and others for training LLMs. As is often the case, Marcus gives the matter greater weight than one might initially expect.
“The whole thing has been called the Great Data Heist – a land grab for intellectual property that will (unless stopped by government intervention or citizen action) lead to a huge transfer of wealth – from almost all of us – to a tiny number of companies,” he writes.
The middle third of the book moves beyond discussing harms to a critique of Silicon Valley’s predatory practices and how giant tech firms hoodwink the public and lawmakers by weaving mythology around their inventions to make them seem simultaneously important and utterly benign.
The broken promises of OpenAI receive a deserved skewering from Marcus. They include the company no longer being a non-profit, and no longer being “open” in any meaningful sense, instead hiding its code from public scrutiny.
But the problem is more than a single bad actor: Silicon Valley is rife with misdirection and disingenuousness.
Among several rhetorical tricks, Marcus highlights the claim by companies such as Microsoft, OpenAI, and Alphabet’s Google that they’re too busy contemplating Doomsday scenarios to bother with the kinds of present dangers he outlines.
“Big Tech wants to distract us from all that, by saying — without any real accountability — that they are working on keeping future AI safe (hint: they don’t really have a solution to that, either), even as they do far too little about present risk,” he writes.
“Too cynical? Dozens of tech leaders signed a letter in May 2023 warning that AI could pose a risk of extinction, yet not one of those leaders appears to have slowed down one bit.”
The result of this approach by big tech has been the co-opting of government, known by policy types as “regulatory capture”.
“A tiny number of people and companies are having an enormous, largely unseen influence,” writes Marcus. “It is not a coincidence that in the end, for all the talk we have seen of governing AI in the United States, mostly we have voluntary guidelines, and almost nothing with real teeth.”
Given the demonstrable dangers and the self-serving maneuvers of Big Tech, what can be done?
In the final third, Marcus reflects on how to tackle the present dangers and the tech culture of misrepresentation and selfishness. He is an outsider to the policy world, even though he has testified before the US Congress as an expert.
As such, he doesn’t provide a blueprint for what should happen, but he does an intelligent job of offering suggestions that make obvious good sense — for the most part.
For example, copyright law should be updated for the age of LLMs.
“The point now should be to update those laws,” he writes, “to prevent people from being ripped off in a new way, namely, by the chronic (near) regurgitators known as large language models. We need to update our laws.”
There also need to be new statutes to protect privacy in the face of the “surveillance capitalism” that uses sensors to suck up everyone’s data.
“As of this writing, no federal law guarantees that Amazon’s Echo device won’t snoop in your bedroom, nor that your car manufacturer won’t sell your location data to anyone who asks,” observes Marcus.
The biggest gaping hole in the regulatory world is Section 230 of the Communications Decency Act of 1996, passed by the US Congress. Section 230 frees online services, including Meta’s Facebook and X, from any responsibility for the content they host, including brutal, disparaging, or violent content. It also frees the users of those services to bully their fellow users while hiding behind the excuse that it’s only one person’s opinion.
“Newspapers can be sued for lies; why should social media be exempt?” Marcus rightly observes. “Section 230 needs to be repealed (or rewritten), assigning responsibility for anything that circulates widely.” Amen to that.
Marcus also explores broad preventive measures as first principles, including transparency: not only demanding open-source code but, in many cases, forcing companies to disclose how LLMs and the like are being used.
“Have large language models been used, for example, to make job decisions, and done so in a biased way? We just don’t know” because of the lack of transparency, traceability, and accountability, observes Marcus.
Amidst all the possible ways Marcus explores for reckoning with risk, his book makes its strongest case in arguing for a regulatory agency for AI — both domestic and international — to handle the complexity of the task.
Making laws is too slow a process, ultimately, to address present harms, writes Marcus. “Litigation can take a decade or more,” as he observed in testimony before the US Senate in 2023. An agency can also be “more nimble” than lawmakers, he adds.
Although there is little appetite for an AI agency, he writes, “the alternative is worse: that without a new agency for AI (or perhaps more broadly for digital technology in general), the United States will forever be playing catchup, trying to manage AI and the digital world with infrastructure that long predates the modern world.”
In arguing for an AI agency, Marcus faces two challenges. One is demonstrating substantial harm. It’s one thing to show risks, as Marcus does in the first 50 pages; however, all those risks have to add up to sufficient harm to rally public opinion and create urgency among lawmakers.
The modern history of regulation shows that rules and agencies have been created only after great harm has been demonstrated. The Securities Act of 1933, which imposed strict requirements on public companies to protect the investing public, followed the stock market crash of 1929, which erased whole fortunes and wrecked the global economy. It was a moment of such unprecedented disaster that it galvanized regulatory efforts.
The International Atomic Energy Agency was created to regulate nuclear fission only after the atomic bombs that killed roughly 200,000 people in Hiroshima and Nagasaki, Japan. On a smaller scale, the US Food and Drug Administration emerged in the early twentieth century after journalists and progressive activists shed light on the massive harms of tainted food and drugs.
In other words, governments have rarely acted to promote regulation in advance of the demonstration of substantial harm.
Do Marcus’s first 50 pages make a convincing case? It’s not clear. Given how many individuals use ChatGPT and the like, any notion of harm has to compete with the mass appeal of the tools. Every harm identified by Marcus might be excused away by the enthusiastic user of ChatGPT, Perplexity, or Google Gemini as merely the price to be paid for a powerful new tool.
I emailed Marcus to ask, “Is there enough harm demonstrated in the first 50 pages of the book to justify the measures proposed in the last 60 pages of the book?”
In reply, Marcus noted “someone did actually commit suicide in what seems to be LLM-related circumstances,” referring to an incident in April 2023 in which a man in his thirties took his own life after six weeks of interacting with a chatbot.
Marcus noted that beyond an actual incident, “we already are facing many negative consequences of generative AI, ranging from misinformation to covert racism to nonconsensual deepfake porn to harm to the environment; these are all growing fast.”
Marcus’s reply returns to the question of how much harm is too much before society says enough and establishes controls.
In fact, the best reason for an AI agency may be that there is unlikely to be societal agreement about harm. For some, such as Marcus, any harm is too great, while others want first and foremost to nurture the incipient technology without imposing too many restrictions.
An AI agency could reconcile that deep divide by rigorously logging and following up on reported harms, and exploring theoretical harms, to move beyond conjecture to a comprehensive sense of the dangers posed to society.
The second big challenge to Marcus’s call for an AI agency is how to define what it should oversee. Most agencies have a mandate based on at least rough outlines of what is in their purview.
The US Securities and Exchange Commission, for example, regulates transactions in “securities”, which may include equities, such as the common stock of a publicly listed company, or debt instruments, such as bonds. The SEC does not regulate other tradable instruments, such as futures contracts, which are left to another agency, the Commodity Futures Trading Commission.
The problem with the term “artificial intelligence” is that it has never been a rigorous term. Marcus and other scholars use it as shorthand but, in reality, AI has no precise meaning. It is a grab-bag, a catch-all for any kind of computer science work that someone feels like designating as AI.
The late MIT AI scholar Marvin Minsky coined the term “suitcase words” for expressions into which people pack whatever meanings they want. AI is a suitcase word. It can include LLMs but, with the increasing marketing of any software as AI, all sorts of code could be labeled “AI” whether or not it has anything in common with LLMs.
That situation presents a problem for regulators: where to draw the line for their regulatory authority.
Should regulators cover only those systems that use a particular technical component tied to machine-learning forms of AI, such as stochastic gradient descent, which is used to train most LLMs?
Or should they regulate all things that have a certain observed effect, such as appropriating copyrighted material or producing output in natural language? Or should they broaden their bailiwick to anything and everything that claims to be AI?
I asked Marcus in an email, “How will regulators know what is in their bailiwick?”
Marcus’s view is that experts will sort out the definitional questions: “Obviously we need experts, same as we do e.g. for FDA or regulating airplanes.”
Marcus continued: “Somehow we manage to navigate these definitional questions relatively well for ‘food,’ ‘drug,’ etc.” True, except those things are real things, for which definitions can, ultimately, be found; AI is not.
Marcus urged me not to “overthink” the matter: “Each regulation is going to have a scope relevant to the regulation, e.g. the bill [California Governor Gavin] Newsom just signed on training and transparency is solely about models that are trained,” noted Marcus. “A regulation that is about employment discrimination should be about any algorithm, trained or hard-coded, that is used to make employment decisions, etc.”
That approach is true of individual measures, but it doesn’t settle the question of a mandate for an AI agency. Any agency that comes into being will need a mandate, just as the FDA has one. That’s going to prove tricky as long as the term AI remains murky and filled with hype.
That situation, however, shouldn’t deter society from the project of an AI agency. Marcus’s main argument is still the strongest one: law-making simply can’t keep pace with the proliferation of AI, however one defines the technology. An oversight body, staffed with experts in the field, armed with regulatory powers, needs to be vigilant on a daily basis.
With the imminent change of power in Washington, D.C. as Donald J. Trump becomes the US President in January, I asked Marcus how he views the odds of such an agency taking shape.
He is not optimistic:
America absolutely needs an AI agency – both to encourage and leverage innovation, but also to mitigate the risks of AI, such as misinformation, discrimination, cybercrime, damage to environment, and the widespread theft of intellectual property. I fear that the Trump administration will ignore many of these risks, letting them escalate, and leaving society to bear them. I very much hope that I am wrong.