A serial artificial intelligence investor is raising alarm bells about the dogged pursuit of increasingly smart machines, which he believes may soon advance into something approaching godhood.
In an op-ed for the Financial Times, AI mega-investor Ian Hogarth recounted a recent conversation in which a machine learning researcher he knows told him that “from now onwards,” we are on the brink of developing artificial general intelligence (AGI), an admission that came as something of a shock.
“This is not a universal view,” Hogarth wrote, noting that “estimates range from a decade to half a century or more” before AGI comes to fruition.
All the same, there is a tension between AI companies’ explicitly AGI-seeking goals and the fears of machine learning experts, not to mention the public, who understand what the concept entails.
“‘If you think we could be close to something potentially so dangerous,’ I said to the researcher, ‘shouldn’t you warn people about what’s happening?’” the investor recounted. “He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress.”
Hogarth said that after this encounter, like many other parents might, he found his mind drifting to his four-year-old son.
“As I considered the world he might grow up in, I gradually shifted from shock to anger,” he wrote. “It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight.”
Wondering whether “the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say,” the investor conceded that although it feels like a “them” versus “us” situation, he, too, is “part of this community,” having invested in more than 50 AI startups.
“A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI,” Hogarth declared. “A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it.”
“To be clear, we are not here yet,” Hogarth continued. “But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.”
The investor has spent his career funding and curating AI research, even going so far as to start his own venture capital firm and launch an annual “State of AI” report. But something appears to have changed: now, he writes, “the contest between a few companies to create God-like AI has rapidly accelerated.”
“They do not yet know how to pursue their aim safely and have no oversight,” Hogarth mused. “They are running towards a finish line without an understanding of what lies on the other side.”
While he plans to invest in startups that will pursue AI more responsibly, the AI mega-funder said that he hasn’t gotten much traction with his counterparts.
“Unfortunately, I think the race will continue,” Hogarth wrote. “It will likely take a major misuse event — a catastrophe — to wake up the public and governments.”
More on apocalyptic AI: A Third of Researchers Think that AI Could Cause a Nuclear-Level Catastrophe