What is artificial general intelligence?

An Artificial General Intelligence (AGI) would be a machine capable of understanding the world as well as any human, and with the same capacity to learn how to carry out a huge range of tasks.

AGI doesn’t exist yet, but it has featured in science-fiction stories for more than a century, and has been popularized in modern times by films such as 2001: A Space Odyssey.

Fictional depictions of AGI vary widely, although they tend more towards the dystopian vision of intelligent machines eradicating or enslaving humanity, as seen in films like The Matrix or The Terminator. In such stories, AGI is often cast as either indifferent to human suffering or even bent on mankind’s destruction.

In contrast, utopian imaginings, such as Iain M Banks’ novels about the Culture civilization, cast AGIs as benevolent custodians, running egalitarian societies free of suffering, where inhabitants can pursue their passions and technology advances at a breathless pace.

Whether these ideas would bear any resemblance to real-world AGI is unknowable since nothing of the sort has been created, or, according to many working in the field of AI, is even close to being created.

What could an artificial general intelligence do?

In theory, an artificial general intelligence could carry out any task a human could, and likely many that a human couldn’t. At the very least, an AGI would be able to combine human-like, flexible thinking and reasoning with computational advantages, such as near-instant recall and split-second number crunching.

Using this intelligence to control robots at least as dextrous and mobile as a person would result in a new breed of machines that could perform any human task. Over time these intelligences would be able to take over every role performed by humans. Initially, humans might be cheaper than machines, or humans working alongside AI might be more effective than AI on their own. But the advent of AGI would likely render human labor obsolete.

Effectively ending the need for human labor would have huge social ramifications, affecting both people’s ability to feed themselves and the sense of purpose and self-worth that employment can bring.

Even today, the debate over the eventual impact on jobs of the very different, narrow AI systems that currently exist has led some to call for the introduction of a Universal Basic Income (UBI).

Under UBI everyone in society would receive a regular payment from the government with no strings attached. The approach is divisive, with some advocates arguing it would provide a universal safety net and reduce bureaucratic costs. However, some anti-poverty campaigners have produced economic models showing such a scheme could worsen deprivation among vulnerable groups if it replaced existing social security systems in Europe.

Beyond the impact on social cohesion, the effects of artificial general intelligence could be profound. The ability to employ an army of intelligences equal to the best and brightest humans could help develop new technologies and approaches for mitigating intractable problems such as climate change. On a more mundane level, such systems could perform everyday tasks, from surgery and medical diagnosis to driving cars, at a consistently higher level than humans — which in aggregate could be a huge positive in terms of time, money and lives saved.

The downside is that this combined intelligence could also have a profoundly negative effect: empowering surveillance and control of populations, entrenching power in the hands of a small group of organizations, underpinning fearsome weapons, and removing the need for governments to look after the obsolete populace.

Could an artificial general intelligence outsmart humans?

Yes. Not only would such an intelligence have the same general capabilities as a human being, it would also be augmented by the advantages that computers have over humans today: perfect recall and the ability to perform calculations near instantaneously.

When will an artificial general intelligence be invented?

It depends who you ask, with answers ranging from within 11 years to never.

Part of the reason it’s so hard to pin down is the lack of a clear path to AGI. Today machine-learning systems underpin online services, allowing computers to recognize language, understand speech, spot faces, and describe photos and videos. These recent breakthroughs, and high-profile successes such as AlphaGo’s domination of the notoriously complex game of Go, can give the impression society is on the fast track to developing AGI. Yet the systems in use today are generally rather one-note, excelling at a single task after extensive training, but useless for anything else. Their nature is very different to that of a general intelligence that can perform any task asked of it, and as such these narrow AIs aren’t necessarily stepping stones to developing an AGI.

The limited abilities of today’s narrow AI were highlighted in a recent report co-authored by Yoav Shoham of the Stanford Artificial Intelligence Laboratory.

“While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly,” it states.

“For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks.”


Michael Wooldridge, head of the computer science department at the University of Oxford, picked up on this point in the report, stressing that “neither I nor anyone else would know how to measure progress” towards AGI.

Despite this uncertainty, there are some highly vocal advocates of near-future AGI. Perhaps the most famous is Ray Kurzweil, Google’s director of engineering, who predicts an AGI capable of passing the Turing Test will exist by 2029 and that by the 2040s affordable computers will perform the same number of calculations per second as the combined brains of the entire human race.

Kurzweil’s supporters point to his successful track record in forecasting technological advancement, with Kurzweil estimating that by the end of 2009 just under 80% of the predictions he made in the 1990s had come true.

Kurzweil’s confidence in the rate of progress stems from what he calls the law of accelerating returns. In 2001 he said the exponential nature of technological change, where each advance accelerates the rate of future breakthroughs, means the human race will experience the equivalent of 20,000 years of technological progress in the 21st century. These rapid changes in areas such as computer processing power and brain-mapping technologies are what underpins Kurzweil’s confidence in the near-future development of the hardware and software needed to support an AGI.

What is superintelligence?

Kurzweil believes that once an AGI exists it will improve upon itself at an exponential rate, rapidly evolving to the point where its intelligence operates at a level beyond human comprehension. He refers to this point as the singularity, and says it will occur in 2045, at which stage an AI will exist that is “one billion times more powerful than all human intelligence today”.

The idea of a near-future superintelligence has prompted some of the world’s most prominent scientists and technologists to warn of the dire risks posed by AGI. SpaceX and Tesla founder Elon Musk calls AGI the “biggest existential threat” facing humanity and the famous physicist and Cambridge University Professor Stephen Hawking told the BBC “the development of full artificial intelligence could spell the end of the human race”.

Both were signatories to an open letter calling on the AI community to engage in “research on how to make AI systems robust and beneficial”.

Nick Bostrom, philosopher and director of Oxford University’s Future of Humanity Institute, has cautioned about what might happen when superintelligence is reached.

Describing superintelligence as a bomb waiting to be detonated by irresponsible research, he believes superintelligent agents may pose a threat to humans, who are likely to stand “in its way”.

“If the robot becomes sufficiently powerful,” said Bostrom, “it might seize control to gain rewards.”

Is it even sensible to talk about AGI?

The problem with discussing the effects of AGI and superintelligences is that most working in the field of AI stress that AGI is currently fiction, and may remain so for a very long time.

Chris Bishop, laboratory director at Microsoft Research Cambridge, has said discussions about artificial general intelligences rising up are “utter nonsense”, adding “at best, such discussions are decades away”.

Other AI experts argue such discussions are worse than pointless scaremongering: they divert attention from the near-future risks posed by today’s narrow AI.

Andrew Ng is a well-known figure in the field of deep learning, having previously worked on the “Google Brain” project and served as chief scientist for Chinese search giant Baidu. He recently called on those debating AI and ethics to “cut out the AGI nonsense” and spend more time focusing on how today’s technology is exacerbating or will exacerbate problems such as “job loss/stagnant wages, undermining democracy, discrimination/bias, wealth inequality”.

Even highlighting the potential upside of AGI could damage public perceptions of AI, fuelling disappointment in the comparatively limited abilities of existing machine-learning systems and their narrow, one-note skillset — be that translating text or recognizing faces.

How would you create an artificial general intelligence?

Demis Hassabis, the co-founder of Google DeepMind, argues that the secrets to general artificial intelligence lie in nature.

Hassabis and his colleagues believe it is important for AI researchers to engage in “scrutinizing the inner workings of the human brain — the only existing proof that such an intelligence is even possible”.

“Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a window into various important aspects of higher-level general intelligence,” they wrote in a paper last year.

They argue that doing so will help inspire new approaches to machine learning and new architectures for neural networks, the mathematical models that make machine learning possible.

Hassabis and his colleagues say “key ingredients of human intelligence” are missing from most AI systems, including the way infants build mental models of the world that guide predictions about what might happen next and allow them to plan. Also absent from current AI models is the human ability to learn from only a handful of examples and to generalize knowledge learned in one instance to many similar situations, in the way a new driver can handle cars other than the one they learned in.

“New tools for brain imaging and genetic bioengineering have begun to offer a detailed characterization of the computations occurring in neural circuits, promising a revolution in our understanding of mammalian brain function,” according to the paper, which says neuroscience should serve as a “roadmap for the AI research agenda”.

Another perspective comes from Yann LeCun, Facebook’s chief AI scientist, who played a pioneering role in machine-learning research due to his work on convolutional neural networks.

He believes the path towards general AI lies in developing systems that can build models of the world they can use to predict future outcomes. A good route to achieving this, he said in a talk last year, could be using generative adversarial networks (GANs).

In a GAN, two neural networks do battle: the generator network tries to create convincing “fake” data, while the discriminator network attempts to tell the difference between fake and real data. With each training cycle, the generator gets better at producing fake data and the discriminator gains a sharper eye for spotting those fakes. By pitting the two networks against each other during training, both can achieve better performance. GANs have been used to carry out some remarkable tasks, such as transforming dashcam footage from day to night, or from winter to summer.
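To make the adversarial setup concrete, here is a minimal, hypothetical sketch in PyTorch (the article names no framework; the tiny networks and the 1-D Gaussian standing in for “real” data are illustrative assumptions, not anyone’s actual implementation):

```python
# Minimal GAN sketch: the generator learns to mimic a 1-D Gaussian,
# while the discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a fake "data" point.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0   # "real" data: Gaussian with mean 4
    fake = G(torch.randn(64, 8))      # generated data from random noise

    # Train the discriminator to label real samples 1 and fakes 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to make the discriminator label its fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# If training worked, generated samples cluster around 4, the mean of the real data.
with torch.no_grad():
    print(G(torch.randn(1000, 8)).mean().item())
```

The generator never sees the real data directly; it improves purely through the discriminator’s feedback, which is what makes the adversarial arrangement interesting.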

Would an artificial general intelligence have consciousness?

Given the many definitions of consciousness, this is a very tricky question to answer.

A famous thought experiment by philosopher John Searle demonstrates how difficult it would be to determine whether an AGI was truly self-aware.

Searle’s Chinese Room describes a hypothetical scenario in which the philosopher is presented with written queries in Chinese, a language he does not understand. Searle sits alone in a closed room, and the individual characters making up each query are slid under the door in order. Despite not understanding the language, Searle is able to follow the instructions given by a book in the room for manipulating the symbols fed to him. These instructions allow him to compose his own series of Chinese characters, which he feeds back under the door. By following the instructions, Searle is able to produce an appropriate response and fool the person outside the room into thinking there is a native speaker inside, despite Searle not understanding Chinese. In this way, Searle argued, the experiment demonstrates that a computer could converse with people and appear to understand a language, while having no actual comprehension of its meaning.

The experiment has been used to attack the Turing Test. Devised by the brilliant mathematician and father of computing Alan Turing, the test suggests a computer could be classed as a thinking machine if it could fool a third of the people it was conversing with into believing it was human.


In a more recent book, Consciousness and Language, Searle says this uncertainty over the true nature of an intelligent computer extends to consciousness: “Just as behavior by itself is not sufficient for consciousness, so computational models of consciousness by themselves are not sufficient for consciousness,” going on to give an example: “Nobody supposes the computational model of rainstorms in London will leave us wet.”

Searle draws a distinction between strong AI, where the AI can be said to have a mind, and weak AI, where the AI is instead a convincing model of a mind.

Various counterpoints have been raised to the Chinese Room and Searle’s conclusions, ranging from arguments that the experiment mischaracterizes the nature of a mind, to claims that it ignores the fact that Searle is part of a wider system which, taken as a whole, does understand the Chinese language.

There is also the question of whether the distinction between a simulation of a mind and an actual mind matters, with Stuart Russell and Peter Norvig, who wrote the definitive textbook on artificial intelligence, arguing most AI researchers are more focused on the outcome than the intrinsic nature of the system.

Can morality be engineered in artificial general intelligence systems?

Maybe, but there are no good examples of how this might be achieved.

Russell paints a clear picture of how things could go awry when an AI is indifferent to human values.

“Imagine you have a domestic robot. It’s at home looking after the kids and the kids have had their dinner and are still hungry. It looks in the fridge and there’s not much left to eat. The robot is wondering what to do, then it sees the kitty, you can imagine what might happen next,” he said.

“It’s a misunderstanding of human values, it’s not understanding that the sentimental value of a cat is much greater than the nutritional value.”

Vyacheslav W. Polonski of the Oxford Internet Institute argues that before an AGI could be gifted morality, people would first have to codify exactly what morality is.

“A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is,” he writes, going on to question how a machine could be taught to “algorithmically maximise fairness” or to “overcome racial and gender biases in its training data”.

Polonski’s suggested solution to these problems is to define ethical behavior explicitly. He cites the recommendation of Germany’s Ethics Commission on Automated and Connected Driving that designers of self-driving cars program their systems with ethical values that prioritize the protection of human life above all else. Another possible answer he highlights is training a machine-learning system on what constitutes moral behavior, drawing on many different human examples. One such repository of this data is MIT’s Moral Machine project, which asks participants to judge the ‘best’ response in difficult hypothetical situations, such as whether it is better to kill five people in a car or five pedestrians.
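As a deliberately oversimplified sketch of that learn-from-examples idea (the features and labels below are invented for illustration and are not drawn from the real Moral Machine data), a standard classifier could be fitted to crowd-sourced judgments and then queried on a new dilemma:

```python
# Hypothetical toy: learn a "moral" choice from labelled example dilemmas.
# Label 0 = protect the car's passengers, 1 = protect the pedestrians.
from sklearn.linear_model import LogisticRegression

# Features per scenario: [passengers, pedestrians, pedestrians_crossing_legally]
scenarios = [
    [5, 1, 1],
    [1, 5, 1],
    [2, 2, 0],
    [1, 3, 1],
    [4, 1, 0],
    [2, 5, 1],
]
# Invented majority judgments standing in for human survey responses.
labels = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(scenarios, labels)

# Query the model on a new dilemma: 3 passengers vs 4 pedestrians crossing legally.
print(model.predict([[3, 4, 1]]))        # the learned "moral" choice
print(model.predict_proba([[3, 4, 1]]))  # and how confident the model is
```

Whatever biases and blind spots exist in the collected judgments are, of course, inherited wholesale by the model.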

Of course, such approaches are fraught with potential for misinterpretation and unintended consequences.

Hard-coding morality into machines seems too immense a challenge, given the impossibility of predicting every situation a machine could find itself in. If a collision is unavoidable, should a self-driving car knock down someone in their sixties or a child? What if that child had a terminal illness? What if the person in their sixties were the sole carer of their partner?

Having a machine learn what is moral behavior from human examples may be the better solution, albeit one that risks encoding in the machine the same biases that exist in the wider population.

Russell suggests intelligent systems and robots could accrue an understanding of human values over time, through their shared observation of human behavior, both today and as recorded throughout history. One method he proposes for robots to gain such an appreciation of human values is inverse reinforcement learning, a machine-learning technique in which a system infers the reward function (the goals and preferences) that best explains the behavior it observes, rather than being handed that reward function by its designers.
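As a rough illustration of that idea (not Russell’s actual method), the toy sketch below uses invented features and weights: an “expert” repeatedly chooses among actions, and the learner recovers a linear reward function that best explains those choices under a softmax model. That is the core move of inverse reinforcement learning: inferring values from observed behavior rather than writing them down by hand.

```python
# Toy inverse reinforcement learning: recover reward weights from demonstrations.
import numpy as np

rng = np.random.default_rng(0)

# Each of 5 possible actions is described by 3 features (e.g. speed, safety, comfort).
n_actions, n_features = 5, 3
features = rng.normal(size=(n_actions, n_features))

# Hidden "human values" the expert actually uses (unknown to the learner).
true_w = np.array([0.5, 1.5, -0.5])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Expert demonstrations: noisy (softmax) choices under the true reward.
demos = [rng.choice(n_actions, p=softmax(features @ true_w)) for _ in range(500)]

# Fit reward weights w by gradient ascent on the log-likelihood of the demos.
w = np.zeros(n_features)
for _ in range(1000):
    p = softmax(features @ w)
    # Gradient: observed feature totals minus expected feature totals under w.
    grad = sum(features[a] for a in demos) - len(demos) * (p @ features)
    w += 0.1 * grad / len(demos)

print("recovered reward weights:", np.round(w, 2))
print("true reward weights:     ", true_w)
```

With enough demonstrations the recovered weights approximate the hidden ones, which is the sense in which a system could “accrue” an understanding of what people value by watching what they do.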

How do we stop a general AI from breaking its constraints?

As part of its mission to tackle existential risks, the US-based Future of Life Institute (FLI) has funded various research projects on AGI safety, in anticipation of AI capable of causing harm being created in the near future.

“To justify a modest investment in this AI robustness research, this probability need not be high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down,” it said upon launching its research program, pointing out that in the 1930s one of the greatest physicists of the time, Ernest Rutherford, said nuclear energy was “moonshine”, just five years before the discovery of nuclear fission.

Before an AGI’s behavior can be constrained, the FLI argues it’s necessary to pinpoint precisely what it should and shouldn’t do.

“In order to build systems that robustly behave well, we of course need to decide what ‘good behavior’ means in each application domain. Designing simplified rules — for example, to govern a self-driving car’s decisions in critical situations — will likely require expertise from both ethicists and computer scientists,” it says in a research priorities report compiled by Stuart Russell and other academics.

Ensuring proper behavior becomes problematic with strong, general AI, the paper says, adding that societies are likely to encounter significant challenges in aligning the values of powerful AI systems with their own values and preferences.

“Consider, for instance, the difficulty of creating a utility function that encompasses an entire body of law; even a literal rendition of the law is far beyond our current capabilities, and would be highly unsatisfactory in practice,” it states.


Deviant behavior in AGIs will also need addressing, the FLI says. Just as an airplane’s onboard software undergoes rigorous checks for bugs that might trigger unexpected behavior, so the code that underlies AIs should be subject to similar formal verification.

For traditional software there are projects such as seL4, which has developed a complete, general-purpose operating-system kernel that has been mathematically checked against a formal specification to give a strong guarantee against crashes and unsafe operations.

However, in the case of AI, new approaches to verification may be needed, according to the FLI.

“Perhaps the most salient difference between verification of traditional software and verification of AI systems is that the correctness of traditional software is defined with respect to a fixed and known machine model, whereas AI systems — especially robots and other embodied systems — operate in environments that are at best partially known by the system designer.

“In these cases, it may be practical to verify that the system acts correctly given the knowledge that it has, avoiding the problem of modelling the real environment,” the research states.

The FLI suggests it should be possible to build AI systems from components, each of which has been verified.

Where the risks of a misbehaving AGI are particularly high, it suggests such systems could be isolated from the wider world.

“Very general and capable systems will pose distinctive security problems. In particular, if the problems of validity and control are not solved, it may be useful to create ‘containers’ for AI systems that could have undesirable behaviors and consequences in less-controlled environments,” it states.

The difficulty is that ensuring humans can keep control of a general AI is not straightforward.

For example, a system is likely to do its best to route around problems that prevent it from completing its desired task.

“This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes,” the research points out.

The FLI recommends more research into corrigible systems, which do not exhibit this behavior.

“It may be possible to design utility functions or decision processes so that a system will not try to avoid being shut down or repurposed,” according to the research.

Another potential problem could stem from an AI negatively impacting its environment in the pursuit of its goals — leading the FLI to suggest more research into the setting of “domestic” goals that are limited in scope.

In addition, it recommends more work be carried out into the likelihood and nature of an “intelligence explosion” among AI — where the capabilities of self-improving AI advance far beyond humans’ ability to control them.

The IEEE has its own recommendations for building safe AGI systems, which broadly echo those of the FLI research. These include that AGI systems should be transparent, with their reasoning understood by human operators; that “safe and secure” environments should be developed in which AI systems can be built and tested; that systems should be designed to fail gracefully in the event of tampering or crashes; and that such systems shouldn’t resist being shut down by their operators.

Today the question of how to develop AI in a manner beneficial to society as a whole is the subject of ongoing research by the non-profit organization OpenAI.

The FLI research speculates that given the right checks and balances a general AI could transform societies for the better: “Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls.”
