What is general intelligence in the world of AI and computers? The race for the artificial mind explained
Corvids are a family of birds known for being astonishingly accomplished at self-awareness and tool-based problem-solving. Such traits are generally considered extremely rare in the animal kingdom: only we humans and a handful of other species can do all of this. However, you'd never think for one moment that any corvid is a human. We recognise that they are smart, but not truly intelligent, or certainly not to the extent that we are.
And it's the same when it comes to artificial intelligence, the biggest topic in the world of computing and tech right now. While we've seen incredibly rapid progress in certain areas, such as generative AI video, nothing produced by the likes of ChatGPT, Stable Diffusion, or Copilot gives us the impression that it's true, human-like intelligence. Typically classed as weak or narrow AI, such systems aren't self-aware, nor are they really problem-solving; they're essentially enormous probability calculators, heavily reliant on the datasets used to train them.
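To make the "probability calculator" idea concrete, here's a minimal sketch in Python: a toy bigram model that predicts each next word purely from how often words followed one another in its training data. The training text and all names in the code are invented for illustration, and this is nothing like the scale or architecture of a modern large language model, but the underlying principle is similar: the output is sampled from learned statistics, not reasoned out.

```python
import random
from collections import defaultdict

# Toy training corpus (invented for illustration).
training_text = (
    "the crow picked up the stick and the crow used the stick "
    "to reach the food and the crow ate the food"
)

# Count how often each word follows another in the training text.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def generate(start, length=8):
    """Build a sentence by repeatedly sampling a likely next word."""
    output = [start]
    for _ in range(length):
        followers = counts[output[-1]]
        if not followers:
            break  # no known continuation for this word
        # Sample the next word in proportion to how often it
        # followed the current word during "training".
        next_word = random.choices(
            list(followers), weights=list(followers.values())
        )[0]
        output.append(next_word)
    return " ".join(output)

print(generate("the"))  # e.g. "the crow used the stick to reach the food"
```

Run it a few times and it produces different but plausible-looking strings, because the model is just rolling weighted dice over its counts; swap in a different training text and the "style" of the output changes with it. That dependence on the training data, with no understanding behind it, is exactly what separates narrow AI from the goal described below.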
Pinning down exactly what is meant by the phrase "human intelligence" is something the scientific community has battled over for centuries, but in general, we can say it's the ability to recognise information or infer it from various sources, and then use it to plan, create, or problem-solve through logical reasoning or abstract thinking. We humans do all of this extremely well, and we can apply it in situations we've had no experience or prior knowledge of.
Getting a computer to exhibit the same capabilities is the ultimate goal of researchers in the field of artificial general intelligence (AGI): Creating a system that can conduct cognitive tasks just as well as any human, and, hopefully, even better.
What is artificial general intelligence?
This is a computer system that can plan, organise, create, reason, and problem-solve just like a human can.
The scale of such a challenge is rather hard to comprehend because an AGI needs to be able to do more than simply crunch through numbers. Human intelligence relies on language, culture, emotions, and physical senses to understand problems, break them down, and produce solutions. The human mind is also fragile and manipulable and can make all kinds of mistakes when under stress.
Sometimes, though, such situations generate remarkable achievements. How many of us have pulled off great feats of intelligence during examinations, despite them being potentially stressful experiences? You may be thinking at this point that all of this is impossible to achieve, and that surely nobody can program a system to apply an understanding of culture, utilise sight or sound, or recall a traumatic event to solve a problem.
It's a challenge that's being taken up by businesses and academic institutions around the world, with OpenAI, Google DeepMind, the Blue Brain Project, and the recently completed Human Brain Project being the most famous examples of work conducted in the field of AGI. And, of course, there's all the research being carried out in the technologies that will either support or ultimately form part of an AGI system: Deep learning, generative AI, natural language processing, computer vision and sound, and even robotics.
As to the potential benefits AGI could offer, those are rather obvious. Medicine and education could both be improved, increasing the speed and accuracy of diagnoses and determining the best learning package for a given student. An AGI could make decisions in complex, multi-faceted situations, as found in economics and politics, that are rational and beneficial to all. It seems a little facile to shoehorn games into such a topic, but imagine a future where you're battling against AGI systems that react and play just like a real person, with all of the positives (camaraderie, laughter, sportsmanship) and none of the negatives.
Not everyone is convinced that AGI is even possible. Philosopher John Searle argued in his 1980 paper "Minds, Brains, and Programs" that artificial intelligence can take two forms, Strong AI and Weak AI, where the difference between them is that the former could be said to be conscious, whereas the latter only seems to be. To the end user, there would be no visible difference, but the underlying system certainly isn't the same.
The way that AGI research is currently progressing puts it somewhere between the two, though closer to weak than strong. Although this may seem like mere semantics, one could take the stance that if a computer only appears to have human-like intelligence, it can't be considered truly intelligent, as it ultimately lacks what we consider to be a mind.
AI critic Hubert Dreyfus argued that computers can only process information that is stored symbolically, and that human unconscious knowledge (the things we know but never directly think about) can't be stored symbolically, so a true AGI can never exist.
A fully-fledged AGI is not without risks, either. At the very least, its widespread application in specific sectors would result in significant unemployment. We have already seen both large and small businesses replace human customer support roles with generative AI systems. Computers that can do the same tasks as a human mind could potentially replace managers, politicians, triage nurses, teachers, designers, musicians, authors, and so on.
Perhaps the biggest concern over AGI is how safe it would be. Current research in the field is split on the topic of safety, with some projects openly dismissive of it. One could argue that a truly artificial human mind, one that's highly intelligent, may see many of the problems humanity faces as trivial compared to answering questions about existence and the universe itself.
Building an AGI for the benefit of humanity isn't the goal of every project at the moment.
Despite the incredible advances in deep learning and generative AI in recent years, we're still a long way from having a system that computer scientists and philosophers would universally agree possesses artificial general intelligence. Current AI models are restricted to very narrow domains and cannot automatically apply what they have learned to other areas.
Generative AI tools cannot express themselves freely through art, music, and writing: They simply produce an output from a given input, based on the probability distributions learned from their training data.
Whether the outcome turns out to be Skynet or HAL 9000, JARVIS or TARS, AGI is still far from being a reality, and may never become one in our lifetimes. That may well be a huge relief to many people, but it's also a source of frustration for countless others, and the race is well and truly on to make it happen. If you've been impressed or dismayed by the current level of generative AI, you've seen nothing yet.