Artificial General Intelligence Isn't As Close As You Think

A closer examination shows that the latest systems, such as DeepMind's much-hyped Gato, are still hamstrung by the same long-standing problems.

To the average person, the field of artificial intelligence must look as if it is making immense progress. According to press releases and some of the more gushing media accounts, OpenAI's DALL-E 2 can create spectacular images from any text; another OpenAI system called GPT-3 can talk about just about anything; and Gato, a system released in May by DeepMind (a division of Alphabet), appeared to perform well on every task the company could throw at it.

One of DeepMind's top executives even boasted that "The Game is Over!" in the quest for artificial general intelligence (AGI): AI with the flexibility and resourcefulness of human intelligence. And Elon Musk has said that he would be surprised if we do not have artificial general intelligence by 2029.

Don't be fooled. Machines may someday be as smart as people, and perhaps even smarter, but the game is far from over. There is still an immense amount of work to be done to make machines that can truly comprehend and reason about the world around them. What we need right now is less posturing and more basic research.

To be sure, AI is making progress in some respects: synthetic images look more and more realistic, and speech recognition works more reliably in noisy environments. But we are still light-years away from general-purpose, human-level AI that can comprehend the true meanings of articles and videos, or deal with unexpected obstacles and interruptions. We are still stuck on precisely the same challenges that academic scientists (including me) have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances.

Consider how the recently lauded Gato, billed as a jack of all trades, described an image of a pitcher throwing a baseball. The system returned three answers: "A baseball player pitching a ball on top of a baseball field," "A guy tossing a baseball at a pitcher on a baseball field," and "A baseball player at bat and a catcher in the dirt during a baseball game." The first answer is correct, but the other two hallucinate additional players who are nowhere to be seen in the image.

The system has no idea what is actually in the picture, as opposed to what is typical of roughly similar images. Any baseball fan would immediately recognize that this is the pitcher who has just thrown the ball, not the other way around, and although we would expect a catcher and a batter to be nearby, they obviously do not appear in the image.
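Readers who want to see this failure mode firsthand can probe a publicly available captioning model in the same spirit. The short sketch below is an assumption on my part rather than anything DeepMind released: it uses the open source BLIP captioning model through Hugging Face's transformers pipeline, and the file name "pitcher.jpg" is a hypothetical stand-in for a photo like the one described above.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# public BLIP captioning checkpoint (not Gato, whose weights are not public).
# "pitcher.jpg" is a hypothetical local photo of a pitcher mid-throw.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Sample a few captions for the same image; any mention of players who are not
# actually in the frame (a batter, a catcher) is the kind of hallucination
# described above.
for _ in range(3):
    result = captioner("pitcher.jpg", generate_kwargs={"do_sample": True, "top_p": 0.9})
    print(result[0]["generated_text"])
```

The point is not the particular model: fluent, grammatical captions are no guarantee that a system has grounded its words in what the picture actually shows.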

When systems like DALL-E make mistakes, the results can be amusing, but other AI errors create serious problems. To take another recent example, a Tesla on Autopilot drove directly toward a human worker carrying a stop sign in the middle of the road, slowing down only when the human driver intervened.

The system could evidently recognize humans on their own and stop signs in their usual locations, but when confronted with the unfamiliar combination of the two, which put the stop sign in a new and unusual position, it failed to slow down.

Part of the irony here is that the biggest AI research teams are no longer found in universities, where peer review used to be the norm, but in corporations. And, unlike universities, corporations have no incentive to play fair. Rather than submitting their splashy new papers to academic scrutiny, they have taken to publication by press release, wooing journalists and sidestepping the peer review process. We know only what the companies want us to know.

The software industry has a word for this kind of strategy: demoware, software designed to look good in a demo but not necessarily good enough for the real world. Often, demoware becomes vaporware, announced for shock and awe in order to discourage competitors but never released at all.

Chickens do eventually come home to roost, though. Cold fusion may have sounded great, but you still can't buy it at the mall. The likely cost for AI is a winter of deflated expectations. Too many products have been demoed, publicized, and never delivered, including self-driving cars, automated radiologists, and all-purpose digital agents.

For now, investment dollars keep pouring in on the promise of the self-driving car (who wouldn't want one?), but if the core problems of reliability and coping with outliers are not resolved, that investment will dry up. We will be left with powerful deepfakes, enormous networks that emit enormous amounts of carbon, and solid advances in machine translation, speech recognition, and object recognition, but not much else to show for all the hype.

Deep learning has advanced machines' ability to recognize patterns in data, but it suffers from three major limitations. The patterns it learns are superficial, not conceptual; the results it produces are hard to interpret; and those results are difficult to use in the context of other processes, such as memory and reasoning. As Harvard computer scientist Leslie Valiant has argued, the central challenge going forward is to unify learning and reasoning. You can't deal with a person carrying a stop sign if you don't really understand what a stop sign even is.

For now, we are stuck in a "local minimum" in which companies pursue benchmarks rather than foundational ideas, eking out small improvements with the technologies they already have rather than pausing to ask more fundamental questions. We need more people asking basic questions about how to build systems that can learn and reason at the same time, and fewer chasing flashy, straight-to-the-media demos. Instead, engineering practice today is well ahead of scientific understanding, with engineers working more on applying tools that are not fully understood than on developing new tools and a clearer theoretical foundation. That is why basic research remains so important.