
Why Gen AI Will Never Achieve Human-Level Intelligence

  • Writer: Dr Peter Catt
  • Aug 10
  • 4 min read

Updated: Aug 10

The quest for artificial general intelligence (AGI), AI with human-like thinking, has reached a turning point. While large language models (LLMs) like ChatGPT dazzle with text generation, many AI experts argue that generative AI alone cannot achieve human-level intelligence. The solution lies in neurosymbolic AI, a hybrid approach merging the pattern-spotting power of neural networks with the logical reasoning of symbolic systems. Tech giants like DeepMind are embracing this model, and industries such as finance, healthcare, and autonomous systems are adopting it. Neurosymbolic AI offers not just smarter systems but also more trustworthy and explainable ones, ensuring decisions are transparent and verifiable across critical applications.



Gartner® Hype Cycle for AI 2025

What Is Neurosymbolic AI?

Neurosymbolic AI blends two AI types: neural networks, which mimic human intuition by spotting patterns (for example, recognising faces), and symbolic systems, which apply logical rules, like a chess computer. It’s like combining fast, instinctive thinking with slow, deliberate reasoning, echoing psychologist Daniel Kahneman’s model of fast (System 1) and slow (System 2) thinking.


Neural networks learn from massive datasets, identifying patterns without explicit rules. Symbolic systems rely on clear rules and explicit knowledge, like a rulebook for problem-solving. Neurosymbolic AI connects these: neural networks handle perception (such as understanding speech), while symbolic systems tackle logical tasks (like planning). An integration layer links the two, leveraging their strengths to offset their weaknesses.
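To make the architecture concrete, here is a deliberately minimal sketch of the pattern just described. The "neural" component is a stub (a real system would use a trained model), and all names here (`perceive`, `KnowledgeBase`, `query_capital`) are hypothetical, invented purely for illustration:

```python
def perceive(utterance):
    """Stand-in for a neural model: maps raw text to structured facts.
    A real system would run a trained network here."""
    if "paris" in utterance.lower():
        return [("capital_of", "Paris", "France")]
    return []

class KnowledgeBase:
    """Symbolic component: explicit facts queried with a logical rule."""
    def __init__(self):
        self.facts = set()

    def add(self, facts):
        self.facts.update(facts)

    def query_capital(self, country):
        # Rule: capital_of(City, Country) -> City is the capital of Country.
        for rel, city, c in self.facts:
            if rel == "capital_of" and c == country:
                return city
        return None

# Integration layer: neural output becomes symbolic input.
kb = KnowledgeBase()
kb.add(perceive("Paris is the capital of France."))
print(kb.query_capital("France"))  # -> Paris
```

The point of the design is the hand-off: perception produces structured facts, and every answer the symbolic side gives can be traced back to a fact and a rule, which is where the explainability claim comes from.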


Why Pure Neural Networks Fall Short

Neural networks, the backbone of LLMs, have critical flaws. They often hallucinate, producing convincing but incorrect outputs because they rely on statistical patterns, not true understanding. For instance, an LLM might know Paris is in France from training data but not grasp the concept of a capital city or the logical ties between countries and capitals.


Apple’s research shows this clearly: when irrelevant details were added to maths problems, LLMs’ performance dropped by up to 65%, exposing their reliance on pattern matching over reasoning. As AI expert Gary Marcus puts it, they’re stuck in the realm of correlations. Neurosymbolic AI addresses this by adding logical rules. DeepMind’s AlphaGeometry solved 25 out of 30 olympiad geometry problems (compared to 10 for earlier systems) by combining neural pattern recognition with symbolic logic, enabling it to derive proofs systematically, unlike LLMs that guess based on patterns. These systems also provide explainability, offering clear reasoning chains essential for regulated industries like finance and healthcare. Plus, they need far less training data, sometimes just 1% of what pure neural networks demand.


Why Symbolic AI Alone Isn’t Enough

Symbolic AI uses explicit rules and symbols, like a digital rulebook for every scenario. Projects like Cyc, which spent decades coding millions of rules, faced the knowledge acquisition bottleneck: manually encoding knowledge is slow, costly, and inflexible. Symbolic systems also struggle with sub-symbolic tasks, like recognising faces or interpreting tone, where human knowledge is hard to define. For example, they cannot easily process raw sensor data from autonomous vehicles, limiting their adaptability in dynamic environments. They falter with uncertain or incomplete data, which makes them impractical for real-world use. Neural networks solve this by automatically detecting patterns and adapting to uncertainty, excelling at processing messy, real-world data like images or speech.
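The brittleness described above is easy to demonstrate with a toy example. The rule table and inputs below are invented for illustration only; the point is that a purely symbolic lookup has no tolerance for noise:

```python
# Explicit, hand-coded knowledge: the "digital rulebook".
RULES = {"paris": "France", "tokyo": "Japan"}

def symbolic_lookup(city):
    """Exact matching only: no graceful handling of noisy input."""
    return RULES.get(city.lower())

print(symbolic_lookup("Paris"))  # -> France
print(symbolic_lookup("Par1s"))  # -> None: one noisy character breaks it
```

A neural component would instead learn a similarity over inputs, so "Par1s" or a misheard audio clip could still map to the right entity, which is exactly the gap the hybrid approach is meant to close.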


Real World Impact of Neurosymbolic AI

Neurosymbolic AI is already making waves. Elemental Cognition, founded by IBM Watson’s creator David Ferrucci, powers the booking system of the oneworld airline alliance. It handles millions of flight combinations, boosting conversion rates fivefold while ensuring 100% accuracy, a must when errors cost thousands. Its setup uses neural networks for language processing and symbolic algorithms for reliable optimisation. Google’s recent neurosymbolic efforts in robotics integrate neural vision systems with symbolic task planning, enabling robots to navigate complex environments with precision. IBM’s neurosymbolic toolkit spans eight tool categories, reflecting their belief in its potential for AGI, and even OpenAI’s use of code interpreters, which boosts reasoning, aligns with neurosymbolic principles, as Gary Marcus notes.


Expert Consensus

Gary Marcus has championed neurosymbolic AI for over 20 years, arguing in his 2001 book The Algebraic Mind that it’s essential for AGI. His view is gaining support. Yann LeCun, once dismissive of symbolic methods, now says symbolic manipulation is vital for human like AI. Yoshua Bengio’s System 2 Deep Learning focuses on deliberate reasoning. 

LLMs excel at predicting the next word based on patterns but lack true understanding of meaning or the world. Apple’s research shows they struggle with irrelevant data, revealing their reliance on pattern matching over reasoning. Marcus argues that abstract knowledge and systematic reasoning, core to human intelligence, require tools for manipulating abstractions, which LLMs lack. For instance, LLMs cannot reliably solve multi step problems like scheduling or resource allocation without explicit logical frameworks. Neurosymbolic AI fills this gap with compositional understanding, breaking problems into parts and solving them systematically by combining neural intuition with symbolic logic for robust, reliable reasoning.
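One common way to combine neural intuition with symbolic logic for multi-step problems like scheduling is a propose-and-verify loop: a fast, fallible proposer guesses candidates, and a symbolic checker enforces hard constraints. The sketch below is a hedged illustration of that pattern, not any particular product's method; the "neural" proposer is simulated with random choice, and the task list and constraint are invented:

```python
import random

TASKS = ["A", "B", "C"]

def satisfies_constraints(order):
    """Symbolic component: an explicit, checkable rule (A before C)."""
    return order.index("A") < order.index("C")

def neural_proposer():
    """Stand-in for a learned model: fast but unverified guesses."""
    order = TASKS[:]
    random.shuffle(order)
    return order

def solve(max_tries=100):
    # Integration: keep proposing until the symbolic check passes.
    for _ in range(max_tries):
        candidate = neural_proposer()
        if satisfies_constraints(candidate):
            return candidate
    return None

print(solve())  # a valid order, e.g. ['A', 'B', 'C']
```

The appeal of the split is that the proposer can be wrong without consequence: any schedule that reaches the user has been verified against explicit rules, giving the reliability that pattern matching alone cannot.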


The Path Ahead

Gartner’s 2024 AI Hype Cycle highlights neurosymbolic AI’s rise. In healthcare, it pairs neural pattern recognition with symbolic medical knowledge for accurate, explainable diagnoses. In finance, it blends statistical analysis with regulatory compliance. It’s also more energy efficient, as symbolic processing tends to use CPUs rather than power-hungry GPUs, partially addressing AI’s environmental impact. For businesses, neurosymbolic AI means more capable and transparent systems. In autonomous systems, it ensures safer decision-making by combining real-time perception with logical planning. Startups are building commercial neurosymbolic tools, while tech giants integrate them into existing platforms, reflecting the reality that human intelligence combines learning from experience with abstract reasoning, which positions neurosymbolic AI as the key to advancing towards AGI. As tech giants and startups adopt this approach, the future of AI lies in this hybrid model, not in neural or symbolic systems alone.
