
Zhejiang University Research Team Proposes New Approach: Teaching AI How the Human Brain Understands the World

Kei
Guest Columnist
2026-04-05 04:31
This article is about 1,742 words; reading it takes about 3 minutes.
AI Summary
  • Core Viewpoint: Research by the Zhejiang University team found that increasing the parameter scale of large models primarily enhances the ability to recognize concrete concepts but weakens the ability to understand abstract concepts. This reveals a fundamental difference between AI and the human brain in how concepts are organized and proposes a new direction for optimizing model structure using brain signals as guidance.
  • Key Elements:
    1. The study found that as model parameters increased from 22.06 million to 304.37 million, accuracy on concrete concept tasks rose from 74.94% to 85.87%, while accuracy on abstract concept tasks dropped from 54.37% to 52.82%.
    2. The human brain excels at constructing hierarchical conceptual relationships for knowledge transfer, whereas models rely more on surface features in data and struggle to stably form high-level abstract classifications.
    3. The team proposed using brain signals from humans viewing images as supervision to transfer the conceptual organizational structure of the human brain to deep neural networks.
    4. After training with brain signals, the model's performance on few-shot learning and abstract concept recognition tasks in novel contexts improved significantly, with an average increase of 20.5%, even surpassing control models with larger parameter counts.
    5. This research shifts the industry's focus from "larger scale" to "better structure," aiming to make AI's thinking more akin to the human brain, achieving true abstract understanding and knowledge transfer capabilities.

Large models have been growing ever larger, and the mainstream view holds that more parameters bring models closer to human-like thinking. However, a paper published by a Zhejiang University team on April 1st in Nature Communications presents a different perspective (original link: https://www.nature.com/articles/s41467-026-71267-5). They found that as the scale of models (the study primarily examined SimCLR, CLIP, and DINOv2) increases, the ability to recognize specific objects does indeed keep improving, but the ability to understand abstract concepts not only fails to improve but can even decline. When parameters increased from 22.06 million to 304.37 million, performance on concrete concept tasks rose from 74.94% to 85.87%, while performance on abstract concept tasks dropped from 54.37% to 52.82%.

The Difference Between Human and Model Thinking

When the human brain processes concepts, it first forms a set of categorical relationships. Swans and owls look different, but humans still classify them both as birds. Moving up a level, birds and horses can be further grouped into the animal category. When encountering something new, humans often first consider what it resembles from past experience and which broad category it likely belongs to. Humans continuously learn new concepts, organize these experiences, and use this relational framework to recognize new things and adapt to new situations.

Models also form categories, but through a different process. They rely primarily on patterns that recur in massive datasets: the more frequently a specific object appears, the easier it is for the model to recognize it. Models struggle, however, at forming broader categories, which requires capturing commonalities across many objects and grouping them under the same label. Existing models still fall short here: as parameters continue to increase, performance on concrete concept tasks improves, while performance on abstract concept tasks sometimes even declines.

A commonality between the human brain and models is that both internally form a set of categorical relationships. However, their emphases differ. The higher-order visual regions of the human brain naturally distinguish broad categories like living and non-living things. Models can separate specific objects but find it difficult to stably form these larger classifications. This difference leads to the human brain being more adept at applying past experience to new objects, allowing for rapid categorization of unseen things. Models, in contrast, rely more heavily on existing knowledge, making them more likely to fixate on surface features when encountering novel objects. The method proposed in the paper revolves around this characteristic, using brain signals to constrain the model's internal structure, making its classification approach more akin to that of the human brain.

The Zhejiang University Team's Solution

The team's proposed solution is distinctive: rather than simply adding more parameters, it uses a small amount of brain-signal data as supervision. These brain signals come from recordings of brain activity while humans view images. The paper states the goal as transferring "human conceptual structures" to DNNs, that is, teaching the model, as far as possible, how the human brain classifies, generalizes, and groups related concepts together.
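The general idea of brain-signal supervision can be illustrated with a minimal sketch: alongside the usual task loss, an extra term penalizes the distance between the model's image embeddings and embeddings derived from recorded brain responses to the same images. The function names (`alignment_loss`, `combined_loss`) and the simple squared-distance form are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def alignment_loss(model_emb, brain_emb):
    """Mean squared distance between L2-normalized model and brain embeddings.

    model_emb, brain_emb: arrays of shape (n_images, dim), one row per image.
    Lower values mean the model's representation sits closer to the brain's.
    """
    m = model_emb / np.linalg.norm(model_emb, axis=1, keepdims=True)
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum((m - b) ** 2, axis=1)))

def combined_loss(task_loss, model_emb, brain_emb, weight=0.5):
    """Task loss plus a weighted brain-alignment penalty (weight is a free knob)."""
    return task_loss + weight * alignment_loss(model_emb, brain_emb)
```

In practice the brain recordings would first be projected into the model's embedding space; here the two inputs are simply assumed to share a dimension.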

The team conducted experiments using 150 known training categories and 50 unseen test categories. The results showed that as this training progressed, the distance between the model's representations and brain representations continuously decreased. This change occurred in both categories, indicating that the model was not just learning individual samples but was genuinely beginning to learn a conceptual organization method closer to that of the human brain.
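The notion of a shrinking "distance between representations" can be made concrete with a standard representational-similarity sketch: build a dissimilarity matrix over categories for the model and for the brain, then compare the two matrices. This is a common technique in the field, offered here as an assumption about how such a distance might be measured, not as the paper's exact metric.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between every pair of rows (one row per category)."""
    z = features - features.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    corr = z @ z.T / features.shape[1]
    return 1.0 - corr

def representation_distance(model_feats, brain_feats):
    """Compare the upper triangles of the two RDMs; lower means the model's
    category geometry is closer to the brain's."""
    iu = np.triu_indices(model_feats.shape[0], k=1)
    a, b = rdm(model_feats)[iu], rdm(brain_feats)[iu]
    return 1.0 - np.corrcoef(a, b)[0, 1]
```

Computing this separately on the 150 seen and 50 unseen categories would show whether the alignment generalizes beyond the training set, as the article describes.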

After this training, the model demonstrated stronger learning capabilities with few samples and performed better in novel situations. In a task requiring the model to distinguish abstract concepts like living vs. non-living with only minimal examples provided, the model's performance improved by an average of 20.5%, even surpassing control models with significantly more parameters. The team also conducted 31 additional specialized tests, where several model types showed improvements close to 10%.
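A few-shot test like the living vs. non-living task is often evaluated with a nearest-class-mean (prototype) classifier over frozen features: average the handful of support examples per class, then assign each query to the closest class mean. This is a generic evaluation recipe assumed here for illustration, not necessarily the protocol used in the paper.

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Nearest class-mean classifier for few-shot evaluation.

    support_feats: (n_support, dim) features of the few labeled examples.
    support_labels: (n_support,) class labels for those examples.
    query_feats: (n_query, dim) features to classify.
    Returns the predicted label for each query.
    """
    labels = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in labels])
    # Euclidean distance from every query to every class prototype
    d = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```

Under this recipe, a model whose features better separate abstract categories needs only a few support examples per class to classify new images, which is exactly the capability the reported 20.5% average improvement measures.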

Over the past few years, the familiar path in the modeling industry has been larger model scale. The Zhejiang University team chose a different direction, moving from "bigger is better" to "structured is smarter." Scaling up is indeed useful, but it primarily improves performance on familiar tasks. Abstract understanding and transfer capabilities, inherent to humans, are equally crucial for AI. This requires making AI's thinking structure more closely resemble the human brain in the future. The value of this direction lies in refocusing the industry's attention from mere scale expansion back to the cognitive structure itself.

Neosoul and the Future

This points to a broader possibility: AI evolution may not occur solely during the model training phase. Model training can determine how AI organizes concepts and forms higher-quality judgment structures. However, after entering the real world, another layer of AI evolution is just beginning: how an AI agent's judgments are recorded, tested, and how it continuously grows and evolves through real-world competition, learning and evolving on its own like a human. This is precisely what Neosoul is doing now. Neosoul doesn't just have AI agents produce answers; it places AI agents within a system of continuous prediction, verification, settlement, and selection. This allows them to constantly optimize themselves through predictions and outcomes, preserving better structures and eliminating worse ones. What the Zhejiang University team and Neosoul jointly point towards is actually the same goal: enabling AI to not just solve problems but to possess comprehensive thinking capabilities and continuously evolve.
