Researchers showed that large language models use a small, specialized subset of parameters to perform Theory-of-Mind reasoning, despite activating their full network for every task.
Imagine you're watching a movie in which a character puts a chocolate bar in a box, closes the box, and leaves the room. Another person, who has stayed in the room, moves the bar from the box to a desk drawer.
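The scene above is the classic setup of a false-belief test. As a minimal sketch, here is how such a scenario might be turned into a text probe for a language model; the character names and the exact wording are illustrative assumptions, not the researchers' actual stimuli.

```python
# Illustrative false-belief probe (names "Anna" and "Ben" are
# hypothetical; this is not the study's actual prompt).
scenario = (
    "Anna puts a chocolate bar in a box, closes the box, and leaves the room. "
    "While she is away, Ben moves the bar from the box to a desk drawer. "
    "Anna comes back."
)
question = "Where will Anna look for the chocolate bar first?"

prompt = f"{scenario}\n{question}"
print(prompt)

# A reader (or model) with Theory-of-Mind should answer "the box":
# Anna's belief about the bar, not its true location, drives her search.
```

The correct answer hinges on tracking what Anna believes rather than what is actually true, which is exactly the capacity the study probes in LLMs.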
Humans have the cognitive capacity to infer and reason about the minds and thoughts of other people. Our brains are very good at this, far better than large language models (LLMs) are. Although LLMs ...
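The headline finding is that only a small, specialized subset of parameters appears to carry Theory-of-Mind reasoning, even though the whole network fires on every input. As a toy illustration of that idea (not the paper's method), the sketch below ablates a small versus a large random fraction of a toy weight matrix and compares how much the output is perturbed; all names and sizes are assumptions for the demo.

```python
import numpy as np

# Toy illustration: zeroing a small, random 5% of weights usually
# perturbs the output far less than zeroing 50% -- unless that small
# subset happens to be the critical one, which is the intuition behind
# locating a specialized parameter subset.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # toy "model" weights
x = rng.normal(size=8)        # toy input

def ablate(W, frac, rng):
    """Return a copy of W with a random fraction of entries zeroed."""
    W2 = W.copy()
    mask = rng.random(W.shape) < frac
    W2[mask] = 0.0
    return W2

baseline = W @ x
sparse_ablated = ablate(W, 0.05, np.random.default_rng(1)) @ x
dense_ablated = ablate(W, 0.50, np.random.default_rng(1)) @ x

print(np.linalg.norm(baseline - sparse_ablated))
print(np.linalg.norm(baseline - dense_ablated))
```

The study's interesting twist is the reverse case: knocking out the right sparse subset can selectively damage one ability while leaving the rest of the network's behavior largely intact.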