Cambridge University philosopher: We may never know if AI has consciousness.
As funding pours into AGI research, a Cambridge scholar points out that humanity still cannot verify whether AI possesses consciousness, and urges a stance of agnosticism as regulation loosens.
Global capital is flowing into AGI research at an unprecedented pace, with tech giants and venture capitalists racing to pour money into computing power, models, and talent. The market is betting that artificial general intelligence will reshape productivity and the structure of capital returns.
However, earlier this month, University of Cambridge philosopher Tom McClelland cautioned in a paper published in the journal “Mind & Language” that science currently offers almost no evidence that AI possesses consciousness, and may not for a long time to come, a gap that should prompt a rethink of how resources are allocated.
The Black-Box Dilemma: Consciousness Research Has Barely Begun
McClelland pointed out that humanity has not even unraveled how the human brain turns neural activity into subjective experience, let alone how to analyze large language models built from trillions of parameters.
Functionalists currently hold that once computational complexity is sufficient, consciousness will naturally emerge; biological essentialists counter that consciousness is a product of carbon-based life. Both camps lack evidence, and the debate amounts to little more than a hypothesis-driven leap of faith.
Consciousness and Sentience: Two Conflated Concepts
In commercial promotion, companies often conflate “consciousness” with “sentience.” McClelland notes that consciousness refers only to the processing of and reaction to external information, whereas sentience involves the capacity for pleasure and pain, which bears on moral standing.
He cautioned that if AI is merely a computational system, the ethical risks are limited; but if future models turn out to be sentient, humanity will have to redraw the boundaries of moral responsibility.
Emotional Projection and Resource Misallocation
To boost user engagement, many technology companies now give chatbots a human-like tone, inviting users to project emotions onto them.
McClelland calls this tendency “existentially toxic”: society may misallocate resources because of it, since hype about AI consciousness carries ethical consequences for how research resources are distributed.
Regulatory Vacuum and the Liability Game
Against a backdrop of deregulation, the question of whether “AI has a soul” is easily spun by companies. When marketing calls for it, a firm can claim its model is self-aware; when the system malfunctions and causes harm, it can insist the product is merely a tool and try to dodge liability. McClelland calls on lawmakers to establish a unified testing framework that draws a clear line between risk and innovation.
Capital markets may be rolling out the red carpet for an “AGI awakening,” but until science can verify whether AI is sentient, openly admitting our ignorance and keeping a cautious distance may be the rational choice.