From the Frontiers of Visual AI to Spatial Intelligence: Fei-Fei Li's Vision for the Next Era

Fei-Fei Li, the Stanford University professor often called the “Godmother of AI,” didn’t anticipate how rapidly artificial intelligence would transform society. In a recent wide-ranging discussion, she reflected on a career spanning 25 years and shared her perspective on where this civilization-level technology is heading—a direction she believes points unmistakably toward spatial intelligence.
The Unexpected Magnitude of AI’s Rise
Asked whether AI’s explosive mainstream adoption astonished her, Li acknowledged a disconnect between her long immersion in the field and its current trajectory. “I never expected it to become so immense,” she said. The depth and breadth of AI’s impact on nearly every facet of human life—work, well-being, and future prospects—still catches her off guard. What distinguishes this moment isn’t merely technological power but pervasiveness: everyone on the planet will experience AI’s influence in some form.
This wasn’t always obvious. When Li and her generation of researchers built ImageNet in the late 2000s, the landscape was entirely different: graduate students typically worked with datasets containing just four to twenty object categories. ImageNet was a quantum leap, with 22,000 object categories and 15 million labeled images, and it directly catalyzed the deep learning revolution that powers today’s applications.
A Double-Edged Tool Requiring Human Stewardship
Li consistently frames technology through a balanced lens: transformative but inherently dual-natured. Throughout history, human-made tools have predominantly served beneficial purposes, yet deliberate misuse and unintended consequences remain ever-present risks. She stresses that responsibility must accompany capability, particularly when that capability is concentrated in few hands.
“Personally, I hope this technology can become more democratized,” Li emphasized, advocating for broader access and influence over AI’s development. She argues that democratization doesn’t diminish the need for oversight; rather, it distributes responsibility across individuals, enterprises, and society as a whole.
Spatial Intelligence: The Logical Next Frontier
Today, Li serves as co-founder and CEO of World Labs, a startup valued at $1.1 billion and dedicated to pioneering what she identifies as AI’s next critical dimension: spatial intelligence. While large language models dominate contemporary discourse, she contends that understanding three-dimensional space—how objects move, how agents interact with environments, and how machines perceive depth and spatial relationships—deserves equivalent prominence.
“Spatial intelligence is AI’s ability to understand, perceive, reason, and interact with the world,” Li explained. This represents the natural continuation of visual intelligence work, which focused on passive information reception. Evolution teaches us that seeing and moving are inseparable; intelligence itself is inseparable from action.
Marble, a model recently showcased by World Labs, exemplifies this direction. The system generates three-dimensional environments from simple text prompts or photographs, enabling designers to ideate rapidly, game developers to source 3D scenes, and robots to train through simulation. The educational applications extend even further: imagine Afghan girls attending virtual classrooms, or elementary students exploring cellular structures by virtually walking inside a cell to observe nuclei and enzymes firsthand.
Confronting Technology’s Labor Disruption
Li doesn’t minimize concerns about employment. She acknowledges that AI will profoundly reshape the labor landscape, citing concrete examples like Salesforce’s transfer of 50% of customer service roles to AI systems. However, she contextualizes this within historical patterns. Every major technological leap—steam engines, electricity, computing, automobiles—created painful transitions alongside eventual job restructuring. The contemporary response must be equally nuanced: individuals must commit to continuous learning, while enterprises and society bear complementary responsibilities.
Superintelligence: Governance, Not Inevitability
Regarding Geoffrey Hinton’s warning about a 10-20% extinction risk from superintelligent AI, Li respectfully disagrees with the framing. She doesn’t dismiss the concern outright but redirects it toward human agency. “If humanity really faces a crisis, it will be because of our own mistakes, not the machines,” she asserted. Rather than viewing superintelligence as an autonomous threat, she poses a more fundamental question: Why would humanity collectively permit such a scenario?
This perspective emphasizes international governance, responsible development practices, and global regulatory frameworks—mechanisms still embryonic in their current form but essential to cultivate. Li advocates for pragmatic oversight at the international level rather than resigned acceptance of technological determinism.
Energy, Renewables, and Realistic Pragmatism
The question of whether massive data centers will trigger ecological disaster prompted Li to distinguish between current energy sourcing and technological inevitability. While present-day facilities predominantly rely on fossil fuels, she argues that renewable energy innovation and policy restructuring can reshape this equation. Countries establishing large data center infrastructure have opportunities to simultaneously invest in cleaner energy systems—a silver lining within a challenging problem.
The Enduring Importance of Human-Centered Values
Li’s most reflective comments concern education and child development in an AI-saturated world. Rather than counseling anxiety-driven career pivots, she advocates cultivating timeless human qualities: curiosity, critical thinking, creativity, honesty, and diligence. Parents shouldn’t obsess over whether their children study computer science; instead, they should nurture agency and dignity while attending to each child’s aptitudes and interests.
She emphasizes a principle both simple and profound: don’t use tools for laziness or harm. Learning mathematics isn’t about getting answers from a large language model; it’s about developing the capacity to reason. Likewise, the authenticity concerns surrounding AI-generated images, voices, and text reflect not merely technological challenges but communication failures that run through the broader social media era.
A Global Citizen’s Responsibility
Li’s personal journey—immigrating to the United States at 15, navigating language barriers, managing her family’s dry-cleaning shop while pursuing education, benefiting from mentors like her math teacher—informs her perspective on responsibility and resilience. Today, occupying roles as both Stanford professor and AI startup CEO, she recognizes that her platform carries weight. “The initiative should be in human hands,” she insisted. “The initiative doesn’t lie with machines, but with ourselves.”
This conviction shapes everything her organization undertakes: creating transformative technology while wielding it responsibly. It’s neither techno-utopianism nor dystopian alarmism, but pragmatic centrism grounded in scientific rigor and human values. In an era when AI capabilities expand at an almost incomprehensible pace, Fei-Fei Li remains convinced that human wisdom, governance, and ethical commitment are humanity’s greatest resources.