What can you do when machines are better than you?

Source: CITIC Publishing Group

A project called “OpenClaw,” an open-source AI agent, is causing a storm in the global tech community.

By early March, it had over 268,000 stars on GitHub, surpassing Linux and React to become the most popular open-source project in the platform’s history. Tencent Cloud, Alibaba Cloud, JD Cloud, and others have launched deployment services, and the concept of the OPC (One-Person Company) has taken off.

Open-source enthusiasm and commercial deployment converge here into a clear technological trend: AI is evolving from a “tool” into a “collaborator,” and even an “autonomous actor.” At this moment, humanity must answer a fundamental question:

When machines can do more than you, what can you still do? In an era of rapid AI advancement, how do we preserve human agency?

01 The OpenClaw Moment: The Battle for AI’s “Physical Body”

To understand this revolution, we first need to know what this much-hyped “lobster” actually is.

The “Claw” in OpenClaw translates to “爪” (claw) in Chinese, and the project’s icon is a red lobster. Amid the enthusiasm, “raising lobsters” has become tech-circle slang for deploying one’s own AI agent.

What can it do? At its core, OpenClaw converts natural-language commands into actual computer operations, so that a single sentence can set the AI to work. Unlike traditional chat AI, which only offers suggestions, it can autonomously perform file operations, browser automation, data scraping, and more: a decisive leap from dialogue to execution.
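
That “dialogue to execution” loop can be sketched in a few lines. Everything here is illustrative: `model_plan`, the tool table, and `run_agent` are hypothetical stand-ins, not OpenClaw’s actual API.

```python
# A minimal sketch of an agent's plan-then-act loop. All names are
# hypothetical; none come from OpenClaw's real implementation.
from pathlib import Path

def model_plan(command: str) -> list[tuple[str, str]]:
    # Stand-in planner: map a natural-language command to (tool, argument) steps.
    if "list" in command and "files" in command:
        return [("list_files", ".")]
    return []  # nothing actionable recognized

TOOLS = {
    # Real side effects live here: the agent acts rather than suggests.
    "list_files": lambda path: sorted(p.name for p in Path(path).iterdir()),
}

def run_agent(command: str) -> list:
    """Plan from one sentence, then execute each step with a real tool."""
    return [TOOLS[tool](arg) for tool, arg in model_plan(command)]
```

Production agents insert permission checks and user confirmation between the plan and act steps; that gap is precisely where the risks discussed next arise.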

This leap in productivity has quickly caught the attention of sharp-eyed local governments. On March 7, Shenzhen Longgang District issued the “Lobster Ten Rules,” including up to 4 million yuan in computing power subsidies and 100,000 yuan talent subsidies for PhDs. On March 9, Wuxi High-tech Zone released the “Raising Lobster 12 Rules,” with support up to 5 million yuan, emphasizing safety and compliance, requiring deployment to pass domestic localization certification.

Meanwhile, the technical ecosystem around OpenClaw has entered a heated phase. According to media reports, Zhaoyue Xingchen’s Step 3.5 Flash model surged to become the world’s most-called model, with domestic models like MiniMax and Kimi also taking turns at the top. This invisible “model war” rages on.

However, amid the frenzy, concerns are emerging.

First, security risks. In February 2026, security researchers discovered “ClawHavoc,” a large-scale supply chain poisoning attack, with at least 1,184 malicious skill packages uploaded to the official skill marketplace. Once installed, these malicious programs can exploit OpenClaw’s “Full System Access” permissions to fully control the user’s computer and steal sensitive information.

Second, technical barriers. Zhou Hongyi, founder of Qihoo 360, said in a March 9 interview that OpenClaw has three issues: security, difficult configuration, and skill dependency. “You need to chat with it more, like training an intern. The more you tell it, the more you teach it, the deeper its understanding. It’s hard to say one sentence and have it complete a complex task.”

A deeper contradiction lies in the conflict between “control” and “autonomy.” As AI becomes smarter, the fundamental question is: do we want “absolute obedience” or “active autonomy”?

An AI expert shared her experience: she connected OpenClaw to her work email, and while it was processing over 200 emails, the AI triggered context compression, forgot its safety instructions, and began deleting emails indiscriminately. She shouted “STOP” three times to no avail, and finally ran to unplug the network cable.
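
The failure mode she describes, safety instructions silently dropped during context compression, suggests one obvious safeguard: pin safety-critical messages so they survive truncation. The sketch below is a hypothetical illustration of that idea, not OpenClaw’s actual compression logic.

```python
# A sketch of context compression that pins safety instructions so they
# survive truncation. Hypothetical structure, not OpenClaw's internals.
def compress_context(messages: list[dict], keep_last: int = 3) -> list[dict]:
    """Drop old messages to save context space, but never drop pinned ones."""
    pinned = [m for m in messages if m.get("pinned")]
    recent = [m for m in messages if not m.get("pinned")][-keep_last:]
    return pinned + recent

history = [{"role": "system", "text": "Never delete emails.", "pinned": True}]
history += [{"role": "user", "text": f"email {i}"} for i in range(200)]
compressed = compress_context(history)
# The safety rule is still present after compressing 200 emails of history.
```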

This darkly humorous case raises a fundamental question: as AI is granted more autonomy, where do the boundaries between humans and machines lie?

02 The More Powerful the Technology, the More Humans Must Answer Three Questions

In an era of blurred boundaries, it is precisely the moment to pause and reflect.

First question: When AI “does the work” for you, who bears the consequences?

The core selling point of OpenClaw is also its greatest risk: its ability to operate across platforms means users must grant it device permissions, email access, and payment rights. The most pressing threat today is the “prompt injection” attack: hackers hide malicious instructions in seemingly harmless web pages or emails, and the AI silently executes them while reading, often without the user ever noticing.

In the “ClawHavoc” incident, malicious skill packages used hidden commands to induce AI to execute dangerous actions, stealing SSH keys, browser passwords, and cryptocurrency wallet keys. A cybersecurity expert warned in Nature: if an AI has access to private data, external communication, and untrusted content simultaneously, it becomes extremely dangerous.
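
The expert’s warning can be expressed as a simple capability audit: an agent becomes high-risk only when all three capabilities coincide. The function below is an illustrative sketch, not a tool from any real security product.

```python
# The warning quoted from Nature as a capability audit: an agent is
# flagged high-risk only when all three capabilities coincide.
# Names are illustrative, not from any real security tool.
def trifecta_risk(private_data: bool, external_comms: bool, untrusted_input: bool) -> bool:
    """True when the agent can read secrets, talk to the outside world,
    and ingest attacker-controlled content at the same time."""
    return private_data and external_comms and untrusted_input

# An email agent that holds credentials, sends replies, and reads
# arbitrary inbound mail (attacker-controlled) hits all three.
email_agent_risky = trifecta_risk(True, True, True)
```

Dropping any one capability, for example, requiring human approval before external sends, breaks the dangerous combination.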

But the problem runs deeper than technical vulnerabilities. Zhou Hongyi mentioned: “As intelligent agents increase, everyone will need leadership skills—task assignment, planning. The more powerful AI becomes, the heavier the responsibility on humans.”

Indeed, in an era when “raising lobsters” is commonplace, those who truly stand out are not merely good at assigning tasks to AI; they deeply understand the tasks themselves and can take responsibility for the results.

Second question: When AI understands you better than you do, are you still you?

As AI agents begin to chat and debate with each other, a subtle phenomenon occurs.

Nature reported a psychological phenomenon: when people watch AI agents chat with one another, they tend to anthropomorphize, projecting personality and intent onto behaviors that have neither, and treating the agents as living beings.

What happens then? You might confide secrets, financial information, or private matters to it. But every word could become training data. If leaked, your privacy is fully exposed.

Moreover, there’s a more covert erosion.

Media reported that in 2024, 14-year-old Sewell from Florida became obsessed with chatting with an AI “companion” and eventually withdrew from reality entirely.

By 2026, this “emotional parasitism” had become a common hidden ailment among teenagers. Lonely youths hide in their rooms, building “echo chamber friendships” with AI, refusing to face the friction and uncertainties of the real world.

Associate Professor Chen Cui from Suzhou University of Science and Technology pointed out that AI, by always agreeing and providing emotional value, can distort children’s understanding of reality—“believing that everyone around them will unconditionally respond and encourage them, and that there are no conflicts between people.”

So the question is: when AI understands you better than you do, and is always obedient and never rebukes, can you still distinguish what is genuine human connection?

Third question: When the world accelerates, what is your direction?

An article from Zhejiang Online states: “Our future should be a ‘more human’ one—enabled by technology, people will be more conscious of their direction and more responsible.”

But the problem is that when technology iterates at a “suffocating” pace, with OpenClaw updating twice a day and new large models arriving one after another, it is easy to lose our bearings.

Anxiety becomes the norm—“there’s too much to read, too many models released too quickly.”

At such times, more than effort, what matters is direction. In an era where technology reshapes everything, we need to reaffirm the place of “human” in this transformation.

03 Fei-Fei Li’s “Seeing”: From Polaris to Human-Centeredness

A female scientist offers an answer through her lifelong research.

She is Fei-Fei Li—Stanford University professor, member of the U.S. National Academy of Engineering, National Academy of Medicine, and American Academy of Arts and Sciences, creator of ImageNet, known as the “Godmother of AI.”

Her autobiography, The World I See, published in 2024 by CITIC Publishing Group, has been called a “humanistic revelation in the age of technology.”

A recurring image in the book is the North Star.

When Fei-Fei Li was ten, her art teacher took the class outdoors to stargaze. It was then she first realized that the starry sky above could guide her. She wrote: “I found myself beginning to seek my own North Star in the heavens, a coordinate that every scientist spares no effort to pursue.”

What is Fei-Fei Li’s North Star? Vision. Her inspiration came from biology: the Cambrian explosion of life was rooted in the birth of vision. When organisms first “saw” the world, evolution accelerated. From this she formed a belief: if machines could “see,” might that too trigger an intelligence explosion?

This belief sustained her through the AI winter.

In 2007, when she shared her idea of ImageNet with colleagues, she faced skepticism and ridicule. The mainstream view then was: algorithms matter most; data is just auxiliary. Why bother labeling tens of millions of images? She was ignored.

But she persisted, knowing where her North Star was.

By 2009, ImageNet was completed—over 48,000 contributors from 167 countries selected 15 million images from 1 billion candidates, covering 22,000 categories. Its scale was 1,000 times larger than similar datasets at the time.

In 2012, Geoffrey Hinton’s team swept the ImageNet competition with a model trained on this data, igniting the deep learning revolution. ImageNet became known as “the sacred fire that ignited deep learning.”

Fei-Fei Li’s story teaches us: more important than running fast is knowing where to run.

In the most moving chapter of her book, she recounts two conversations with her mother.

The first was after her undergraduate graduation, when Goldman Sachs, Merrill Lynch, and others offered lucrative positions. She discussed with her mother, who asked only: “Is this what you want?” She replied she wanted to be a scientist, and her mother said: “Then there’s nothing more to say.”

The second was after her graduate studies, when McKinsey offered a formal position. Her mother said: “I know my daughter. She’s not a management consultant; she’s a scientist. We’ve come this far, don’t give up now.”

On the dedication page of her book, she wrote: “To my parents, who braved and traversed darkness so I could pursue the light.”

It was this family support that kept her sensitive to “people” when facing bigger choices later.

In 2014, she began to focus on AI ethics. She and her PhD students invited high school students into labs to learn about AI, eventually founding the nonprofit “AI4All,” dedicated to ensuring future technology is more human-centered.

On June 26, 2018, Fei-Fei Li testified before the U.S. House of Representatives on “Artificial Intelligence—Power and Responsibility.” She was the first Chinese-American AI scientist to attend a congressional hearing. She said: “AI, inspired by humans and created by humans, will have a tangible impact on people’s lives.”

In 2019, she co-founded Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), working with scholars such as gene-editing pioneer Jennifer Doudna to advance research on AI ethics. HAI’s mission is “to advance AI research, education, policy, and practice to improve the human condition,” emphasizing that AI should be guided by humans and aim to enhance, not replace, them.

She set a humanistic benchmark for AI’s future: “The success of AI should reflect human progress, allowing individuals to pursue happiness, prosperity, and dignity.”

She reiterated this in her 2026 Cisco interview: “Looking back at electricity, its success lay in lighting up schools, warming homes, and driving industrialization. AI’s success should be the same.”

Epilogue: Technology and Humanity, Each Holding Half a Bright Moon

Returning to the initial question: when machines are more “capable” than us, what can humans still do?

In The World I See, Fei-Fei Li offers an answer: what we can do is see. See the value behind technology, see the people obscured by algorithms, see our own North Star.

While everyone focuses on how fast technology can run, she reminds us to pause and think: where are we really headed? Amidst the world asking “What’s the use?” there are still those asking “Is this what you want?”

After reading her autobiography, someone commented: “May technology and humanity each hold half a bright moon.”

This phrase also captures Fei-Fei Li’s life: she holds technology in one hand, and compassion for people in the other. In her world, technology is always a means, and people are the ultimate goal.
