Why Did the Financial Industry Hesitate This Time When Everyone Was "Farming Lobsters"?
"Have you been raising lobsters?"
Recently, an AI agent nicknamed the "lobster," OpenClaw, exploded in popularity overnight, reigniting the AI industry and lifting AI-concept stocks. The financial industry's enthusiasm for this "lobster," however, has gone no further than AI-sector fund companies watching the market trend; none are actively deploying it or committing real investment.
Looking back over the past few years, the market has seen a new AI story emerge every so often: a year ago it was DeepSeek, a few months ago the Doubao Phone, and now this nationwide sensation, OpenClaw. AI technology keeps iterating and its applications keep getting smarter. Yet the financial industry, once hailed as the best proving ground for AI applications, has stayed on the sidelines throughout the ongoing "AI fever."
The contrast in attitude is stark. A year ago, after general-purpose models like DeepSeek went live, banks and other financial institutions said they would actively follow up and deploy them. Yet at the end of last year, with the Doubao Phone, and earlier this year, with the surge of OpenClaw, many banks collectively declined: multiple banks banned the use of the Doubao Phone with their apps, and on OpenClaw, several industry insiders told Beike Finance simply, "Not mature" and "We will not follow."
Why has the financial industry's attitude shifted so dramatically? Can the financial sector still serve as a proving ground for AI applications? The answer comes down to one word: security.
The financial industry, and banking in particular, handles vast amounts of customer information and transaction data, leaving no room for error. In any domain involving funds, customer data, and core transactions, security and compliance are non-negotiable foundations. Although DeepSeek suffers from "hallucination" interference, its strengths in text processing and its reduced computational load can help banks improve operational efficiency without posing significant security risks to core banking operations. AI agents like OpenClaw, by contrast, appear to loosen that "security and compliance" bottom line.
Recently, the National Internet Emergency Center issued a “Risk Warning on the Safe Use of OpenClaw,” pointing out that such intelligent agents typically require high system permissions during operation, such as access to local file systems, reading environment variables, calling external APIs, and installing extensions. If default configurations lack necessary security restrictions, attackers exploiting vulnerabilities could gain full control of the system, leading to data leaks or business system failures.
It is understood that the "lobster," as a locally running AI agent, makes autonomous decisions and calls system resources during operation, and deployments frequently expose problems such as direct exposure to the public internet, running with administrator privileges, and storing keys in plaintext. Its trust boundaries are blurred, and many of the skill packages on the market still lack rigorous review, posing significant risks.
Online, some users have reported that during use, OpenClaw could access and act on sensitive credit card information stored in plaintext. Even users who upgrade to the latest version remain exposed to attack unless they take targeted precautions.
Applications with such high permissions, weak boundaries, no user prompts, and unrestricted data access would pose enormous risks if deployed widely in the financial industry. Banks that regard security as their lifeline will not experiment with these AI agents unless safety can be guaranteed.
In fact, AI is already widely used in banking, mainly in auxiliary roles such as document processing, AI customer service, and AI-assisted collections; AI systems also support credit risk control. However, because of risks like hallucination, banks avoid relying on AI for complex, high-stakes tasks.
That said, the development of AI agents is a clear trend. To land in core financial operations, AI must operate under strict security "shackles": clear permission boundaries, minimal data collection, and guaranteed protection of financial information. Small-scale initial testing should be confined to low-risk, non-core scenarios; models can then be deeply customized and privately deployed, with a comprehensive AI governance system established to control risk at the source. Only then can a bank decide whether to expand AI into its core business and scenarios.
In short, judging by this lobster's performance so far, it is "hot" but not yet "mature." For AI applications to truly land in the financial industry, in banks for example, there is still a long way to go before they can be integrated into the ecosystem.
Beijing News Beike Finance Reporter Jiang Fan
Editor Wang Jinyu
Proofreader Liu Baoqing