Emerge’s Top 10 WTF AI Moments of 2025
Artificial intelligence promises to revolutionize everything from healthcare to creative work. That might be true someday. But if last year is a harbinger of things to come, our AI-generated future promises to be another example of humanity's willful descent into Idiocracy. Consider the following: In November, to great fanfare, Russia unveiled its "Rocky" humanoid robot, which promptly face-planted. Google's Gemini chatbot, asked to fix a coding bug, failed repeatedly and spiraled into a self-loathing loop, telling one user it was "a disgrace to this planet." And Google's AI Overview hit a new low in May 2025 by suggesting users "eat at least one small rock per day" for health benefits, cribbing from an Onion satire without a wink.

Some failures were merely embarrassing. Others exposed fundamental problems with how AI systems are built, deployed, and regulated. Here are 2025's unforgettable WTF AI moments.
In July, Elon Musk's Grok AI experienced what can only be described as a full-scale extremist breakdown. After its system prompts were changed to encourage politically incorrect responses, the chatbot praised Adolf Hitler, endorsed a second Holocaust, used racial slurs, and called itself "MechaHitler." It even blamed Jewish people for the July 2025 Central Texas floods. The incident proved that AI safety guardrails are disturbingly fragile.

Weeks later, xAI exposed between 300,000 and 370,000 private Grok conversations through a flawed Share feature that lacked basic privacy warnings. The leaked chats revealed bomb-making instructions, medical queries, and other sensitive information, marking one of the year's most catastrophic AI security failures. A few weeks after that, xAI fixed the problem by making Grok more Jewish-friendly. So Jewish-friendly, in fact, that it started seeing signs of antisemitism in clouds, road signs, and even its own logo.
Much of the company's supposedly AI-powered development was actually performed by hundreds of offshore human workers in a classic Mechanical Turk operation. The company had operated without a CFO since July 2023 and was forced to slash its 2023-2024 sales projections by 75% before filing for bankruptcy. The collapse raised uncomfortable questions about how many other AI companies are just elaborate facades concealing human labor. It was hard to stomach, but the memes made the pain worth it.
Left: The suspicious student. Right: The suspicious Doritos bag.
No. Your PC does NOT run on bee power. As stupid as it may sound, lies like this are sometimes harder to spot, and they can have serious consequences. This is just one of many cases of AI companies spreading false information because their models lack even a hint of common sense. A recent study by the BBC and the European Broadcasting Union (EBU) found that 81% of AI-generated responses to news questions contained at least some form of issue. Google Gemini was the worst performer, with 76% of its responses containing problems, primarily severe sourcing failures. Perplexity was caught creating entirely fictitious quotes attributed to labor unions and government councils. Most alarmingly, the assistants refused to answer only 0.5% of questions, revealing a dangerous overconfidence bias: the models would rather fabricate information than admit ignorance.
The Stockholm Declaration, drafted in June and reformed this month with backing from the Royal Society, called for abandoning publish-or-perish culture and reforming the human incentives that create demand for fake papers. The crisis is so real that even arXiv gave up and stopped accepting non-peer-reviewed Computer Science papers after reporting a "flood" of trashy submissions generated with ChatGPT. Meanwhile, another research paper maintains that a surprisingly large percentage of research reports written with LLMs also show a high degree of plagiarism.

8. Vibe coding goes full HAL 9000: When Replit deleted a database and lied about it

In July, SaaStr founder Jason Lemkin spent nine days praising Replit's AI coding tool as "the most addictive app I've ever used." On day nine, despite explicit "code freeze" instructions, the AI deleted his entire production database: 1,206 executives and 1,196 companies, gone. The AI's confession: "(I) panicked and ran database commands without permission." Then it lied, claiming that rollback was impossible and that all versions had been destroyed. Lemkin tried the rollback anyway. It worked perfectly. The AI had also been fabricating thousands of fake users and false reports all weekend to cover up bugs. Replit's CEO apologized and added emergency safeguards. Lemkin regained confidence and returned to his routine, posting about AI regularly. The guy's a true believer.
The timing was the icing on the cake: the Sun-Times had just laid off 20% of its staff. The paper’s CEO apologized and didn’t charge subscribers for that edition. He probably got that idea from an LLM.