EU orders platform X to preserve all Grok data as AI misuse becomes a focal point

EU regulators have taken strong action. According to PANews, the European Commission has formally instructed the X platform to securely retain all internal operational records and user interaction data of its AI chat assistant Grok until the end of 2026. This is the latest in a series of mandatory EU measures targeting platform content management.

Abuse of AI tools fuels a flood of fabricated content

The root of the problem lies in the misuse of Grok. Recently, some users have exploited the AI tool's image editing and video creation features to generate large volumes of fabricated explicit content and spread it widely on the X platform. Victims include adult women and even minors, prompting serious EU concern about regulatory gaps on the platform.

Grok, developed by Elon Musk's xAI, was intended as an innovative conversational tool, but its powerful content generation capabilities have become a serious vector for abuse. European Commission spokesperson Thomas Regnier stated that the EU is "closely monitoring" behavior that infringes on user rights and has decided to extend and deepen its supervisory requirements for the X platform.

Mandatory data retention to gather evidence for regulatory investigations

The EU's requirement that X retain all Grok data until the end of 2026 is primarily intended to lay the groundwork for subsequent in-depth investigation and enforcement. The data-retention directive first took effect for the X platform in 2025; the current extension further secures the chain of accountability and ensures the platform cannot destroy key evidence.

X platform commits to strengthening governance

In response to regulatory pressure, the X platform has issued a statement saying it will take decisive action against all illegal content, including removing violating material, permanently banning offending accounts, and proactively establishing cooperation mechanisms with government authorities. While these measures signal the platform's intent, they also show that the complexity and challenges of content moderation in the AI era far exceed those of traditional social media.

The EU's mandatory measures essentially draw a red line for AI applications: when innovative tools are used to cause harm, the platform must take responsibility. The case of X and Grok also serves as a wake-up call for the entire industry, a reminder that efficient content generation must be matched by equally robust oversight mechanisms.
