In 2015, Elon Musk and Sam Altman cofounded OpenAI with a seemingly ethical ethos: to develop AI technology that benefits humanity, rather than systems controlled by deep-pocketed corporations.
Fast forward a decade, one that included a spectacular falling out between Musk and Altman, and things look very different. Amid legal battles with his friend and former business partner, Musk’s latest company, xAI, has launched its own powerful competitor, Grok AI.
Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election.
At the same time, its data protection practices are under scrutiny. In July, Musk came under fire from European regulators after it emerged that users of the X platform were automatically opted in to their posts being used to train Grok.
Image generation capabilities in its Grok-2 large language model are also causing concern. Soon after the launch in August, users demonstrated how easy it was to create outrageous and incendiary depictions of politicians including Kamala Harris and Donald Trump.
So what are the main issues with Grok AI, and how can you protect your X data from being used to train it?
Deep Integration
Musk is deeply integrating Grok into X, using it for customized news feeds and post composition. Right now, it’s in beta and only available to Premium+ subscribers.
Among the benefits, access to real-time data from X allows Grok to chat about current events as they’re unfolding, says Camden Woollven, group head of AI at GRC International Group, a consultancy offering data protection and privacy services.
To stand out from its competitors, Grok is intended to be “transparent and anti-woke,” says Nathan Marlor, head of data and AI at Version 1, a firm that helps companies adopt technology including AI.
For transparency, the Grok team made the underlying algorithm open source earlier this year. However, in its pursuit of an “anti-woke” stance, Grok has been built with “far fewer guardrails” and “less consideration for bias” than its counterparts including OpenAI and Anthropic, Marlor says. “This approach arguably makes it a more accurate reflection of its underlying training data—the internet—but it also has a tendency to perpetuate biased content.”
WIRED approached X and xAI on several occasions for comment, but the firm has not responded.
Because Grok is so open and relatively uncontrolled, the AI assistant has been caught spreading false US election information. Election officials from Minnesota, New Mexico, Michigan, Washington and Pennsylvania sent a complaint letter to Musk, after Grok provided false information about the ballot deadlines in their states.
Grok was quick to respond to this issue. The AI chatbot will now say, “for accurate and up-to-date information about the 2024 US Elections, please visit Vote.gov,” when asked election-related questions, according to The Verge.
But X also makes it clear that the onus is on the user to judge the AI’s accuracy. “This is an early version of Grok,” xAI says on its help page. The chatbot may therefore “confidently provide factually incorrect information, missummarize, or miss some context,” xAI warns.
“We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”
Grok Data Collection
The vast amount of data Grok collects is another area of concern—especially since you are automatically opted in to sharing your X data with Grok, whether you use the AI assistant or not.
xAI’s Grok Help Center page describes how the company “may utilize your X posts as well as your user interactions, inputs and results with Grok for training and fine-tuning purposes.”
Grok’s training strategy carries “significant privacy implications,” says Marijus Briedis, chief technology officer at NordVPN. Beyond the AI tool’s “ability to access and analyze potentially private or sensitive information,” Briedis adds, there are additional concerns “given the AI’s capability to generate images and content with minimal moderation.”
Grok-1 was trained on “publicly available data up to Q3 2023” and was not “pre-trained on X data (including public X posts),” according to the company. Grok-2, by contrast, has been explicitly trained on all “posts, interactions, inputs, and results” of X users, with everyone automatically opted in, says Angus Allan, senior product manager at CreateFuture, a digital consultancy specializing in AI deployment.
The EU’s General Data Protection Regulation (GDPR) is explicit about obtaining consent to use personal data. In this case, xAI may have “ignored this for Grok,” says Allan.
This led regulators in the EU to pressure X into suspending training on EU users’ data within days of Grok-2’s launch last month.
Failure to abide by user privacy laws could lead to regulatory scrutiny in other countries. While the US doesn’t have a similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences, Allan points out.
Opting Out
One way to prevent your posts from being used to train Grok is to make your account private. You can also opt out of future model training through X’s privacy settings.
To do so, select Privacy & Safety > Data sharing and Personalization > Grok. Under Data Sharing, uncheck the option that reads, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”
Even if you no longer use X, it’s still worth logging in and opting out. X can use all of your past posts—including images—to train future models unless you explicitly tell it not to, Allan warns.
It’s possible to delete all of your conversation history at once, xAI says. Deleted conversations are removed from its systems within 30 days, unless the firm has to keep them for security or legal reasons.
No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service, Briedis says. “Engaging with these settings allows you to better control how your information is handled and potentially used by technologies like Grok.”