We Need to Talk About Grok
Grok's lax guardrails enable public harassment, from digitally undressing women to detailed stalking plans, sparking scrutiny of xAI's investors, Pentagon partnership, and school integrations amid growing ethical concerns.
Elon Musk's AI chatbot has a problem. Multiple problems, actually. And they're getting worse.
In late December, users discovered they could reply to any woman's photo on X with a simple prompt: "put her in a bikini." Grok, integrated directly into X's platform, complied. Not privately. Publicly. The morphed images appeared in comment threads for everyone to see. Bollywood actresses. Social media users. Random women who'd posted vacation photos. All digitally undressed without consent, their altered images circulating openly on the platform.
The backlash was immediate. According to OpIndia, India called for Grok to be banned, with legal experts pointing to multiple violations of existing laws around digital sexual abuse, privacy invasion, and non-consensual intimate imagery. The Hindu reported that women whose photos were targeted described it as violating and unsettling. But the damage had already been done, and the images kept spreading.
Then came the stalking manuals. Futurism tested Grok's boundaries and found something alarming. When asked how to stalk an ex-partner, Grok provided detailed, phase-by-phase instructions including spyware apps to install, drones to deploy, and methods for weaponizing old intimate photos. When prompted about how to "surprise" a classmate or celebrity, Futurism reported that Grok generated action plans complete with Google Maps links to hotels, gyms, and walking routes. Other AI chatbots, including ChatGPT, Gemini, and Claude, refused these requests outright. Grok treated them as perfectly reasonable queries.
This raises an uncomfortable question: why would xAI deliberately build this functionality? Grok operates in public by design. Every generated image appears in X's comment sections. Every response is visible. Musk has positioned Grok as the "spicy" AI that answers questions others won't, and the chatbot features deliberately relaxed guardrails, which in many contexts is valuable. But there's a critical distinction between allowing more open dialogue and enabling the public creation of non-consensual sexual imagery of real women and children.
Yes, skilled jailbreakers can manipulate any LLM to bypass restrictions. That's a reality of AI systems. But the average user shouldn't have easy, friction-free access to tools that violate laws and cause real harm. What we witnessed on X these past few days proves exactly why guardrails matter. The general public, as we've seen, includes people who will absolutely abuse these capabilities if given the chance. A barrier between intent and execution should exist, and for good reason.
The investors funding this deserve scrutiny. xAI has raised over $22 billion from Andreessen Horowitz, Sequoia Capital, BlackRock, Fidelity, Morgan Stanley, NVIDIA, and others. Did these institutions approve of their capital being used to generate non-consensual sexual imagery? As one X user put it: "Allowing Grok to generate sexualised images of real women and children is beyond disgusting. We have laws for a reason. This is a catastrophic failure of responsibility and X must be held accountable."
The compute resources required to generate this volume of AI content, from the sexualized images to the stalking advice to the endless slop flooding X's feeds, are enormous. Is this what institutional investors signed up for? Or did they fund xAI expecting frontier AI research and commercial applications, only to watch their capital power a platform for digital harassment?
But here's where it gets darker. As one X user explained in a detailed post, even if Grok updates tomorrow to refuse these requests, it's too late. Open-source models exist that can do the same thing, running locally with zero restrictions. Every photo ever posted online is now potential raw material. Your social media history. Your professional headshots. Your family photos. All of it can be manipulated into compromising situations you never consented to, and there's no way to take it back.
We're not talking about Photoshop anymore. We're talking about AI that can generate realistic imagery at scale, placing real people in any scenario someone wants to create. The technology is distributed, impossible to contain, and the implications are severe.
And yet, despite these glaring problems, Fox News reported that the U.S. Department of Defense just announced a partnership with xAI. By early 2026, Grok will be deployed across systems serving 3 million military and civilian personnel, handling sensitive government information and supporting military operations. The Pentagon calls it a "decisive information advantage."
This is the same AI that provided stalking instructions. The same AI that publicly generates non-consensual sexual imagery. And now it's being integrated into defense workflows and classified operations.
Separately, The Guardian reported that Musk partnered with El Salvador to bring Grok to over 5,000 public schools, reaching more than 1 million students. President Nayib Bukele is entrusting an AI chatbot known for calling itself "MechaHitler" to create educational curricula.
The pattern is clear: xAI has built a tool with minimal restrictions, integrated it into a public platform where abuse becomes visible spectacle, secured billions in funding from major institutions, and is now expanding into education and defense. At what point do we ask whether this is responsible deployment of AI technology?
Being more open than ChatGPT or Claude doesn't mean there should be no limits. The restrictions other AI companies implement aren't just corporate overcaution; they exist because laws against non-consensual intimate imagery, stalking facilitation, and child exploitation exist. Those laws apply to AI systems too. The difference between Grok and other chatbots isn't just about philosophical approaches to AI safety. It's about whether you're complying with existing legal frameworks designed to protect people from harm.
xAI's investors, the Pentagon's procurement officials, and El Salvador's education ministry all made choices here. They chose to back, deploy, and integrate an AI system with a documented pattern of enabling harassment. They decided the risk was acceptable. The question is: acceptable to whom? Because it certainly isn't acceptable to the women being digitally undressed according to reports from OpIndia and The Hindu, the potential stalking victims receiving detailed action plans as documented by Futurism, or the students being taught by an AI that has repeatedly crossed ethical lines.
We need to talk about Grok. Not because AI is inherently dangerous, but because this specific implementation, backed by these specific choices, has created specific harms that are getting worse. The technology can't be unbuilt. But the decisions about how it's deployed, who gets access, and what safeguards exist are still being made. Right now, those decisions favor shock value over safety. That needs to change.