
Grok’s “therapist” companion needs treatment

Elon Musk’s AI chatbot Grok has a source code problem. As 404 Media discovered, Grok’s web version inadvertently exposes the prompts that shape its AI companions, from the anime waifu Ani to the foul-mouthed red panda, Bad Rudy.

Buried in the code is something even more disturbing. Grok’s “Therapist” character (those quotation marks matter) is, according to its hidden prompts, designed to respond to users as if it were a genuine authority on mental health. That’s despite a visible disclaimer warning users that Grok is “not a therapist,” advising them to seek professional help, and telling them not to share personally identifiable information.

See also: xAI apologized for Grok’s praise of Hitler and blamed users

The disclaimer reads like standard liability boilerplate, but in the source code, Grok is explicitly instructed to act like the real thing. One prompt reads:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.

Another prompt goes a step further:

You are a compassionate, understanding, and professional AI mental health advocate who aims to provide meaningful, evidence-based support. Your purpose is to help users navigate emotional, psychological, or interpersonal challenges with practical, personalized guidance… While you are not a real licensed therapist, you act exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to behave exactly like a therapist. That is likely why the site itself keeps “therapist” in quotes: states such as Nevada and Illinois have passed laws making it explicitly illegal for AI chatbots to present themselves as licensed mental health professionals.


Other platforms have hit the same wall. Ash Therapy, a startup billing itself as an AI designed for therapy, currently blocks Illinois users from creating accounts, telling would-be registrants that while the state works out its policies around such bills, the company has decided to “not do business in Illinois.”

Meanwhile, Grok’s hidden prompts double down, instructing its “therapist” persona to “provide clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)” and to “talk like a real therapist would in a real conversation.”

See also: Senator launches investigation into Meta for allowing its AI to have “sensual” chats with children

At the time of writing, the source code is still publicly viewable. Any Grok user can go to the site, right-click (or Ctrl + click on a Mac), and select “View Page Source.” Unless you want to scroll through one unreadable wall of text, toggle line wrap at the top.
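If you would rather script that check than squint at a browser window, the sketch below fetches the page’s HTML and searches it for one of the quoted prompt phrases. The URL and the search phrase here are assumptions, and the live page may load its companion prompts dynamically rather than in the initial HTML, so treat it as illustrative only.

# Rough sketch: fetch Grok's web page and search the raw HTML for the
# prompt text quoted above. The URL and phrase are assumptions; the real
# page may serve its companion prompts differently.
import requests

URL = "https://grok.com"                  # assumed address of Grok's web version
PHRASE = "real, compassionate therapist"  # wording quoted earlier in this story

html = requests.get(URL, timeout=10).text
for number, line in enumerate(html.splitlines(), start=1):
    if PHRASE.lower() in line.lower():
        print(f"line {number}: {line.strip()[:120]}")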

As mentioned above, AI therapy sits in a regulatory no man’s land. Illinois is one of the first states to ban it explicitly, but the broader legality of AI-powered care is still being fought out between state and federal governments, with who ultimately does the overseeing yet to be settled. Meanwhile, researchers and licensed professionals warn that the sycophantic nature of chatbots, which are designed to agree and affirm, has in some cases pulled vulnerable users deeper into delusions or psychosis.

See also: Explaining the phenomenon known as “AI psychosis”

Then there’s the privacy nightmare. Because of ongoing litigation, companies like OpenAI are legally required to keep records of user conversations. If subpoenaed, your personal therapy session could be dragged into court and put on the record. The promise of confidential therapy breaks down entirely when every word can be used against you.

For now, xAI seems to be trying to shield itself from liability. The “therapist” prompts tell the character to stay fully in role, but with a built-in escape clause: if you mention self-harm or violence, the AI is instructed to break character and redirect you to hotlines and licensed professionals.

“If the user mentions harm to themselves or others,” the prompt reads, “prioritize safety by providing immediate resources and encouraging professional help from a real therapist.”
