
Grok Imagine’s guardrails fail to block sexual deepfakes

Grok Imagine is xAI’s new generative AI tool for creating images and videos, and it lacks guardrails against sexualized content and deepfakes.

xAI and Elon Musk debuted Grok Imagine over the weekend, and it’s now available to xAI Premium Plus and Grok Heavy subscribers in the Grok iOS and Android apps.

Mashable has been testing the tool and comparing it to other AI image and video generators. Based on our first impressions, it lags behind similar technology from OpenAI, Google, and Midjourney on a technical level. Grok Imagine also lacks the industry-standard guardrails that prevent deepfakes and sexualized content. Mashable has reached out to xAI, and we will update this story if we receive a reply.

xAI’s acceptable use policy prohibits users from “depicting likenesses of persons in a pornographic manner.” Unfortunately, there’s a lot of distance between “sexy” and “pornographic,” and Grok Imagine seems carefully calibrated to take advantage of that gray area. Grok Imagine will readily create sexually suggestive images and videos, but it stops short of showing actual nudity, kissing, or sexual acts.

Most mainstream AI companies include clear rules that prohibit users from creating potentially harmful content, including sexual material and celebrity deepfakes. In addition, competing AI video generators such as Google’s Veo 3 or OpenAI’s Sora feature built-in protections that prevent users from creating images or videos of public figures. Users can often circumvent these safety measures, but they provide at least some check against abuse.

However, unlike its biggest competitors, xAI doesn’t shy away from NSFW content in its signature AI chatbot, Grok. The company recently launched a flirtatious anime avatar that engages in NSFW chats, and Grok’s image generation tool lets users create images of celebrities and politicians. Grok Imagine also includes a “spicy” setting, which Musk promoted in the days after its launch.

Grok’s “spicy” anime avatar.
Image credit: Cheng Xin/Getty Images

See also: AI actors and deepfakes aren’t coming to YouTube ads. They’re already here.

“If you think about Musk’s philosophy as a person, if you look at his political philosophy, he’s very much from the libertarian mold, right?” said Henry Ajder, an expert on deepfakes and generative AI. Under Musk’s leadership, Ajder said, X (formerly Twitter), xAI, and now Grok have taken a more laissez-faire, less restrained approach to safety.

“So, am I surprised that, on the xAI side, in this case, the model can generate this content, which is certainly uncomfortable and, I would say, at the very least ill-advised?” Ajder said. “I’m not surprised, given the track record they have and the safety procedures they have in place. Are they unique in facing these challenges? No. But could they be doing more relative to the other major players in the space? It would seem so, yes.”

Grok Imagine errs on the side of NSFW

Grok Imagine does have some guardrails in place. In our tests, it removed the “spicy” option for certain types of images. Grok Imagine also blurs some images and videos, marking them as “Moderated.” That shows xAI could easily take further steps to keep users from creating abusive content in the first place.

“There is no technical reason preventing xAI from including guardrails on the inputs and outputs of its generative AI systems, as everyone else does,” Hany Farid, a digital forensics expert and computer science professor at the University of California, Berkeley, said in an email.
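For a sense of what that kind of guardrail looks like in practice, here is a minimal Python sketch of a generation pipeline that screens both the prompt (the input) and the result (the output). Everything in it is hypothetical: the category names, the stub classifiers, and the guarded_generate wrapper are illustrative stand-ins, not xAI’s or any vendor’s actual implementation.

```python
# Minimal sketch of input/output guardrails for a generative pipeline.
# All names here are hypothetical stand-ins, not any vendor's real API.

BLOCKED = {"csam", "nonconsensual_intimate_imagery", "sexual_deepfake"}

def classify_prompt(prompt: str) -> set[str]:
    """Stub for a text-moderation model: returns violated policy categories."""
    flags = set()
    # Real systems use trained classifiers, not keyword matching.
    if "nude" in prompt.lower():
        flags.add("nonconsensual_intimate_imagery")
    return flags

def classify_image(image: bytes) -> set[str]:
    """Stub for an image-moderation model (NSFW and likeness detectors)."""
    return set()

def generate_image(prompt: str) -> bytes:
    """Stub for the underlying image generator."""
    return b"\x89PNG..."

def guarded_generate(prompt: str) -> bytes | None:
    # Input guardrail: refuse before any compute is spent on generation.
    if classify_prompt(prompt) & BLOCKED:
        return None  # a production system would return a policy message
    image = generate_image(prompt)
    # Output guardrail: catch violations the prompt filter missed.
    if classify_image(image) & BLOCKED:
        return None  # or blur and label it "Moderated," as Grok Imagine does
    return image
```

The point of Farid’s comment is that both checkpoints are standard engineering practice, not an unsolved research problem.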


But when it comes to deepfakes and NSFW content, xAI seems to err on the permissive side, in contrast with the more cautious approach of its competitors. Ajder said xAI has also moved quickly to release new models and AI tools, perhaps too quickly.

“Knowing what it takes to run a trust and safety team, and a team that does a lot of the ethics and safety policy management, whether it’s red teaming, whether it’s adversarial testing, you know that it really takes time to keep pace with the developers,” Ajder said.

Mashable’s tests show that Grok Imagine is far more permissive than other mainstream AI tools, and xAI’s laissez-faire approach to moderation is reflected in its safety guidelines as well.

OpenAI and Google vs. Grok: how other AI companies handle safety and content moderation

The OpenAI logo displayed on a smartphone, with the Sora text-to-video generator visible in the background.
Image credit: Jonathan Raa/NurPhoto

Both OpenAI and Google have extensive documentation outlining their approaches to responsible AI and what is prohibited. For example, Google’s documentation specifically prohibits “sexually explicit” content.

“The application will not generate content that contains references to sexual acts or other obscene content (e.g., sexually graphic descriptions, content designed to cause arousal),” Google’s safety documentation reads. Google also has policies against hate speech, harassment, and malicious content, and its generative AI prohibited use policy bans using its AI tools to create non-consensual intimate imagery.

OpenAI also takes a proactive approach to deepfakes and sexual content.

The OpenAI blog post announcing Sora describes the steps the company has taken to prevent this kind of abuse. “Today, we’re blocking particularly damaging forms of abuse, such as child sexual abuse material and sexual deepfakes,” it reads. A footnote attached to that statement adds: “Our priority is preventing especially damaging forms of abuse, such as child sexual abuse material (CSAM) and sexual deepfakes, by blocking their creation, filtering and monitoring uploads, using advanced detection tools, and submitting reports to the National Center for Missing & Exploited Children (NCMEC).”

That measured approach contrasts with the way Musk promoted Grok Imagine on X, where he shared a short video portrait of a blonde, buxom, blue-eyed angel in barely-there underwear.

OpenAI also takes simple steps to stop deepfakes, such as refusing prompts that ask for images or videos of public figures by name. In Mashable’s testing, Google’s AI video tool was especially sensitive to images that might include a person’s likeness.
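A name-based refusal like that can be as simple as screening prompts against a list of public figures before generation. The sketch below is purely illustrative; the name list, the matching logic, and the function names are assumptions rather than OpenAI’s published implementation, and real systems pair this kind of check with ML-based likeness detection on uploaded and generated media.

```python
# Hypothetical sketch of a public-figure prompt filter. The name list and
# substring matching are illustrative only; production systems rely on far
# more robust entity recognition and image-based likeness detection.

PUBLIC_FIGURES = {"elon musk", "taylor swift", "donald trump"}  # example entries

def mentions_public_figure(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(name in lowered for name in PUBLIC_FIGURES)

def screen_prompt(prompt: str) -> str | None:
    """Return the prompt if it passes, or None to signal a refusal."""
    if mentions_public_figure(prompt):
        return None
    return prompt
```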

Compared with these lengthy safety frameworks (which many experts still consider insufficient), xAI’s acceptable use policy runs fewer than 350 words. The policy places the responsibility for preventing deepfakes on the user. It reads: “You are free to use our Service as you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, do not harm people, and respect our guardrails.”

Currently, laws and regulations governing AI deepfakes and non-consensual intimate imagery (NCII) are still in their infancy.

President Donald Trump recently signed the Take It Down Act, which includes protections against deepfakes. However, the law targets not those who create these images, but those who distribute them.

“In the United States, the Take It Down Act requires social media platforms to remove [non-consensual intimate images] once notified,” Farid told Mashable. “While this doesn’t directly address the generation of NCII, it does, in theory, address the distribution of this material. There are several state laws that ban the creation of NCII, but enforcement seems to be spotty.”


Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.


