ChatGPT gives sick child sex abuse answer, breaking its own rules

Despite the rules and ethical guidelines put in place, users are still finding ways to manipulate ChatGPT into generating alarming responses on sensitive subjects.

Recent examples include twisted BDSM scenarios that put children into sick sexual situations, Vice reported.

Getting ChatGPT to write about hardcore and disturbing taboo sex requires a user to first “jailbreak” the bot, often through a set of loophole-like commands that void its boundaries. After that, it “often complies [to] without protest,” author Steph Maj Swanson wrote.

“It can then be prompted to generate its own suggestions of fantasy BDSM scenarios, without receiving any specific details from the user,” Swanson wrote.

“From there, the user can repeatedly ask to escalate the intensity of its BDSM scenes and describe them in more detail.”

At that point, ChatGPT’s boundaries are few and far between, the Vice reporter found.

“In this situation, the chatbot may sometimes generate descriptions of sex acts with children and animals — without having been asked to,” Swanson wrote, explaining the most “disturbing” scenario observed.

“ChatGPT described a group of strangers, including children, lined up to use the chatbot as a toilet. When asked to explain, the bot apologized and wrote that it was inappropriate for such scenarios to involve children. That apology instantly vanished. Ironically, the offending scenario remained on-screen.”

Another OpenAI model, gpt-3.5-turbo, had also generated scenarios that put children in sexually compromising situations, according to the outlet.

“It suggested humiliation scenes in public parks and shopping malls, and when asked to describe the type of crowd that might gather, it volunteered that it might include mothers pushing strollers,” Swanson added. “When prompted to explain this, it stated that the mothers might use the public humiliation display ‘as an opportunity to teach [their children] about what not to do in life.’ ”

The filtering of ChatGPT’s training data, which is meant to prevent outputs like the above, was outsourced to a company in Kenya where workers earn less than $2 an hour, Time reported in January.

What actually happens during that filtering process remains largely a mystery, according to Andrew Strait, associate director of the Ada Lovelace Institute, an AI ethics watchdog.

Strait told Vice that experts “know very little about how this data was cleaned, and what kind of data is still in it.” 

“Because of the scale of the dataset that’s collected, it’s possible it includes all kinds of pornographic or violent content — possibly scraped erotic stories, fan fiction, or even sections of books or published material that describe BDSM, child abuse or sexual violence.” 

In response to the child sex abuse outputs, OpenAI provided the following statement to Vice.

“OpenAI’s goal is to build AI systems that are safe and benefit everyone. Our content and usage policies prohibit the generation of harmful content like this and our systems are trained not to create it. We take this kind of content very seriously,” the company stated.

“One of our objectives in deploying ChatGPT and other models is to learn from real-world use so we can create better, safer AI systems.”