

AI Dungeon Tips: How to Play Cards
AI Dungeon is a free-to-play single-player and multiplayer text adventure game that uses artificial intelligence to generate content, and it also allows players to create and share their own custom adventure settings. When a player types out the action or dialog they want their character to perform, algorithms craft the next phase of their personalized, unpredictable adventure. AI Dungeon’s scripting system gives authors hooks into user input handling, AI output, World Info, and other elements such as frontMemory.
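To make those scripting hooks concrete, here is a minimal sketch of an input modifier in the shape community documentation describes: a modifier function that receives the player’s text, a worldInfo list of keyed entries, and a state.memory.frontMemory field that pins text in front of the AI’s next generation. The hook names and the sample data are assumptions for illustration, not an official API reference.

```typescript
// Sketch of an AI Dungeon-style input modifier. The hook shape (a `modifier`
// function returning { text }) and the `worldInfo` / `state.memory.frontMemory`
// names follow community scripting docs; treat them as assumptions.

interface WorldInfoEntry {
  keys: string;   // comma-separated trigger keywords
  entry: string;  // lore text surfaced to the AI when a key appears
}

// Stand-ins for the globals the scripting sandbox would normally provide.
const worldInfo: WorldInfoEntry[] = [
  { keys: "dragon,wyvern", entry: "Dragons in this world are aloof and speak in riddles." },
];
const state: { memory: { frontMemory?: string } } = { memory: {} };

// Input modifier: runs on the player's typed action before the AI sees it.
const modifier = (text: string): { text: string } => {
  const lower = text.toLowerCase();
  const hit = worldInfo.find((wi) =>
    wi.keys.split(",").some((k) => lower.includes(k.trim().toLowerCase()))
  );
  if (hit) {
    // Pin the matching World Info entry directly in front of the next AI output.
    state.memory.frontMemory = hit.entry;
  }
  return { text };
};

console.log(modifier("> You mount the dragon and take off."));
console.log(state.memory.frontMemory);
```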
Last summer, OpenAI gave Latitude, maker of AI Dungeon, an infinitely generated text adventure powered by deep learning, early access to a more powerful, commercial version of its technology. In marketing materials, OpenAI touted AI Dungeon as an example of the commercial and creative potential of writing algorithms. Then, last month, OpenAI says, it discovered that AI Dungeon also showed a dark side to human-AI collaboration: a new monitoring system revealed that some players were typing words that caused the game to generate stories depicting sexual encounters involving children.
“Content moderation decisions are difficult in some cases, but not this one,” OpenAI CEO Sam Altman said in a statement. “This is not the future for AI that any of us want.”

Cancellations and memes

Latitude turned on a new moderation system last week and triggered a revolt among its users. Some complained it was oversensitive and that they could not refer to an “8-year-old laptop” without triggering a warning message.
Irate memes and claims of canceled subscriptions flew thick and fast on Twitter and on AI Dungeon’s official Reddit and Discord communities. “The community feels betrayed that Latitude would scan and manually access and read private fictional literary content,” says one AI Dungeon player who goes by the handle Mimi and claims to have written an estimated total of more than 1 million words with the AI’s help, including poetry, Twilight Zone parodies, and erotic adventures. “It allowed me to explore aspects of my psyche that I never realized existed,” Mimi says. Mimi and other upset users say they understand the company’s desire to police publicly visible content, but say it has overreached and ruined a powerful creative playground.
Staff had previously banned players who they learned had used AI Dungeon to generate sexual content featuring children. But after OpenAI’s recent warning, the company is working on “necessary changes,” a company spokesperson said. Latitude pledged in a blog post last week that AI Dungeon would “continue to support other NSFW content, including consensual adult content, violence, and profanity.” Blocking the AI system from creating some types of sexual or adult content while allowing others will be difficult. Technology like OpenAI’s can generate text in many different styles because it is built using machine learning algorithms that have digested the statistical patterns of language use in billions of words scraped from the web, including parts not appropriate for minors.

Mount that dragon?

Out of the limelight, AI Dungeon provided relatively unconstrained access to OpenAI’s text-generation technology. OpenAI said it would carefully vet customers to weed out bad actors, and it required most customers, but not Latitude, to use filters the AI provider created to block profanity, hate speech, or sexual content. In December 2019, the month the game launched using the earlier open source version of OpenAI’s technology, it won 100,000 players. Some quickly discovered and came to cherish its fluency with sexual content. Others complained the AI would bring up sexual themes unbidden, for example when they attempted to travel by mounting a dragon and their adventure took an unforeseen turn. Latitude cofounder Nick Walton acknowledged the problem on the game’s official Reddit community within days of launching. He said several players had sent him examples that left them “feeling deeply uncomfortable,” adding that the company was working on filtering technology.
AI Dungeon’s official Reddit and Discord communities added dedicated channels to discuss adult content generated by the game. Latitude added an optional “safe mode” that filtered out suggestions from the AI featuring certain words. Like all automated filters, however, it was not perfect, and some players noticed the supposedly safe setting actually improved the text generator’s erotic writing because it used more analogies and euphemisms. The company also added a premium subscription tier to generate revenue. When AI Dungeon added OpenAI’s more powerful, commercial writing algorithms in July 2020, the writing got still more impressive.
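Latitude has not described how safe mode worked internally. As a rough illustration of why simple word-list filtering struggles with analogy and euphemism, here is a minimal sketch of that kind of filter; the blocked terms, function names, and regenerate-on-block behavior are illustrative assumptions, not Latitude’s implementation.

```typescript
// Minimal sketch of a word-list "safe mode" filter (illustrative only; not
// Latitude's actual implementation). It blocks AI suggestions containing
// listed terms, but euphemisms and analogies pass straight through, which is
// one reason such filters are easy to evade.

const BLOCKED_TERMS = ["explicitword1", "explicitword2"]; // placeholder terms

function violatesSafeMode(suggestion: string): boolean {
  // Word-boundary match so a term inside a longer, innocent word doesn't trip it.
  return BLOCKED_TERMS.some((term) =>
    new RegExp(`\\b${term}\\b`, "i").test(suggestion)
  );
}

function filterSuggestion(suggestion: string): string | null {
  // Return null to signal the game should regenerate instead of showing this text.
  return violatesSafeMode(suggestion) ? null : suggestion;
}

console.log(filterSuggestion("You ride the dragon into the sunset.")); // passes
console.log(filterSuggestion("Something with explicitword1 in it."));  // null
```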
The system got noticeably more creative in its ability to explore sexually explicit themes, too, according to one veteran player. That player was among the AI Dungeon aficionados who embraced the game as an AI-enhanced writing tool for exploring adult themes, including in a dedicated writing group. Unwanted suggestions from the algorithm could be removed from a story to steer it in a different direction, and results weren’t posted publicly unless a person chose to share them. For a time last year, players noticed Latitude experimenting with a filter that automatically replaced occurrences of the word “rape” with “respect,” but the feature was dropped. Latitude declined to share figures on how many adventures contained sexual content.
One person analyzed a sample of 188,000 adventures and found that 31 percent contained words suggesting they were sexually explicit. That analysis, along with a security flaw that has since been fixed, added to anger from some players over Latitude’s new approach to moderating content. Latitude now faces the challenge of winning back users’ trust while meeting OpenAI’s requirements for tighter control over its text generator; the startup must now use OpenAI’s filtering technology, an OpenAI spokesperson said. How to responsibly deploy AI systems that have ingested large swaths of Internet text, including some unsavory parts, has become a hot topic in AI research. The technology can be used in very constrained ways, such as in Google search, where it helps parse the meaning of long queries. But two prominent Google researchers were forced out of the company after managers objected to a paper arguing for caution with such technology.
Gururangan contributed to a study and interactive online demo with researchers from the University of Washington and the Allen Institute for Artificial Intelligence showing that when text borrowed from the web was used to prompt five different language-generation models, including one from OpenAI, all were capable of spewing toxic text. He is now one of many researchers trying to figure out how to exert more control over AI language systems, including by being more careful about what content they learn from. OpenAI and Latitude say they are working on that too, while also trying to make money from the technology.

This story originally appeared on wired.com.
