Character.AI institutes new safety measures for AI chatbot conversations

(Image credit: Character AI)

Character.AI has rolled out new safety features and policies for building and interacting with the AI-powered virtual personalities it hosts. The new measures aim to make the platform safer for all users, but particularly for younger people. The update includes more control over how minors engage with the AI chatbots, more content moderation, and better detection of the AI discussing topics like self-harm.

Though not cited in the blog post about the update, Character.AI linked to the announcement in a post on X expressing condolences to the family of a 14-year-old who spent months interacting with one of Character.AI's chatbots before taking his own life. His family has now filed a suit against Character.AI for wrongful death, citing a lack of safeguards for the AI chatbots as a contributor to his suicide.

"We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:…" — October 23, 2024

AI chat guardrails

Character.AI's post laid out several new safety features for the platform. For instance, if the model detects keywords related to suicide or self-harm, it will show a pop-up directing the user to the National Suicide Prevention Lifeline and related resources. The AI will also be better at spotting and removing inappropriate content in a conversation, with particular sensitivity to when users are under 18.

Presumably, minors would already have restricted content in conversations, but Character.AI may have upped that sensitivity further. In cases where that might not be enough, entire chatbots have been removed.

"We behaviour proactive discovery and moderation of user-created Characters, including utilizing industry-standard and civilization blocklists that are regularly updated. We proactively, and successful consequence to personification reports, region Characters that break our Terms of Service," Character.AI explained successful its post. "Users whitethorn announcement that we’ve precocious removed a group of Characters that person been flagged arsenic violative, and these will beryllium added to our civilization blocklists moving forward."

Other new features are more about helping ground users. For example, you'll see a notification when you have spent an hour on the platform asking if you want to keep going, as a way of helping make sure you don't lose track of time. You'll also see more prominent disclaimers emphasizing that the AI is not a real person. There are already such disclaimers in the conversations, but Character.AI wants to make them impossible to ignore.

These safety features are the flip side of how Character.AI has made engaging with chatbots feel more like talking to a real person, including voices and the two-way voice conversations available with the Character Calls feature. Still, the company is likely keen to ensure its services are as safe as possible, and its moves could inform how others in the space shape their own AI chatbot characters.



Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He has since become an expert on the products of generative AI models, such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
