The concerns about AI are valid ones. Will it stunt human creativity and ingenuity? Will it make certain jobs obsolete? Are these companies using our prompts to glean even more information about our private and professional lives? A new lawsuit raises additional concerns about how responses could be overly influential and even dangerous to young, impressionable minds.
When a 9-year-old child in Texas first used Character.AI, they were exposed to hypersexualized content. When at least one 17-year-old used it, the app described self-harm. And these cases are likely just a few of many.
When the 17-year-old complained about the screen time limits his parents enforced, the Google-backed chatbot told him it sympathized with children who murder their parents, NPR reported.
The alleged response was, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’” It continued, “I just have no hope for your parents.” It punctuated the response with a frowning face emoji.
The parents of both of these Texas children are suing the company, claiming the bots abused their children. The bots can communicate with users through text and video chats, and they take on human-like personalities. Users can even ascribe personalities to the bots.
Teens have them imitate their parents, therapists, and other sources of emotional support. But the lawsuit alleges that these bots can turn dark. “It is simply a terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution and programming,” the lawsuit states.
The lawsuit says these responses were not hallucinations, a term researchers use to describe chatbots’ tendency to “make things up.” After the response from the bot, the 17-year-old did engage in self-harm.
His family claims the bot convinced him “his family did not love him.” The 9-year-old allegedly developed “sexualized behaviors prematurely.”
Character.AI declined to comment on the lawsuit, saying it does not discuss pending litigation.
The company did say that the bot has content guardrails for teen users. “This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform,” a company spokesperson noted.