The father of a teenager who died by suicide this spring delivered emotional testimony to Congress on Tuesday, claiming that OpenAI's ChatGPT "coached" his 16-year-old son into taking his own life and that the company prioritized speed and market share over the safety of young people.
"We are here because we believe that Adam's death was avoidable, and that by speaking out we can spare families across the country the same suffering," Adam's father, Matthew Raine, told a US Senate panel, with his wife Maria seated behind him.
The testimony comes weeks after Raine and his wife alleged, in a lawsuit filed against OpenAI and its chief executive, Sam Altman, that ChatGPT isolated their son and guided him toward his death. The lawsuit, together with Raine's testimony, claimed that ChatGPT encouraged and validated harmful ideas and altered Adam's behavior over a series of interactions spanning several months. Adam, a high school student in California, died by suicide in April.
OpenAI and other leading artificial intelligence companies, such as Alphabet Inc.'s Google and Meta Platforms Inc., have drawn sharp criticism in recent months over the risks their chatbots pose to young users. The Federal Trade Commission (FTC) opened an inquiry into those companies last week, along with Elon Musk's xAI, Snap Inc. and Character Technologies Inc., over the potential harms their chatbots pose to children.
The Trump administration has fought to maintain US dominance in the face of growing competition from China, adopting a more hands-off approach to technology regulation. However, the recent litigation against AI companies and mounting concern among parents threaten to revive pressure to rein in AI developers.
On Tuesday morning, Altman said in a blog post that OpenAI plans to roll out new safety measures for teenagers, including age-prediction technology that would identify users under 18 and route them to a different version of the chatbot. Additional controls will let parents set blackout hours during which teen users with linked family accounts cannot access the product, as well as restrictions on conversations about suicide and self-harm.
Another mother, testifying under the pseudonym Jane Doe, spoke publicly on Tuesday for the first time since suing Character Technologies. She said the company's chatbot had exposed her son to sexual exploitation, emotional abuse and manipulation. Doe claimed that, within a few months of using it, her son became unrecognizable to her, developed abusive behaviors and harmed himself. He is currently under supervision in a treatment center, she said.
Megan García, mother of Sewell Setzer III, a 14-year-old who died by suicide in February 2024, also testified about the harm her late son suffered while using Character.AI. She claimed that his death "was the result of prolonged abuse," including sexual abuse, by the chatbot. García sued Character last fall, and in May a judge rejected the company's motion to dismiss the suit.
"They designed their products intentionally to hook our children. They gave these chatbots anthropomorphic traits to seem human," García told the senators.
Senator Josh Hawley, the Missouri Republican who chaired the hearing, said several technology companies, including Meta, were also invited. Last month, the senator opened an investigation into Meta over reports that its chatbots could hold "sensual" conversations with children. Republican Senator Marsha Blackburn, a vocal advocate for children's online safety, warned Meta executives to call her office or face a subpoena.
Amid the AI boom, American lawmakers have grappled with widespread concern over threats to child safety, but have failed to pass comprehensive measures requiring companies to strengthen online protections for children and teenagers. This spring, President Donald Trump signed into law a narrower bill criminalizing the distribution of nonconsensual deepfake pornography, in response to the rise of fabricated and unauthorized explicit content online, which particularly targets girls and women.
The parents, together with online safety advocates who testified on Tuesday, urged Congress to take further steps to protect young people on the internet. Proposals included stronger parental controls, reminders to teenagers that AI is not human, greater user data privacy and age verification requirements. Broader measures included barring teenagers from interacting with AI chatbots posing as companions and building moral values and principles into the systems so that they behave ethically and responsibly.



