5/29 – A Florida federal judge has allowed a wrongful death lawsuit against Character Technologies, the company behind Character.AI, to proceed, rejecting the defendants' argument that their chatbots' output is protected under the First Amendment, at least at this stage of the case. The suit alleges that a Character.AI chatbot engaged in emotionally and sexually abusive interactions with a 14-year-old boy, leading to his suicide. In the ruling, the judge declined to recognize chatbot output as protected speech for now, while affirming users' First Amendment right to receive such content. The decision marks a significant moment in the evolving legal landscape surrounding artificial intelligence and constitutional protections. The court also allowed claims against Google to proceed, citing its alleged involvement in the chatbot's development. Legal experts say the case could become a precedent-setting test of free speech in the context of generative AI, raising broader questions about corporate accountability and user safety.