OpenAI is facing what a lawyer describes as a first-of-its-kind set of lawsuits from families in a Canadian town devastated by a February mass shooting, who say the company should have warned police about the suspect's chats with ChatGPT. The families of seven victims in Tumbler Ridge, British Columbia, filed suits Wednesday in federal court in Northern California; the victims include 12-year-old Maya Gebala, who remains hospitalized, and five children and one education assistant who were killed. The suits accuse OpenAI of negligence, violating product liability standards, and effectively helping enable the attack, which left eight dead and more than 25 injured, the Wall Street Journal reports.
Their attorney, Jay Edelson, says the group is inviting CEO Sam Altman and top executives to visit the town of 2,700 to see the damage the shooting has caused. Edelson says the school where the shooting largely occurred remains shut, with kids learning in trailers. "Sam doesn't seem to understand how many people saw the most traumatic things ever," he says. "This is the first time that a community collectively has said we have got to hold OpenAI responsible."
It was previously reported that OpenAI debated but ultimately chose not to alert law enforcement about Jesse Van Rootselaar's ChatGPT conversations. OpenAI says it banned the suspect's account months before the shooting and later found a second account used by the suspect. CNN reports that in addition to unspecified financial damages, the suits seek a court order requiring OpenAI to prevent users who have been blocked due to violent chats from creating new ChatGPT accounts and to notify law enforcement when its internal systems flag risky conversations. The AP reports Altman apologized to residents in a letter posted on Friday that read in part:
- "I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."
The AP reports that Chicago-based Edelson is at the helm of a number of high-profile cases against OpenAI, including from the family of a California teen who alleged ChatGPT instructed him on how to take his own life, and from the heirs of an 83-year-old Connecticut woman killed by her son after ChatGPT allegedly fueled his delusions.