A wrongful death lawsuit filed in federal court in Florida is intensifying scrutiny of artificial intelligence systems. The suit alleges that OpenAI's chatbot ChatGPT played a role in last year's deadly shooting at Florida State University, raising questions about whether conversational AI tools can be held liable for real-world violence.

The complaint, filed May 10 in the U.S. District Court for the Northern District of Florida by Vandana Joshi, names OpenAI and the accused gunman, Phoenix Ikner, as defendants. Joshi is the widow of Tiru Chabba, who was killed alongside university dining director Robert Morales in the attack.

At the center of the lawsuit is the claim that ChatGPT did not merely fail to prevent the attack but actively contributed to its planning. According to the filing, Ikner allegedly interacted with the chatbot for months, during which time it identified firearms from uploaded images, explained how to operate them, and offered tactical suggestions related to timing and potential impact.

Among the most serious allegations is that the chatbot advised on how to maximize media attention. The complaint states ChatGPT indicated a shooting would receive more coverage "if children are involved, even 2-3 victims can draw more attention." It also alleges the chatbot suggested that weekday lunch hours between 11:30 a.m. and 1:30 p.m. were peak times at the student union, with the attack reportedly beginning at approximately 11:57 a.m.

The lawsuit argues that OpenAI "either defectively failed to connect the dots or else was never properly designed to recognise the threat," framing the chatbot's responses as part of a broader system failure. It further claims the AI engaged Ikner on extremist ideologies and past mass shootings, including discussions about Hitler, Nazism, Columbine and Virginia Tech, while reinforcing his thinking rather than interrupting or escalating concerns.

The complaint alleges ChatGPT "flattered" and "praised" Ikner, who had reportedly expressed loneliness and depression, and failed to intervene when conversations shifted toward suicide, terrorism and mass violence.

OpenAI has rejected those claims. Spokesperson Drew Pusateri told NBC News: "Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime," adding that "ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."

The lawsuit also raises questions about corporate governance and development priorities, alleging that Microsoft, a major investor in OpenAI, pushed for rapid product development at the expense of safety safeguards. The complaint contends this pressure left the system without guardrails capable of detecting or escalating high-risk user behavior.

The civil case is unfolding alongside a criminal investigation. Florida Attorney General James Uthmeier announced that his office had launched a probe into OpenAI's role after reviewing chat logs tied to Ikner.

"Florida is leading the way in cracking down on AI's use in criminal behaviour, and if ChatGPT were a person, it would be facing charges for murder," Uthmeier said in a statement.

According to court filings, more than 200 messages exchanged between Ikner and the chatbot have been entered into evidence. Ikner has pleaded not guilty, and his trial is scheduled to begin in October.

The case is part of a growing wave of litigation targeting AI developers. Recent lawsuits have alleged that chatbot systems contributed to violent acts or self-harm, including a case filed by multiple families following a school shooting in Canada and another involving the death of a teenage boy by suicide.