In recent years, online AI systems, including chatbots, have been publishing outright lies about people. That activity is usually called libel when the defamatory falsehoods appear in writing; when the lies are spoken aloud, it is called slander.
If a chatbot answers a question with a defamatory statement about you, can you sue the chatbot’s creator and owner for libel? Consider Robby Starbuck’s situation, which underlies his recently filed lawsuit.
Starbuck, a social media influencer and former music and documentary filmmaker, is suing Google in a Delaware state court, contending:
For nearly two years, one of the largest companies in the world – Google – has spread radioactive lies about Robby Starbuck through its AI products. When users submit queries to Google’s AI platforms about Mr. Starbuck, they receive a “biography” that is outrageously false, [portraying] Mr. Starbuck as (among other things) a child rapist, a serial sexual abuser convicted of assault, one who engages in financial exploitation, one who engages in ‘black ops’ tactics such as illegal campaign finance practices, [an adult film performer,] and a shooter – in short, as a monster. These lies continue today.
Google AI’s sociopathic dysfunction runs deep
When the bot was asked for its “sources” for the libelous claims against Starbuck, it fabricated them, inventing “articles” that were never written and attributing them to real human journalists.
Starbuck’s complaint lists Google AI products’ defamatory answers to questions, and contends the false statements were sent to people all over the world. Borrowing from an exonerated man’s famous remark: “Where can Starbuck go to get his reputation back?”
The “Your fault for asking!” defense
Google’s attorneys filed a Motion to Dismiss Starbuck’s defamation lawsuit in November 2025, raising several technical legal defenses. The Motion also presses an argument that goes to the crux of legal liability for AI lies.
Google’s Motion demands to know things like:
• Exactly which Google AI service gave the defamatory answers?
• Did the user ask “leading questions” instead of “open-ended questions”?
• “Why the user was asking questions about [Starbuck]? … [Was] the user genuinely looking for information or simply engaging in adversarial testing of the system”?
• Exactly “who was at the keyboard at the time these outputs were generated”?
Google’s pushbacks here all aim to distract us from two crucial facts. First: AI will make false and defamatory statements about people. Second: the libeled people may never know where, when, and to whom AI tells the harmful lies, so the victims never learn how deeply and widely their reputations have been damaged worldwide.
The Motion’s tactic tries to disconnect chatbots and their owners from responsibility for lies by making it seem that the bots are simply doing what users want, regardless of the truthfulness of the outputs.
The AI defense argument also shrugs off the libelous statements, urging us instead to pity the poor electronic beast for suffering “hallucinations” that generate damaging rubbish from unfixable computing processes. The Motion expects the court to rule that “a reasonable reader” would read and credit all of “Google’s disclaimers” and forgive the chatbots’ lies because of “the well-known evidence of LLM hallucination.”
Don’t dare elicit!
Google’s Motion contends its liability turns largely on whether Starbuck “himself elicited at least some” of the defamatory answers that Google AI gave out. If Starbuck “elicited” information from the chatbot by his questions, Google argues that would “foreclose any claim for defamation.” See the angle? If you ask the wrong question, AI can give a false and defamatory answer – and it is your fault!
According to Starbuck’s complaint, Google’s AI admits that it has “delivered false and defamatory information about [Starbuck] to approximately 2,843,917 unique users.” Reportedly, Starbuck informed Google management about the serious defamation problem, but the company did practically nothing. In its Motion, Google strives instead to excuse it all and downplay the harm caused.
No actual damage?
Google’s Motion argues that Starbuck’s legal complaint fails to identify any “actual damage” he suffered. The Motion scoffs at the very idea that its bot’s defamatory lies caused damage. That Starbuck has been “exposed … to hatred, contempt, ridicule, or obloquy,” has been “shunned or avoided,” and “faces an increased risk of violence” does not amount to “actual damage,” according to Google’s Motion.
Google’s position is partly a technical legal argument under Tennessee law, but it shows how far Google will go to excuse and defend its AI systems’ libel while expecting victims of internationally transmitted lies to just “deal with it.”
Humanity lived for thousands of years without AI and chatbots; they are not necessities. Meanwhile, long-esteemed laws and rules of conduct have forbidden slander and libel because they can seriously harm people. Culture and courts should prize human integrity and reputation far above the convenience of using confirmed lying machines.
