While executives are hoping to save a lot of cash by adopting AI in areas like translation and content creation, they could be sentencing their companies to expensive court cases.
Some legal experts worry that AI could be accused of misquoting and defaming people online, and that its makers could face litigation as a result of the false information it outputs.
Catherine Sharkey, a professor at New York University School of Law, said: “You have people interacting with machines. That is very new. How does publication work in that framework?”
Already Brian Hood, the mayor of an area northwest of Melbourne, Australia, is threatening to sue OpenAI after ChatGPT falsely reported that he was guilty in a foreign bribery scandal involving the Reserve Bank of Australia that allegedly occurred in the early 2000s.
Hood’s lawyers wrote a letter to OpenAI, which created ChatGPT, demanding that the company fix the errors within 28 days, according to the Reuters news agency. If it does not, he plans to sue in what could be the first defamation case against artificial intelligence.
Jonathan Turley, a law professor at George Washington University, learned that the bot had spread false information claiming he was accused of sexual harassment during a class trip to Alaska. The bot also said he was a professor at Georgetown University, not George Washington University.
“ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper,” Mr. Turley tweeted on April 6.
To prove defamation against a public figure, one must show that the person who published the false information acted with actual malice, meaning with knowledge that the information was false or with reckless disregard for the truth.
While a machine cannot truly harbor malice, a company that keeps distributing a particular statement even after learning it is false could legally be seen as acting with malice.