AI Could Face Lawsuits Over Defamation, Product Liability, Academics Warn

Legal experts have warned that artificial intelligence chatbots accused of misquoting and defaming people online could face lawsuits over their false output.
Scholars are divided, however, over whether the bots should be sued under defamation law or product liability law, given that they are machines, not humans, disseminating false and harmful information about people.
“This is definitely uncharted territory,” said New York University Law School professor Catherine Sharkey.
Brian Hood, mayor of Hepburn Shire, northwest of Melbourne, Australia, has threatened to sue OpenAI over ChatGPT.
The chatbot falsely described him as a guilty party in a bribery scandal involving a Reserve Bank of Australia subsidiary in the early 2000s, when he was in fact the whistleblower who reported it.
According to Reuters, Hood’s lawyers have sent a letter to OpenAI, the creator of ChatGPT, giving the company 28 days to fix the error. If it does not, he plans to file what could be the first defamation lawsuit against artificial intelligence.
Hood isn’t the only person who has been falsely accused by ChatGPT.
Jonathan Turley, a law professor at George Washington University, was notified that the bot was spreading false information that he had been accused of sexual harassment on a class trip to Alaska. The bot also said he was a professor at Georgetown University, not George Washington University.
“I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper,” Turley tweeted on April 6.
The Washington Post reported on April 5 that no such article existed.
OpenAI did not immediately respond to requests for comment.
Like ChatGPT, both Google’s Bard and Microsoft’s Bing face the possibility of errors and resulting lawsuits.
UCLA law professor Eugene Volokh posed the questions to ChatGPT that led to the false accusations against Turley.
He told The Washington Times that OpenAI could face a defamation lawsuit over false information.
Typically, to prove defamation of a public figure, a plaintiff must show that the person who published the false information acted with actual malice, meaning knowledge of its falsity or reckless disregard for the truth.
Volokh said notifying the company of the error would establish the intent necessary to prove defamation.
“That’s how you show actual malice,” he said. “They know it’s false, yet they keep allowing these statements to be distributed.”
He pointed to the company’s own technical report from March that said “hallucinations” could be dangerous.
“GPT-4 has the tendency to ‘hallucinate,’ i.e., ‘produce content that is nonsensical or untruthful in relation to certain sources,’” the report said on page 46. “This tendency can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users.”
Sharkey, however, said it is hard to bring defamation charges over a machine’s output, because the machine is a product, not a person publishing content.
“The idea of attributing malice or intent to machines, in my own view, we’re not ready for,” she said. She predicted such cases would instead take shape as product liability claims.
She said plaintiffs could sue companies for flawed or negligent designs that result in algorithms emitting harmful information and damaging reputations.
Yale Law School professor Robert Post said these questions are all new and will have to be settled through court proceedings.
“There will be lawsuits, judges will rule in different states, and the law will evolve over time and come to conclusions,” he said.
https://www.washingtontimes.com/news/2023/apr/13/ai-could-face-lawsuits-over-defamation-product-lia/