OpenAI has Little Legal Recourse against DeepSeek, Tech Law Experts Say


- OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under copyright and contract law.
- OpenAI's terms of use may apply but are largely unenforceable, they say.
Today, OpenAI and the White House accused DeepSeek of something akin to theft.

In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting data trove to quickly and cheaply train a model that's now nearly as good.

The Trump administration's top AI czar said this training process, called "distilling," amounted to intellectual property theft. OpenAI, meanwhile, told Business Insider and other outlets that it's investigating whether "DeepSeek may have wrongly distilled our models."
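For readers unfamiliar with the term, distillation in this context simply means harvesting a stronger model's answers and reusing them as supervised training data for another model. The sketch below is a minimal, hypothetical illustration of that idea, not a description of what DeepSeek actually did: the `query_teacher` stub and the `distill_data.jsonl` output file are placeholders standing in for calls to a commercial chatbot and a typical fine-tuning dataset.

```python
# Minimal, hypothetical sketch of "distillation" as described above:
# collect a teacher model's answers to prompts, then save the pairs
# as supervised fine-tuning data for a student model.
import json


def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a commercial chatbot API (hypothetical)."""
    return f"Teacher's answer to: {prompt}"


def build_distillation_set(prompts, out_path="distill_data.jsonl"):
    """Write (prompt, teacher answer) pairs in a common JSONL fine-tuning format."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    build_distillation_set(["What is distillation?", "Summarize fair use."])
```

In practice, the student model would then be fine-tuned on a file like this; the legal question in the article is whether collecting such data from a rival's chatbot breaches that rival's terms of service.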

OpenAI is not saying whether the company plans to pursue legal action, instead promising what a spokesperson termed "aggressive, proactive countermeasures to protect our technology."

But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds OpenAI was itself sued on in an ongoing copyright case filed in 2023 by The New York Times and other news outlets?

BI posed this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.

OpenAI would have a hard time proving a copyright or intellectual property claim, these attorneys said.

"The question is whether ChatGPT outputs" - indicating the answers it creates in action to questions - "are copyrightable at all," Mason Kortz of Harvard Law School stated.

That's because it's unclear whether the answers ChatGPT spits out qualify as "creativity," he said.

"There's a teaching that states imaginative expression is copyrightable, however truths and ideas are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, said.

"There's a big question in intellectual home law today about whether the outputs of a generative AI can ever constitute creative expression or if they are necessarily unguarded truths," he added.

Could OpenAI roll those dice anyway and claim that its outputs are protected?

That's unlikely, the attorneys said.

OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.

If they do a 180 and tell DeepSeek that training is not a fair use, "that could come back to sort of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"

There may be a difference between the Times and DeepSeek cases, Kortz added.

"Maybe it's more transformative to turn news short articles into a design" - as the Times implicates OpenAI of doing - "than it is to turn outputs of a design into another design," as DeepSeek is said to have done, Kortz stated.

"But this still puts OpenAI in a quite predicament with regard to the line it's been toeing concerning fair use," he added.

A breach-of-contract lawsuit is more likely

A breach-of-contract lawsuit is much likelier than an IP-based suit, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.


The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.

"So maybe that's the lawsuit you may potentially bring - a contract-based claim, not an IP-based claim," Chander said.

"Not, 'You copied something from me,' but that you gained from my design to do something that you were not enabled to do under our contract."

There may be a hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for claims "to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation."

There's a bigger hurdle, though, experts said.

"You need to understand that the dazzling scholar Mark Lemley and a coauthor argue that AI regards to usage are most likely unenforceable," Chander said. He was describing a January 10 paper, "The Mirage of Artificial Intelligence Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Information Technology Policy.

To date, "no model creator has really tried to impose these terms with financial charges or injunctive relief," the paper states.

"This is likely for excellent factor: we think that the legal enforceability of these licenses is doubtful," it adds. That remains in part because model outputs "are largely not copyrightable" and since laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "offer limited option," it says.

"I think they are most likely unenforceable," Lemley told BI of OpenAI's regards to service, "due to the fact that DeepSeek didn't take anything copyrighted by OpenAI and due to the fact that courts typically will not enforce agreements not to contend in the absence of an IP right that would prevent that competitors."

Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always tricky, Kortz said.

Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.

Here, OpenAI would be at the mercy of another extremely complex area of law - the enforcement of foreign judgments and the balancing of private and corporate rights and national sovereignty - that stretches back to before the founding of the US.

"So this is, a long, complicated, laden procedure," Kortz included.

Could OpenAI have protected itself better against a distilling attack?

"They could have used technical procedures to obstruct repetitive access to their site," Lemley said. "But doing so would likewise hinder regular clients."

He added: "I don't think they could, or should, have a legitimate legal claim against the searching of uncopyrightable info from a public site."

Representatives for DeepSeek did not immediately respond to a request for comment.

"We understand that groups in the PRC are actively working to use techniques, including what's referred to as distillation, to attempt to reproduce advanced U.S. AI designs," Rhianna Donaldson, an OpenAI representative, informed BI in an emailed declaration.