FALSE EQUIVALENCE: CHATGPT DOESN’T LEARN CONTENT LIKE HUMANS, AND CRIES OUT FOR NEW IP LAWS

In trying to assess the enormous negative consequences and IP theft that generative AI systems like ChatGPT represent, some are making the argument that such systems should be held to existing standards that apply to humans.

But that argument fails to account for the massive difference in scale between what these AI systems can do and what any human can do. That difference alone makes applying existing IP legal standards to this technology illogical, to put it mildly.

One example of the argument that ChatGPT should be held to a human-like standard for utilizing content it “learned” was voiced recently in a Forbes opinion piece on the potential IP issues of generative AI technology.

The author of the 26 February article, titled “Legal Doomsday For Generative AI ChatGPT If Caught Plagiarizing Or Infringing, Warns AI Ethics And AI Law,” lays out the case being made against holding tech companies to account for massive IP theft:

“The logic is as follows. Humans go out to the Internet and learn stuff from the Internet, doing so routinely and without any fuss per se. A person that reads blogs about plumbing and then binge-watches freely available plumbing-fixing videos might the next day go out and get work as a plumber. Do they need to give a portion of their plumbing-related remittance to the blogger that wrote about how to plumb a sink? Do they need to give a fee over to the vlogger that made the video showcasing the steps to fix a leaky bathtub?

“Almost certainly not.”

Later, the author states a bottom line: “Generative AI is rife with potential AI Ethical and AI Law legal conundrums when it comes to plagiarism and copyright infringement underpinning the prevailing data training practices.”

But even that acknowledgement doesn’t begin to get at the enormous difference in scale at which systems like these can swallow and re-synthesize information, compared to humans.

Information concerning the training of GPT-3, the model behind the original version of ChatGPT, shows that at least 300 billion words worth of data, some 570 GB of internet-scraped text, served as the “training” information for the system.

And it should go without saying that the latest iteration, GPT-4, consumed even more.

As AI expert and advocate Ben Goertzel put it, these generative AI systems can effectively swallow huge portions of human knowledge and human created intellectual content.

If this scale of appropriation doesn’t cry out for a new legal framework for who gets to benefit, or even whether these systems should be legal at all, then nothing ever will.

Generative AI systems are already disincentivizing content creatives and threatening to render them obsolete. And it’s happening off the backs of their collective contributions, websites and more.

The Forbes article throws out the beyond lame contention that humans should perhaps put up with the comprehensive theft, for the sake of advancing humanity:

“If your website and the websites of others are being scanned for the betterment of AI, and though you aren’t getting a single penny for it, might you have solemn solace in the ardent belief that you are contributing to the future of humanity? It seems a small price to pay.” 

The author does follow-up with a suggestion that tech companies should perhaps share their new wealth spigot.

Ya think? These companies are in the midst of cornering and narrowly profiting off the intellectual property and contributions of average human creatives on an unprecedented scale, threatening to obsolete them en masse.

There’s nothing human about it, except for the greed of the relative few behind this technology, who are already angling for legal carve-outs that will allow them to maximize their own gain, while leaving the rest of us to sign up and try to scrape a few peanuts from the new paradigm.

Of course, human lawyers will make sure their profession is the last to go.

Attorneys quickly sued a company that created a “Robot Lawyer” app to provide advice, via an earpiece and listening component, to people representing themselves in traffic court proceedings. (See “TOP TREND 2023, AI: WE OWN YOU—CAN A ROBOT LAWYER GET YOU OFF THE HOOK IN TRAFFIC COURT?” 17 Jan 2023.)

Lawyers argue the app constitutes practicing law without a license, according to The Daily Mail.

For more, see “TOP TREND 2023, AI WE OWN YOU: HUMAN ARTISTRY CAMPAIGN OUT TO PRESERVE RIGHTS OF HUMAN CREATIVES” in this issue.
