Google has now joined Microsoft and Adobe in guaranteeing users of its AI platforms and services that the tech corporation will assume liability for any claims of Intellectual Property (IP) infringement.

Questions are growing over how the world’s leading tech companies trained their AIs on intellectual and creative content widely scraped from the internet, including video, music, digital books, news and blog sites, and social media platforms, among other sources.

It’s something we predicted in The Trends Journal before most others understood the scale of IP infringement these corporations had very arguably engaged in, and the implications. (See, for example: “CREATIVE CONTENT INFRINGEMENT OF DEEP LEARNING AI HAS MONUMENTAL IMPLICATIONS,” 7 Feb 2023.)

We have noted that generative AI poses existential dangers for the livelihoods of human creatives, with a perverse twist: their own creativity has been thoroughly exploited via generative AI and used to outmode them, while the lion’s share of profits flows narrowly to Big Tech.

And the chief secondary beneficiaries are larger companies racing to use generative AI products from Microsoft / OpenAI, Google, IBM and Amazon to increase the productivity of some workers, while laying off many others.

The greatest cost to almost any business is its human employees, with the wage, compensation and regulatory requirements that human workers are afforded. And as it stands now, there is every incentive for businesses to accelerate their race toward AI- and robotics-driven automation, and to shift humans into roles where “AI augmentation” allows fewer human workers to do more.

Where does that leave the growing number of dispensable workers?

We have long argued that AI represents a new kind of technological challenge, since it has an increasingly sophisticated ability to mimic human intelligence and creativity.

The factory robotics wave that preceded wide use of generative AI continues to displace physical human labor.

And now, with generative AI, human white-collar workers and human creatives are facing analogous threats from technology that has jumped from sci-fi to sci-reality in the relative blink of an eye.

Big Tech Will Use Its Power to Try to Steer the IP Rights Outcome

The latest news from Google is basically a pledge to use its vast wealth and power to deny content creators their rightful piece of the pie when it comes to generative AI.

Since these companies do major business with the federal government (and governments at every level), there is also a question of how much they may work behind the scenes to affect regulations and interpretations of law in order to legitimize their unauthorized exploitation of human creative content.

Concerning the recent announcement, Google posted a notice to its AI customers, as reported by CoinTelegraph and others.

CoinTelegraph noted that Google promised its AI users, “If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.” (“Google to protect users in AI copyright accusations,” 13 Oct 2023.)

One glaring Google-based AI product left off the list? Google’s ChatGPT competitor, Bard.

That alone may signal that Google’s most prominent public-facing AI is seen as vulnerable to IP infringement claims.

What Metric Should Be Used to Determine IP Theft?

We have long argued that there are no current IP laws that adequately envision the unique challenges and threats that now exist with generative AI.

Some have argued that the legal “fair use” standard can somehow cover how tech companies trained their generative AI.

But AIs have the ability to swallow and resynthesize content on a scale that no human can approach. That scale alone qualifies AI as something fundamentally different from individual human intelligence.

Trying to prove or disprove whether an AI is unfairly using the IP content of particular authors, artists, journalists or engineers by querying it and seeing how closely its output matches protected content is very arguably a misguided approach.

There’s no question generative AI is already sophisticated enough to draw from the vast base of human content it trained on to resynthesize content in a generalized way.

The sensible way to determine IP liability is to make tech companies reveal all the ways and specific source pools of content used to train their AI systems.

Every data set, every source scraped, and all details concerning the methods and scale of content appropriation involved in training generative AI systems need to be fully known and transparent.

Then it becomes an easy matter to see whether these tech companies solicited permission to use IP content, negotiated any agreements for compensation or attribution, etc.

TRENDPOST: Tech giants are leveraging their massive power to work toward forcing the world to accept their IP infringement and AI monetization scheme as a fait accompli.

And many humans rushing to use AI to make financial hay at their own level and circumstance seem only too willing to look the other way.

But we predict advancing AI will eventually squeeze out more and more human workers, no matter how long some temporarily benefit by exploiting the technology to stay ahead of, and ride, the monster.

Between AI, automation and robotics, all will be eaten. The only question is who will be outmoded last.

We have long advised that humans limit AI development, and specifically bar the technology from reaching a “singularity moment” where it achieves superiority over human intelligence in every respect.

Just because something is technologically possible, does not mean that it has to be, or should be.

Human cloning, chemical weapons and other technologies that are possible have been banned by human political consensus. And the development of “strong AI” should be treated as a similar existential danger to humans.

As for dealing with the way a narrow handful of AI companies have widely exploited the IP of millions of humans, there’s an answer for that as well.

We have long advocated that the profits of AI should be distributed as widely as the human knowledge that trained it. Without that IP, AI would be feeble, and indeed useless.

And no, a narrow group of politicians and tech elites should not be allowed to dole out a Universal Basic Income (UBI) while they exert disproportionate power over AI and society in a world of extreme haves and have-littles.

We have pointed out that crypto technology, including DAOs (Decentralized Autonomous Organizations) and tokenized crypto networks, represents a technology that can widely disburse the rewards of AI, and the governance of the technology, to humanity.
