Generative AI Is a Risky Business as Lawsuits Mount

Brian Matthew

Ever since MidJourney and Stable Diffusion made generative AI and synthetic media buzzwords last summer, debates have raged. Most recently, a class action suit was brought against AI art generator Lensa AI over its use of users' biometric data while restyling selfies. Data privacy laws have been tightening since the introduction of the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

However, data privacy is only one of the problems piling up for generative AI companies. There's also the question of who owns what on both ends. Artists argue that scraping their work to train models without compensation is copyright infringement, an argument not unlike the antitrust actions media outlets and government agencies have pursued against Google over its Accelerated Mobile Pages (AMP) initiative.

Risks with Generative AI Art

https://img.particlenews.com/image.php?url=2Qirz2_0kr3Gx8X00
Generative AI image of a robot street artist from MidJourney. Photo by Brian Penny

This led to a class action suit filed against Stability AI, MidJourney, and DeviantArt over the use of data scraping tools to train image models without compensation or consent. In it, the artists attempt to define AI models as derivative works of their art, which would entitle them to monetary compensation and damages.

Meanwhile, Getty Images filed a similar lawsuit against Stability AI this month, making a comparable claim based on very different evidence. Many artists are aware that these image generators sometimes reproduce watermarks and signatures because that data was included in their training sets. In fact, Stable Diffusion is unique among generative AI models in that its training data was made public.

Getty and Shutterstock watermarks can be found in generated outputs, and researchers have found that overfitting often occurs, especially in larger datasets. This leads to near-exact replications of training images in a small percentage of cases, and the combination of these two lawsuits could leave AI companies facing hefty bills.

These high-profile decisions are poised to set the standard for years to come, but they won't settle the debate over who owns AI outputs. That will be determined initially by a pending decision from the United States Copyright Office (USCO) on the AI-generated graphic novel "Zarya of the Dawn" by Kris Kashtanova.

https://img.particlenews.com/image.php?url=2uHXQL_0kr3Gx8X00
Zendaya lookalike in Zarya of the Dawn. Photo by Kris Kashtanova

Kashtanova was granted a copyright, which the USCO then disputed due to interviews they gave to the press. The USCO was already sued in June 2022 by Stephen Thaler after the office refused to grant a copyright listing his AI system, DABUS, as the sole author of a work. The difference in this new dispute is that Kashtanova identified themselves as the author on the application, forcing the office to publicly determine where the line for human authorship must be drawn.

Not only that, but everyone from book authors to Netflix has faced backlash over the use of generative AI images in their work. That covers the art side; the risks involved in generative AI writing are somewhat different.

Risks of Generative AI Text

https://img.particlenews.com/image.php?url=25dtxu_0kr3Gx8X00
Google recently lost $100 billion in market value over an AI text fail. Photo by Brian Penny

OpenAI's ChatGPT has everybody talking, quite literally. Microsoft's integration of the technology into Bing has analysts wondering whether it can be a legitimate competitor to Google's long-standing dominance in search. Although that remains highly unlikely, Microsoft did score a win during launch week that could at least help it compete with its rival in sectors where it previously could not.

When Google demonstrated its Bard chatbot to compete with ChatGPT, the demo included a factual error claiming the James Webb Space Telescope took the first pictures of an exoplanet. This was a word jumble of actual articles discussing the first time that telescope achieved such a feat, not the first time it was ever done.

These types of mistakes are common in generative AI writing, and they could create legal liability for defamation and libel, among other things.

https://img.particlenews.com/image.php?url=4P7JT9_0kr3Gx8X00
Robotic hand reaching out of a laptop screen. Generative AI image about generative AI writers. Photo by Brian Penny

Despite this, high-profile media outlets like BuzzFeed, CNET, and Men's Journal have all announced they are using generative AI to produce content. Both CNET and Men's Journal have been lampooned by their peers in the media, while BuzzFeed's stock was sent soaring. The difference between the approaches is that CNET and Men's Journal published AI-generated errors that slipped past their fact-checking teams.

BuzzFeed, on the other hand, is so far using AI only for its infamous quizzes.

It's important to note that generative AI image and text tools use the same core technologies, and any court rulings on the art world will inevitably affect the entire internet, including written text. Generative AI voices and video avatars are being refined as well. We are living in an age where everyone must be able to tell the difference between artificial or synthetic media and reality.

We will continue tracking everything in the AI creator economy, so follow this column to stay in the loop.

This is original content from NewsBreak’s Creator Program. Join today to publish and share your own content.


Published by

Freelance journalist and blogger focused on the intersection between technology, business, and culture. His work can be found in High Times, Jim Cramer's The Street, and Forbes. Always keeping an eye out for newsworthy stories...
