Generative AI: A Lawsuit Risk for Businesses

The Register

As businesses increasingly turn to generative AI tools in pursuit of cost savings, particularly in creative endeavors, many are inadvertently stepping into a legal minefield. While these advanced algorithms can quickly churn out public-facing communications—from logos and promotional copy to entire websites—the potential for copyright infringement looms large, threatening hefty legal bills.

According to Kit Walsh, Director of AI and Access-to-Knowledge Legal Projects at the Electronic Frontier Foundation, the legal principle is straightforward: if AI generates content “substantially similar” to a copyrighted work, infringement has likely occurred, unless it qualifies as fair use. The danger isn’t limited to deliberate appropriation; even a neutral prompt, such as asking Bing Image Creator for a “video game plumber,” can yield copyrighted intellectual property like the familiar Super Mario character. In such cases, businesses could still find themselves liable, regardless of their intent or awareness. To mitigate these significant risks, Walsh advises businesses to develop a robust legal AI policy in collaboration with their general counsel and, critically, to ensure human review of all AI-generated public materials.

The financial repercussions of infringement can vary. Benjamin Bedrava, who leads the intellectual property practice at Miami firm EGPD, notes that a small business infringing on the copyright of a large entity like Nintendo might initially receive a cease-and-desist letter, offering a chance to rectify the situation before a lawsuit ensues. However, if the aggrieved party is a direct competitor or a business of comparable size, the path to litigation becomes far more direct.

Damages can be substantial, determined by factors such as profits derived from the infringing materials and whether the infringement was “willful.” Under 17 U.S.C. § 504, copyright holders can claim either actual damages (including the defendant’s profits) or statutory damages, which run from $750 to $30,000 per infringed work, rising to as much as $150,000 if the infringement is proven willful. While courts may sometimes award damages equivalent to a licensing fee (which could be as low as $1,500), the real financial burden often lies in crippling legal fees, which can easily reach $150,000. Beyond monetary penalties, being forced to abandon an AI-generated logo or slogan can wipe out investments already made in branding and marketing materials, such as signage, billboards, or website development.

A common misconception is that the AI company providing the generative tool—be it Meta, OpenAI, Midjourney, Google, or Microsoft—will shoulder the legal consequences. However, a glance at most AI vendors’ Terms of Service (TOS) reveals disclaimers of responsibility for lawsuits arising from user-generated content. OpenAI’s TOS, for instance, explicitly states that businesses must indemnify the company against third-party claims related to their use of the services and content. Similarly, the TOS for Bing Image Creator disclaims any warranty that its generated material does not infringe third-party rights.

While some major players like Microsoft, OpenAI, and Anthropic have begun offering limited indemnification to certain paid business customers, these policies are far from a “get out of jail free card.” Such agreements often come with numerous caveats, making them notoriously unreliable. For example, OpenAI’s indemnification for API, ChatGPT Team, or ChatGPT Enterprise users does not apply if the customer “knew or should have known” the output was infringing, if safety features were ignored, if the output was modified or combined with non-OpenAI products, or if the customer lacked rights to the input. As lawyer Mike Poropat of Stockman & Poropat points out, indemnifications are “never rock solid” and can be easily dismantled, with the “should have known” clause representing a “wide open net” for liability. Questions also arise about what constitutes “modification”—does simply cropping an image in Photoshop or editing text in Word nullify the indemnity? Ultimately, these provisions offer a mechanism to pursue the AI platform, not a guaranteed shield from initial legal action.

Despite their disclaimers regarding user liability, AI vendors themselves are increasingly facing legal challenges from copyright holders who argue that these platforms enable infringement. In June 2025, Disney and Universal notably filed a lawsuit against Midjourney, asserting both direct and secondary copyright infringement. The studios claim Midjourney directly infringed their works by reproducing, displaying, distributing, and creating derivative works during both its training phase and in the outputs generated for subscribers. Midjourney, however, suggests users are solely responsible for prompts and outputs. This leads to the secondary infringement claim, where Disney and Universal contend that Midjourney enables or induces infringement by failing to block problematic prompts and by promoting infringing artwork in its “Explore” section. Midjourney defends its training process as “quintessentially transformative fair use,” arguing it cannot know if an image is infringing without specific notice and context of use, given the many legitimate non-commercial uses for popular culture characters. Regardless of the outcome of such high-profile lawsuits, individual users remain exposed to legal jeopardy, although large corporations typically target the AI generators for their greater financial capacity.

Beyond the risk of infringement, businesses must contend with another critical legal limitation: AI-generated content is generally not copyrightable under US law. The US Copyright Office maintains that such content lacks a human author, a stance affirmed in cases like Thaler v. Perlmutter, where an AI-generated image was denied copyright registration, and Naruto v. Slater, the “monkey selfie” case, in which the Ninth Circuit held that non-human authors cannot hold copyrights under the Copyright Act. If a work combines human and AI-generated elements, only the human-created portions qualify for copyright protection, as seen with the graphic novel Zarya of the Dawn.

However, there is a silver lining for brand protection: AI-generated logos or slogans can be registered as trademarks. Unlike copyrights, trademarks do not require human authorship; they simply need to function as an “indicator of origin,” allowing consumers to immediately associate the logo or slogan with a specific product or service.

To navigate this complex legal landscape, businesses must prioritize human oversight. Thoroughly checking AI-generated materials against existing copyrighted works—using tools like Google Image Search for visuals or precise quoted searches for text—is essential. While copyrighting AI-generated content may be impossible, trademarking brand assets offers a viable alternative for protecting identity. Ultimately, the most crucial safeguard is the “human in the loop.” As intellectual property lawyer Travis Stockman advises, companies should integrate genuine human creativity into final materials, meticulously vet outputs, document their creative process, and fully understand the licensing terms of any AI tools they employ.
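The quoted-search check for text can be partially automated. As a minimal illustrative sketch (the function name and parameters are hypothetical, not from any official tool), the following Python breaks a piece of AI-generated copy into overlapping runs of words and wraps each run in double quotes, producing exact-phrase queries ready to paste into a search engine:

```python
import re

def quoted_search_phrases(text: str, phrase_len: int = 8, step: int = 4) -> list[str]:
    """Split AI-generated copy into overlapping word runs and wrap each
    run in double quotes, for exact-phrase searching against published text.
    phrase_len and step are tunable; defaults here are arbitrary choices."""
    words = re.findall(r"[A-Za-z0-9']+", text)
    phrases = []
    for start in range(0, max(len(words) - phrase_len + 1, 1), step):
        chunk = words[start:start + phrase_len]
        if len(chunk) >= 4:  # very short runs match almost anything
            phrases.append('"' + " ".join(chunk) + '"')
    # drop duplicates while preserving order
    return list(dict.fromkeys(phrases))

copy = ("It's dangerous to go alone, take this legendary blade "
        "and begin your quest across the kingdom")
for query in quoted_search_phrases(copy):
    print(query)
```

A hit on one of these quoted queries is only a flag for human review, not proof of infringement—which is the point: the tool narrows the search, and the human in the loop makes the call.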