AI Art: The Future Of Game Development, Or Risky Infringement?

When I was in high school, I wrote a paper on the possibility of music generated by Artificial Intelligence, or AI. At the time, AI art seemed like a distant dream, one that couldn’t possibly be real outside of science fiction. The thought at the time was that AI would always lack some human element of creativity, which would cause it to feel fake and unaccepted by general audiences.

Now the world is very different. While we’ll mostly be talking about AI image generation, these principles apply to the generation of any other type of asset that could become prominent in the future, such as music or 3D models.

AI image generators are pretty neat, but I don’t think I’ll be using them for stock pictures of deals being made.

Very recently, AI technologies have seen an explosion of growth, with programs like ChatGPT offering interaction and information in a conversational way, and models like Midjourney, DALL-E, and Stable Diffusion able to produce visual art from a text prompt, even if they do struggle with hands. For their part, many game developers have expressed concerns and reservations about AI in game development, including ethical concerns like compensation for the artists whose work makes up the training data sets, as well as concerns about how the output of these AI models is used. Various tech leaders have also called for a pause on AI research while we sort these concerns out. There are also lawsuits that will likely define the legal future of this field, such as the one between Getty Images and Stability AI over the use of Getty’s stock photos to train Stability’s image generation model.

The idea of being able to generate assets using AI is tempting. Who wouldn’t want to just click a button and spit out various permutations of the asset you described? I get it. I’m a game designer and a programmer myself, but art has always been a challenge for me. But while this may seem to be a game changer for game developers and other creators who lack particular artistic skills, there are many legal issues to be aware of if you plan on incorporating AI into your workflow. 

How AI Art Generation Works

Before talking about any of the legal issues, it’s important to understand at least a little bit about how a computer can generate images. Most image generators use a process called stable diffusion. However, other methods are similar for our purposes, as they all make use of a very large data set. We’ll try to keep things simple, but here’s a more involved explanation for the interested.

Let’s first talk about what stable diffusion is not. Stable diffusion does not learn art rules and techniques and then practice its craft for years to hone a skill the way a human would. It has no understanding of what makes art good or skilled.

It is also not copying and pasting elements from the internet like a sort of artistic collage. If I ask it for a picture of a car, it doesn’t go to Google and paste an image of a car.

Stable diffusion works by taking a large number of labeled images and converting them into raw data called latent images, which make up its data set. After thousands of pictures of cars, it starts to “understand” how a car looks, and can begin to distinguish a Batmobile as a type of car or pick out other attributes like color and shape. The model is then trained to unblur these images (that is, it stabilizes diffused images) and reconstruct them, but in the process it comes back with something that’s not quite the original.
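To make the “blur and unblur” idea concrete, here is a minimal, hypothetical sketch of the training objective behind diffusion models, written in PyTorch. This is not Stability AI’s actual code: the random “latents,” the tiny network, and the short training loop are stand-ins, and it leaves out the text-prompt conditioning and timestep input that real models use. The point is only that the model learns to predict, and therefore remove, the noise that was added to the images in its data set.

```python
# Toy sketch of the diffusion training objective (DDPM-style):
# add noise to an image, then train a network to predict that noise.
import torch
import torch.nn as nn

T = 1000                                        # number of noise levels
betas = torch.linspace(1e-4, 0.02, T)           # noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # how much of the image survives at each level

# Stand-in "latent images"; a real model compresses billions of labeled pictures into latents.
latents = torch.randn(64, 4, 8, 8)

# Toy denoiser; real models use a large U-Net conditioned on the text prompt and timestep.
denoiser = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 4, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(100):
    x0 = latents[torch.randint(0, len(latents), (16,))]  # batch of clean latents
    t = torch.randint(0, T, (16,))                       # random noise level for each sample
    noise = torch.randn_like(x0)
    a = alpha_bars[t].view(-1, 1, 1, 1)
    noisy = a.sqrt() * x0 + (1 - a).sqrt() * noise       # "diffuse" the image with noise
    loss = ((denoiser(noisy) - noise) ** 2).mean()       # learn to predict (undo) the noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```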

I asked DALL-E Mini for “a car on top of a wedge of cheese,” and it at least got most of the individual parts right. More advanced models can better understand not only the individual parts of a prompt, but the prompt in context.

In a way, “AI-recreated” is probably a more precise term than “AI-generated,” though the two are functionally identical for our purposes. Again, the model isn’t just copying something it found online; it is recreating it based on its latent image model. The most important thing to remember is that these generators rely on a large number of training images, which make up the latent images in the data set used for generation.
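In practice, generating an image from one of these models takes only a few lines of code. Here is a hedged usage sketch, assuming the open-source Hugging Face diffusers library; the checkpoint name is just an example, and whatever model you load comes with its own license terms for the images it produces, which is exactly where the legal questions in the next section begin.

```python
# Assumes the Hugging Face "diffusers" library, PyTorch, and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # example checkpoint; check its license before shipping anything
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The pipeline denoises random latents, guided by the text prompt, into a finished image.
image = pipe("a car on top of a wedge of cheese").images[0]
image.save("cheese_car.png")
```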

Legal Risks of Using AI Art Assets in a Video Game

If you’re considering using AI-generated assets in your game, there are potential legal risks. The biggest one is that the legal landscape is still shifting. While some image generators, such as DALL-E, have a license that allows users to commercialize the images generated by the program, that may not be the case in the future as the pending legal questions are decided. Things can and probably will change in this space, so it’s best not to have a game that’s reliant on a license that might not exist in the future.

Most legal issues stem from the data sets these image generators are trained on. The Getty Images lawsuit centers on the LAION-5B data set, 240 terabytes of labeled images, many of which are protected by copyrights held by various artists. Many artists have found their own work in the data set without their permission. While you can’t copyright a particular artist’s style, the actual images that were fed into the AI model can be protected.

A crowd of AI-drawn people with warped faces, complete with a garbled “geetyimages” watermark that the model drew into the frame. The Getty Images watermark is meant to stop you from using their photos without paying, but Stable Diffusion recreated it here because images from Getty Images were part of its training data, used without their permission. AI art needs this training data in order to function.

The fundamental question that this lawsuit will decide is whether or not the incorporation of these images into the AI model is infringement, and whether Stability AI can assert a fair use defense. It does not matter that the data set is open source. This is new technology, and there is no precedent here. Accordingly, if a decision comes down that affects Stability AI’s ability to use the images, you may find that the images you’ve generated no longer have a valid license.

The Risk of Secondary Infringement Liability

If a court decides that incorporating art into a data set constitutes infringement, then a developer using these AI-generated assets in their game may also face secondary liability under federal copyright law. Secondary or contributory infringement means you are contributing to someone else’s infringement. You are not the one who built the generator’s data set from copyright-protected images, but you contribute to that infringement by using (and, in this case, even profiting from) the generator built on it. Secondary liability occurs when someone
“(1) has knowledge of another’s infringement and (2) materially contributes to it.”1

Again, there’s no precedent here, but someone using generated assets in their game would likely be paying for access to the image generator or otherwise contributing to it, and likely understands enough about how it works to be aware of the potential infringement. While it is unlikely that you would immediately find yourself in legal trouble over using AI-generated assets, it’s usually best to avoid a situation where your game’s development relies on potential infringement.

Conclusion

All of this is to say that while image generators currently offer a commercial license to use whatever you generate in your game or other projects, they may not actually have the right to grant that license because of the data set on which they were trained. If you make use of these images and the courts decide that they are infringing, then you could also be liable as a secondary infringer.

In spite of all of that, I think there’s a far greater problem when it comes to using AI-generated assets in your game – you may not be able to own the copyright to those parts of your game at all, which is not a situation you want to be in. We’ll talk more about that in the next article, so stay tuned.

I don’t know where this technology is going, but I can say that it will change things. Remember that while AI is a tool, it shouldn’t be relied on to solve all of your art needs; artists still provide tons of value, so don’t underestimate them. There are many legal and ethical questions surrounding the use of AI-generated assets, some of which are still being decided. For now, it’s probably better to take a cautious approach to generated assets until things are better settled.

  1. Perfect 10, Inc. v. Amazon.com, Inc., 508 F.3d 1146 (9th Cir. 2007) ↩︎
