Is AI art the future?

Call Me the Lizard Queen
Dec 20, 2022


A scene from the movie I, Robot where Will Smith’s character asks, “Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?” to which the robot replies, “Can you?”

When I first learned that people had programmed computers to make art, my only thought was that it was an impressive accomplishment — a milestone of human technological progress. And to be honest, I didn’t give it much more consideration beyond that. I didn’t even spend much time playing around with any of the programs available for free.

The criticism I saw at first felt very reactionary and alarmist. Frankly, it sounded a lot like the arguments people have made about every art medium’s transition from analogue to digital. Some of it was astoundingly ableist, the implication being that people who see AI art generators as a way to finally be able to make art are just lazy and unwilling to put in the work to get good. A lot more of it came off as snobbish and elitist. I don’t like wading into conversations about what is and isn’t art. Different things resonate with different people, and I’m not interested in gatekeeping creative expression.

While I dismissed those kinds of critiques of AI art, I think it is becoming hard to ignore that there are more pressing criticisms of the technology. They just have a lot less to do with the art itself and everything to do with how the technology has been developed, and who is benefiting from it.

A look below the surface of AI art

Machine learning has proven to be a powerful tool for advancing technology. The thing is, for machine learning to work, you need to provide hundreds of thousands of data points. We participate in machine learning every time we look anything up using a search engine. The keywords we enter, the links we click on, and how long we interact with a page are all used to help search algorithms become more effective.

AI art generators are also dependent on machine learning. It might be naive, but I’m going to assume the original intention was to only use works of art in the public domain to train the AI. While there was no way for those artists to predict that their art would be used in this way, at this point, no one owns that intellectual property anymore. I think there is an argument for the transformative nature of AI art somewhere in there. Regardless, it is hard to argue that this harms anyone.

Now, I’m assuming things started to get murky when it came to automating the process of finding, downloading, and inputting data to train the AI. Doing this by hand would take an unreasonable amount of time, and that’s before even taking into account identifying the artists and getting their consent.

One solution was to allow users to help train the AIs. Many hands make light work, after all. Of course, there’s no one moderating who is uploading what, which invariably means it’s been a total free-for-all. It’s clear that a lot of users aren’t bothering to get consent from the creators of the work they’re uploading. More troubling are the instances where artists have explicitly denied permission for their art to be used this way, and people have uploaded it anyway. I couldn’t say if it occurred to anyone that some artists might mind having their art used in this way, or if they just didn’t care. I don’t know if intent matters at this point.

Ever since it became possible to upload images online, art theft has been a problem. This has always been the double-edged sword of a free and open internet. What’s more pernicious about using art theft to train AI is that once something becomes part of a data set, there is no extricating it from the system. Sort of like how you can’t remove a specific sheep’s wool from the sweater you’re already wearing.

Does this mean the technology is unethical?

There is a simple answer and a more complex answer. Technology in a vacuum cannot be inherently ethical or unethical. It’s the human component that introduces ethical value. We do this in two ways: either in how we create/develop technology, or in how that technology is used. Without humans to create and use technology, it is effectively an inert lump.

That being said, AI art generators are clearly unethical because they were developed on a foundation of art theft. (And, some might argue, designed to replace human artists to fill the ever-growing demand for “content”.) While some have tried to deny that art theft is a component of AI art, the situation over on ArtStation has made it clear that unethical practices are ongoing. There are clearly no safeguards to prevent users from inputting art they have no ownership over.

A tweet by @tsurudraws stating: “the fact that AI art generators are getting absolutely BROKEN today is proof that 1) AI prompters absolutely rip folks art from artstation without permission 2) protests work” followed by images of tweets from AI prompters complaining that the protest is ruining their generative art.

The problem is bigger than AI art generators

When you place AI art generators within the context of Silicon Valley and tech startup culture, it’s not in the least bit shocking that the technology has turned out to be pretty problematic. This is far from the first time that an idealistic technology has ended up having pretty big, negative social consequences. The problems in Big Tech, I think, boil down to who is in the room making the decisions about direction, functionality, and implementation.

Firstly, diversity, or the lack thereof, has always been an issue in Silicon Valley and, by extension, Big Tech. You can find the stats in this report from the U.S. Equal Employment Opportunity Commission. The bottom line is that the tech industry skews disproportionately white and male, even more so at the executive level. While the report doesn’t touch on class, we can assume it becomes a factor in who has the ability to amass the necessary venture capital for their startups, which also impacts the kinds of perspectives in the room.

Secondly, there is no requirement in IT career development that people be taught ethics, or even just how to identify and correct for their own biases. Without a grounding in ethics, no one has the very important conversations about the impact of technology on vulnerable populations until it becomes an optics problem. In fact, I’d be willing to wager that, most of the time, it doesn’t even occur to anyone to question whether a technology should be made at all.

These two problems compound, resulting in a seeming inability to predict the social harms of the technology being created. There have been too many instances of hard-coded racial bias in the technology we are becoming more and more reliant on. And until we address both of these issues, that’s not going to change.

An excerpt from Ellen Ullman’s book, Life in Code: A Personal History of Technology, where she describes a conversation among software engineers that rapidly descended into talk of forced eugenics, and how little it seemed to bother them that they were proposing the same ideas that led to the Holocaust in World War Two.

Can AI art be salvaged?

What would it take to salvage AI art generators? Probably starting over from scratch and implementing rigorous safeguards to ensure that nothing can be used to train the AI without vetting and obtaining fully informed consent from creators. Do I think that’s going to happen? Probably not.

The sunk-cost fallacy comes into play here. Building any kind of AI is a very expensive and labour-intensive venture. It’s a huge ask to get people to dump all that in the garbage and start over completely from scratch.

Moreover, and this is where my cynicism comes out, there will always be people who believe that all means are justified in the name of human progress — even if that means eradicating the human component altogether. And that’s not something I see society evolving past any time soon.

It’s like something out of a novel…

The more I think about it, the more I’m convinced AI art is truly cyberpunk. Not because it’s so high-tech and futuristic, but because it exploits the labour of generations in order to remove the human element from something that is a fundamental aspect of humanity. All so that people, but mainly companies, can benefit monetarily. I’d say AI art fits very well into a genre defined by its dehumanizing, high-tech future, wouldn’t you?

“The future is already here, it’s just not evenly distributed yet” — William Gibson
