I think AI art is in an ethical gray area. If the AI is "trained" on public-facing images, one could argue that it "sees" them and develops "an opinion" about how they are drawn or photographed.
Engage with me in a thought experiment: say AI is sentient and independent, i.e., an android like Data from Star Trek. It observes Rembrandt's entire body of work, then produces a Rembrandt-style painting of Capt. Picard. (As an aside, do pronouns count here? If we say "they", do we have a deeper emotional connection? When do we start to change our language concerning AI?)
Would we then say that, because Data observed another artist's style and adjusted it to fit his own, "it isn't art because an artificial intelligence created it"?
As I understand it, this is the same process current AI uses: observing art, analyzing it, and reproducing the style. This conversation opens (and reopens) a whole world of currently unanswered questions. What constitutes original art? Did Warhol create art, or just reproduce what other people had made and present it in his own unique style? I think there was (and still is) some debate around the "artistic" nature of the work he and others like him created.
In addition to using words to describe what you want, you can use an image as a “seed” for generation.
This is my seed image, created by me, in Windows Paint:
Generated with Stable Diffusion from the seed image above, using the single keyword "face", ordered roughly from least to most complex:
Also Stable Diffusion with the same seed, keyword “landscape”:
It stuck to the brief provided: green on the right, black on the left, red background.
Should the art be based on something original that the artist drew themselves but with generated details? We can observe that the art style is obviously influenced by other hand created art… But isn’t that art? Seeing (stealing) a style and making your own interpretation?
But, I can hear some of you saying, it was trained on the art of people who didn't agree to that use of their publicly viewable work. I hear that argument, and I'm not making any assertion of rightness or wrongness. I think the AI companies generally have good standing unless they trained on art that is behind a paywall. If the companies that did the training are sued (which is more than likely), the probable outcome is that, if found culpable, they would agree to use only the work of artists who opt in. That would be only a minor setback. I don't think the same can be said for ChatGPT, the program that generates text, as you can't very well copyright individual words. More on that later.
Would requiring a seed image to generate your AI art help alleviate the current angst and outrage people are expressing? I suspect not. It's one giant step toward making art copyable rather than a skill in and of itself. It's a vast simplification of the traditional process: copying (or being influenced by) the art of one or more individuals and then adjusting it by hand to fit your own perspective.
So maybe we set a limit in this new world where copying another person's art style is easy: we shouldn't allow the public to copy a style simply by supplying the artist's name. That's a pretty low barrier to people who are determined to copy Rembrandt, but it is a barrier nonetheless.
I think that, as with "forgeries" of famous artists' work, "real" art will still be considered more valuable. It will be like vinyl records: only the hipsters who care will be into "real" art; the rest of us will just keep listening to MP3s, CDs, and streaming music.
There is a very real possibility that many artists will find themselves out of a job, with only the well-connected or very talented able to make a living on their art skills alone. Like it or not, I think we are looking at an inevitability. The world will still need artists, but they'll operate differently. There will be far fewer well-known artists, but there may be a similar number of "creatives" out there, just with more focused audiences. It sucks and I wish it were different.
The same could be said of writing. Currently, ChatGPT is capable of producing a more than adequate textual interpretation of a prompt. The prompt "Tell me a story" resulted in a passable four-paragraph story about a girl named Lily and her climb to a hilltop with a great view, complete with a denouement tying it all together. I wouldn't have been able to tell it was created by AI. It was predictable and straightforward, but serviceable. Copywriting is soon going to change: from writing copy to reading what an AI bot has written, perhaps making slight alterations to that output, and pushing a button that says "I approve."
Indeed, if you plug the first sentence of this essay into ChatGPT, you get much the same overall content, which could be massaged into an article making similar points. I would say it would lack the feeling and weight of my writing, but would you be able to tell?
Graphic designers, web coders, and copywriters might merge into one role, with far fewer people needed overall to do the same work as before. I suspect that when the dust settles, there will still be creative positions that look different but are no less creative. This is just another turn of the inevitable wheel of progress: we must adapt or become irrelevant.