This article is by Sydney Firmin and originally appeared on the Alteryx Data Science Blog here: https://community.alteryx.com/t5/Data-Science-Blog/Creative-AI-What-the-Field-of-AI-Artwork-Can-Teach-Us-About-Deep/ba-p/368988
On October 25th, 2018, the British auction house Christie’s auctioned a piece of artwork created using artificial intelligence. It was the first piece of “AI artwork” featured in a Christie’s auction. Originally predicted to sell for between $7,000 and $10,000, the final price ended up being $432,500 (including the fees paid to the auction house). The piece, titled Portrait of Edmond de Belamy, was created by the art group Obvious using an AI architecture introduced by Ian Goodfellow called a Generative Adversarial Network (GAN). Fun fact: the name Belamy is a play on Goodfellow. (Bel ami is French for good friend. Good-friend, good-fellow. You get it.)
You can read Christie's publication about the artwork and the auction here. The hype around this auction was high. Despite computer-assisted artwork having been around since the 1950s, this auction seemed to signal to many people the arrival of AI art into the mainstream art scene. A piece of AI artwork selling for hundreds of thousands of dollars might seem like an amazing story on its own. However, the story of Edmond de Belamy is also riddled with scandal and intrigue.
One of the reasons this piece gained as much attention as it did was that it was marketed as being entirely “created by AI,” implying little to no human involvement. However, the process of using AI to create artwork is heavily curated by a human. When an AI “creates artwork,” it is a human who selects the algorithm to use, curates the training data fed into the algorithm (pre-curation), and ultimately selects the final outputs from the hundreds to thousands of generated images (post-curation).
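The division of labor described above can be sketched as a toy pipeline. Everything here is hypothetical: a real generator would be a trained model producing images, not a seeded random number, but the shape of the workflow — human pre-curation, machine generation, human post-curation — is the point.

```python
import random

def generate(seed):
    """Stand-in for a trained generative model (e.g. a GAN sampler).
    A real system would return an image; this toy returns a score in [0, 1)."""
    return random.Random(seed).random()

# Pre-curation: the human assembles and filters what goes in
# (represented here as simply choosing which seeds are allowed).
candidate_seeds = range(100)

# Generation: the model produces hundreds to thousands of outputs.
outputs = [(seed, generate(seed)) for seed in candidate_seeds]

# Post-curation: the human picks the handful of outputs worth keeping.
keepers = sorted(outputs, key=lambda pair: pair[1], reverse=True)[:3]
print([seed for seed, _ in keepers])
```

Note that the machine only runs the middle step; both ends of the pipeline are human judgment calls.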
The publicity generated around the single statement "this portrait is not the product of a human mind" was considerable. Many have argued that this is how Obvious ended up with a piece in Christie's, despite it being a misleading statement that misrepresents the reality of the AI art process.
The other problem people have had with this piece is that Portrait of Edmond de Belamy is seen as highly derivative of some of AI artist Robbie Barrat’s earlier projects. The members of Obvious used a great deal of Barrat's unmodified code, which he had posted on GitHub, to create their artwork.
Robbie Barrat, as well as the AI art community at large, was (understandably) unimpressed by the sale of the artwork at Christie’s, and many were concerned that the debut of AI artwork on a larger stage had been handed to a group of people who weren’t truly innovative, or even a part of the AI art community.
I wanted to start the article this way because I think it is an interesting entry point to discussing AI art. The hype built around the idea of an AI-generated portrait is reflective of where we are (as a society) in understanding AI. People got excited about this piece because the way it was marketed implied the AI that created it was autonomous. The ensuing flurry of media attention caused the painting to sell at a very high price.
Because there is not a strong understanding of what AI is on a societal level right now, it is easy to hype up. Although Portrait of Edmond de Belamy was not a particularly innovative or unique piece, the way it was marketed really spoke to people. That in and of itself makes the piece and the auction meaningful and historic to the art world.
If you are interested in learning more about the Edmond de Belamy story, the Verge has a great article, as does the technology and art blog Artnome, which includes an interview with a member of Obvious. Dr. Ahmed Elgammal has also written a piece discussing the implications the auction might have for the greater AI art market.
All this being said, currently, AI is not autonomous. Any AI application, from artwork to autonomous cars, is completely defined by the human process around it.
AI and Art
The field of AI art raises questions like “At what point should the credit for a piece of artwork be given to the algorithm?” or “Can anything an AI creates ever truly be art, as it lacks intention on the part of the AI?”, and of course, the old standby: “What is art?”
As the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers, Dr. Ahmed Elgammal actively examines these types of questions. In his 2018 TED Talk, AI vs. art: Can the machine be creative?, Elgammal states that, in his opinion, AI is the first invention since photography that can change what art is.
Elgammal identifies AI art as being a type of conceptual art, which is a field of art where the idea behind the work is more important than the material outcome (the piece of artwork that is produced). With AI art, the process of collaborating with an algorithm to create art is a defining part of the artwork, as well as where intent comes into play in the artistic process.
Art Is in the Process
AI is not at a point where it creates art (or anything) on its own. Deep learning (and more generally, machine learning) algorithms live in a vacuum and can only do what they have been trained to do. A human needs to act to give AI direction, context, and define its purpose. In this sense, using AI for art is not unlike using AI for any other application.
Any data science project requires careful guidance from the data scientist. The data scientist defines what the machine learning algorithm knows (through the training data), which algorithm is used to learn a solution, and what an end result needs to look like.
Projects from the AI art world can teach us about how AI works, as well as help characterize the process of applying deep learning and machine learning to a data science project. With that in mind, I’d like to highlight a couple of my favorite projects that demonstrate the beauty in the process of implementing deep learning algorithms.
Artist Anna Ridler has worked on a variety of AI art projects. Personally, my favorite of her pieces has to do with training an AI to create tulips. Her work Mosaic Virus draws on the historical parallels between the “tulip mania” that swept across Europe in the 1630s and the speculation around cryptocurrencies. Part of what fueled tulip speculation in the 1630s was the mosaic virus, which causes stripes in a tulip’s petals. In this work, AI-generated tulips bloom and transform, with the presence of visible striping dependent on the value of bitcoin (bitcoin becomes the proverbial mosaic virus).
What I think is beautiful about this is it shows the human-intensive side of a successful machine learning project. The hardest part of any data science study is collecting and processing the data. To set an algorithm up for success, your data needs to be robust, as well as focused. Here, Anna shows the images of ten thousand tulips that were needed to train her learning algorithm to create Mosaic Virus. Each image is an individual tulip photographed with a black background and painstakingly labeled by hand. This type of lovingly prepared training data makes the work succeed.
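A small sketch of what this kind of hand curation implies in practice: checking that every training image is labeled and photographed consistently before it reaches the algorithm. The manifest, filenames, and labels below are all hypothetical — Ridler did this work by hand, not with a script — but the consistency checks are exactly the kind of discipline a robust, focused dataset requires.

```python
import csv
import io

# Hypothetical manifest for a hand-labeled dataset: one row per photograph.
# A curated dataset needs every image labeled, and every photo shot the
# same way (here, against a black background).
manifest = io.StringIO("""\
filename,label,background
tulip_0001.jpg,striped,black
tulip_0002.jpg,plain,black
tulip_0003.jpg,striped,white
tulip_0004.jpg,,black
""")

rows = list(csv.DictReader(manifest))

# Flag images with a missing label or an inconsistent background.
problems = [row["filename"] for row in rows
            if not row["label"] or row["background"] != "black"]
print(problems)  # -> ['tulip_0003.jpg', 'tulip_0004.jpg']
```

Scaled up to ten thousand images, every flagged row is a photograph that has to be reshot or relabeled by hand — which is where most of the project's time goes.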
The Treachery of ImageNet
Another one of my favorite AI artists is Tom White. A lecturer in computational design at Victoria University of Wellington (New Zealand), White uses neural networks in a process he calls perception engines to create ink prints.
White’s art strives to depict the world not as humans see it, but as algorithms see it. His series The Treachery of ImageNet (the title is a play on Magritte’s The Treachery of Images) leverages trained neural networks’ ability to identify the contents of an image, creating abstract images that neural networks correctly classify as a particular object.
White’s perception engine is actually a group of algorithms, and the process for creating his abstract prints is iterative. First, he feeds a group of convolutional neural networks an unfiltered collection of training images from the repository ImageNet. The neural networks then construct a “sketch” of the object represented in the training data (the machines are given a limit on the number of shapes and thresholds they can use to represent the object). The resulting abstract image is fed back into the same collection of algorithms to see whether it can be correctly labeled. This process is repeated until the abstract image is correctly classified.
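The refine-and-reclassify loop described above can be sketched with a stand-in classifier. Everything here is hypothetical: a real system scores rendered images with ImageNet-trained convolutional networks, and White's actual optimization differs in its details. This toy version just hill-climbs a list of shape parameters until a mock classifier accepts the sketch as the target object.

```python
import random

TARGET = "electric fan"   # hypothetical target class
SHAPE_BUDGET = 12         # cap on how many shapes the "sketch" may use

def mock_classifier(shapes):
    """Stand-in for a group of ImageNet-trained ConvNets.
    Confidence grows as the shape parameters approach a hidden 'ideal'
    layout; a real classifier would score rendered pixels instead."""
    ideal = [0.5] * SHAPE_BUDGET
    error = sum(abs(s - i) for s, i in zip(shapes, ideal))
    confidence = max(0.0, 1.0 - error / SHAPE_BUDGET)
    label = TARGET if confidence > 0.9 else "abstract blob"
    return label, confidence

def refine(shapes, rng):
    """Propose a small random adjustment to one shape parameter."""
    out = list(shapes)
    k = rng.randrange(len(out))
    out[k] = min(1.0, max(0.0, out[k] + rng.uniform(-0.2, 0.2)))
    return out

rng = random.Random(0)
shapes = [rng.random() for _ in range(SHAPE_BUDGET)]  # random initial sketch
label, conf = mock_classifier(shapes)
steps = 0
while label != TARGET and steps < 20000:
    candidate = refine(shapes, rng)
    cand_label, cand_conf = mock_classifier(candidate)
    if cand_conf >= conf:   # keep only changes the classifier likes
        shapes, label, conf = candidate, cand_label, cand_conf
    steps += 1

print(f"classified as {label!r} after {steps} steps (confidence {conf:.2f})")
```

The loop structure — generate under a shape budget, score, keep what the classifier prefers, stop once the label is correct — is the part that mirrors the process described above.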
The outcome is that his images create strong classifier responses in other neural networks trained on the same ImageNet repository. When an AI looks at these images, it doesn’t see abstract art; it (strongly) sees the object the image was generated to represent. Often the confidence for classifying these images is higher than what a classifier would return for an actual photograph of the object. White has effectively used AI to create an AI “platonic ideal” of a series of objects.
White iteratively works with algorithms to reverse engineer their thinking process. It's another reminder of how intensive working with machine learning algorithms can be. It also gives insight into the features that really matter to an AI, and they are not always what we think they are. This has important implications for AI applications like self-driving cars and facial recognition. White’s work can teach us about the commonalities and discrepancies between how we and AI view the world.