Refik Anadol


Last week’s speaker in the Penny Stamps Speaker Series was Refik Anadol, one of the most successful artists working primarily with AI, data, and visualization. Over the past decade or so, Anadol has produced work in collaboration with organizations and venues like the LA Philharmonic, Gaudí’s Casa Batlló, the Exosphere in Las Vegas, and the Museum of Modern Art in New York.

The lecture he gave was essentially a review of his body of work, and that of his art and design studio, which comprises about 20 people. The earlier projects he showed mostly took the form of massive light projections on the sides of buildings, which could give the impression of a totally different texture or form. More recent work tended to use data-driven visualizations, abstracting some sort of data into visual patterns displayed on a wall, building, or screen. Many of these also made use of what appeared to be procedural animations to create the impression of three-dimensionality and motion.

In some cases explicit imagery is included, such as brief video clips or things close to familiar artworks, but most of it is quite abstract. Part of the reason is that, at least for the works that are not just abstract light patterns, Anadol has made extensive use of various AI methods to create imagery, especially generative adversarial networks (GANs). This phase of his work seems to have come out of his participation in the Google Artists and Machine Intelligence (AMI) residency in 2016, which led to a collaboration with the NASA Jet Propulsion Laboratory. For anyone who has followed generative AI for a few years, this type of shifting, slightly unreal imagery will seem quite familiar, although it is not normally seen at this scale.
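That shifting quality has a fairly simple mechanical explanation: GAN animations are typically produced by interpolating through the model’s latent space, so that each frame decodes a point slightly further along a path between latent vectors. Here is a minimal sketch of the idea in Python. The `generate` stub is a fake decoder I made up so the example runs end to end; nothing here reflects Anadol’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(z, proj=rng.standard_normal((512, 3 * 64 * 64))):
    """Stand-in for a pretrained GAN generator (e.g., a StyleGAN decoder):
    maps a 512-dim latent vector to an RGB image. A fixed random projection
    fakes the decoder so this sketch is self-contained."""
    return np.tanh(z @ proj).reshape(3, 64, 64)

z_start = rng.standard_normal(512)
z_end = rng.standard_normal(512)

# Linear interpolation through latent space: every intermediate vector
# decodes to a plausible image "between" the endpoints, which is what gives
# GAN animations their continuously morphing, slightly unreal look.
frames = [generate((1 - t) * z_start + t * z_end)
          for t in np.linspace(0.0, 1.0, 60)]
```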

While Anadol has clearly been very successful in terms of high-profile installations, some critics have been quite harsh.1 2 3 It’s easy to see why his work can be polarizing: on one hand, it obviously involves a high level of technical competence (in both software and hardware). On the other hand, it’s easy to dismiss it as abstract, lifeless, and repetitive. At its best, some people seem to get a kind of transcendent experience from the ever-shifting visuals. At its worst, a lot of it seems comparable to a very expensive screensaver.

Hearing Anadol’s talk, I was struck by how similar his work was, in certain ways, to pretty basic works in interactive art, even to ideas that I played around with as a student many years ago. To be clear, Anadol’s work is on another level in terms of technical sophistication, involving lots of custom software and massive projection installations, and I have no pretensions that I could replicate them. Nevertheless, it’s interesting how many of the basic techniques remain the same.

For example, one of my projects in a class designed to encourage collaboration between artists and engineers was an interactive sculpture that used microphones to pick up the sound in the room, processed that audio on a computer, and used the transformed output to drive motors that made noise by vibrating or striking a series of metal cylinders. (It had a very industrial feel.) Another was an umbrella that played audio of the sound of rain when you stood underneath it (which I personally thought was quite delightful).

Fast forward X years, and one of the pieces Anadol showed was an installation that used the sound of the room to update the parameters of a data-driven animation (in this case, one that used neural networks to navigate through a latent space of images and generate visuals accordingly). In another, he even included a component that adjusted the smell in the room based on audience presence and what was being shown on screen (though disappointingly, apparently only in another abstract data-transforming way).
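To make that mechanism concrete, here is one plausible audio-to-parameter mapping: the room’s loudness sets the step size of a random walk through the same kind of latent space as in the earlier sketch. Everything here (the RMS loudness measure, the gain value, the walk itself) is my own assumption about how such a piece could work, not a description of Anadol’s system:

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim = 512
z = rng.standard_normal(latent_dim)  # current position in latent space

def audio_level(block):
    """Root-mean-square loudness of one block of microphone samples."""
    return float(np.sqrt(np.mean(block ** 2)))

def step(z, block, gain=0.5):
    # Take a random-direction step scaled by room loudness: a noisy crowd
    # makes the imagery shift quickly, while silence leaves it nearly
    # frozen. Each new z would then be decoded into the next frame.
    direction = rng.standard_normal(latent_dim)
    direction /= np.linalg.norm(direction)
    return z + gain * audio_level(block) * direction

# One frame's worth of (fake) microphone input:
z = step(z, rng.uniform(-1, 1, 2048))
```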

Possibly his first foray into generative art was WDCH Dreams, a project that came out of his Google AMI residency. Using the archive of the LA Philharmonic (both visual and audio), it created a truly massive projection (using 42 high-resolution projectors) on the outside of the Walt Disney Concert Hall, designed by Frank Gehry. Anadol had already worked with this building (at least in miniature) for his UCLA MFA thesis, in which a data-driven light projection was controlled by the music being played within.

More recently, his Unsupervised — Machine Hallucinations was exhibited at MoMA in New York, which he claimed was the first piece of AI art exhibited and acquired by the museum. A variation on many of his other GAN-based works, this large wall projection is based on MoMA’s collection of artworks and metadata, but with the goal of never reproducing any of those works exactly. In the same vein as his other pieces, it uses the motion of visitors and local weather data to guide the generation of images. This idea of transducing one modality of data into another has remained consistent throughout his career, with only relatively minor variations.
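As a sketch of what that transduction amounts to in practice, one could normalize a handful of external signals into a single conditioning vector that then steers generation (scaling noise amplitude, palette, or the speed of a latent walk). The signal names and ranges below are purely illustrative assumptions, not anything Anadol has described:

```python
import numpy as np

def condition_vector(visitor_flow, wind_speed, temperature_c):
    """Blend heterogeneous signals into one normalized vector in [0, 1].
    In a real installation these might come from cameras and a weather
    feed; here they are hypothetical scalars with assumed ranges."""
    signals = np.array([
        visitor_flow / 100.0,           # people passing per minute, 0-100
        wind_speed / 30.0,              # metres per second, 0-30
        (temperature_c + 20.0) / 60.0,  # roughly -20 to 40 degrees C
    ])
    return np.clip(signals, 0.0, 1.0)

# A busy, windy, mild afternoon as one conditioning vector:
c = condition_vector(visitor_flow=42, wind_speed=8.0, temperature_c=15.0)
```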

It’s quite understandable why Anadol would reuse this technique multiple times. Given that the imagery itself can be quite abstract, claiming that it is grounded in some sort of data perhaps gives it more emotional or intellectual heft. On the other hand, now that so much of our lives has been datafied, it’s unclear how much novelty remains here. One might expect a piece that tracked the audience’s movement to resonate more with concerns about surveillance and nudges, but perhaps all that too is now passé.

Even more disappointingly, many of the works seem to reuse certain algorithmic tricks in converting data into visualizations. While they are not all the same, several do seem to use the same or a very similar kind of particle simulation, which gives the illusion of waves of colour crashing around a 3D space (despite being projected onto a flat wall). The effect is impressive when you first see it, but it starts to look very familiar, no matter what the source of the underlying data might be.
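For reference, the basic recipe behind that wave-like look is not exotic: advect a large cloud of particles through a smooth, swirling force field in 3D, then perspective-project them onto the flat image plane. Here is a toy version, using sinusoidal fields as a cheap stand-in for the curl noise such systems often use; all the constants are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(-1, 1, size=(10_000, 3))  # particle positions in a cube
vel = np.zeros_like(pos)

def update(pos, vel, t, dt=0.016):
    # Smooth, swirling forces make the cloud clump and crash like waves;
    # velocity damping (0.95) keeps the motion fluid rather than explosive.
    force = np.stack([
        np.sin(3.0 * pos[:, 1] + t),
        np.sin(3.0 * pos[:, 2] + 1.3 * t),
        np.sin(3.0 * pos[:, 0] + 0.7 * t),
    ], axis=1)
    vel = 0.95 * vel + dt * force
    return pos + dt * vel, vel

def project(pos, focal=2.0):
    # Perspective projection onto the wall: dividing x and y by depth is
    # what gives the flat image its illusion of three-dimensionality.
    depth = pos[:, 2] + 3.0
    return focal * pos[:, :2] / depth[:, None]

for frame in range(300):
    pos, vel = update(pos, vel, t=frame * 0.016)
    xy = project(pos)  # these 2D points would be splatted to the screen
```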

To be honest, a lot of the installations he showed tended to look quite similar to each other, differing primarily in terms of the space they were projected onto. In part this may reflect the startup costs of writing the code, but more generally it seems to be an unfortunate property of a lot of generative art: big differences in algorithms can still lead to work that looks relatively familiar. An artist like Zach Lieberman takes full advantage of this fact by simply producing a relentless stream of images, such that the small differences become the interesting part. Because Anadol’s work consists of a much smaller number of big installations, they end up seeming less interestingly distinct from each other.

The other thing that struck me was the ways in which AI enters into these works, and how people in the audience responded to it. Because Anadol was early to the field, many of his earlier projects used techniques that now seem primitive compared to newer image-generation systems. As a result, many of these animations have a familiar unstable or flickering look as successive images are generated. In some cases, the resulting lack of realism can be beneficial, as the results look more dream-like, but it’s amazing how quickly they can also come to seem dated.

It was also not clear to me how much Anadol has updated to more recent AI models, but he is certainly engaged with other forms of tech, in some cases almost to the point of self-satire or unintentional absurdity: he has apparently been a big advocate of NFTs, and one of his recent projects involved blockchain (as a way of sending money to remote communities). For another, he used EEG to monitor people’s brains while they looked at art. Both of these projects involved slightly cringe-worthy elements. One gathered imagery and art from the Yawanawá people in the Amazon (the people who inspired Avatar, Anadol emphasized) to be turned into generative AI art. The other was meant to help a neuroscientist figure out what sort of art could put people into a “flow” state. In addition, the descriptions Anadol provides unsurprisingly lean on terminology that anthropomorphizes algorithms: “dream”, “imagine”, “memory”, “consciousness”, and so on.

Given the audience questions, it is clear that lots of people still adore his work and remain enthralled by his use of AI. Many questions were highly complimentary, although a few were more subtly critical. One questioner asked what blockchain technology Anadol had used, and what he ate. My assumption is that the questioner was trying to get at his moral commitments (e.g., was he concerned about the environmental or financial implications of blockchain or meat production?), but that is speculation on my part, and Anadol mostly dodged the question. Another asked about the carbon costs of using AI. Anadol pointed out that running trained models is far less energy-intensive than training large foundation models, that he uses carbon-neutral compute on Google Cloud, and that the projector in the theater was probably comparable in energy consumption to the ones he uses in installations. The questioner did not, however, ask about the many flights Anadol was planning to take for a project that involved visiting all of the jungles around the world.

A third questioner asked about his use of millions of images, where he sourced these, and the implications for copyright. Anadol noted that he only uses “public” data, that he never directly shows any of the source examples (only interpolating between them, like people do when they create art from their experiences, he suggested), and that he had talked to some very good copyright lawyers.

Like most artwork based on data, Anadol’s projects have a natural affinity with ideas related to archives, and in some cases he has pushed quite explicitly in this direction. A project from five years ago called Virtual Archive seems to be a kind of cylindrical virtual-reality interface into an archival collection of 1.7 million documents from the SALT archive in Turkey. A related project, Archive Dreaming, turns this into more of an immersive visual experience, and blurs the line between retrieval and generation. Leaning more into the idea of the world as archive, his Artificial Realities: Coral project uses StyleGAN2 and Stable Diffusion to generate synthetic images of undersea coral, based on a large collection of images. Although the connection seems a bit tenuous, Anadol alluded to helping scientists preserve this disappearing resource.

For what it’s worth, I wonder if Anadol’s work itself is badly served by the limitations of archives. He has a beautiful website (linked throughout), which features video and stills of many of his works. Ultimately, however, I suspect that a big part of their impact comes from experiencing them in person. If nothing else, he has been highly successful in drawing crowds, and I imagine experiencing these works along with many other people would heighten the feeling. Somehow, that does not transfer well to a lecture format, and I got the sense that most of the people at his talk were ultimately left feeling somewhat cold.


  1. MoMA’s Daydream of Progress, Travis Diehl, The New York Times, Dec. 15, 2022.

  2. MoMA’s Glorified Lava Lamp, Jerry Saltz, Vulture, Feb. 22, 2023.

  3. Refik Anadol’s “Unsupervised”, R.H. Lossin, e-flux, March 14, 2023.