Have you ever wondered if a machine can truly be creative—or if creativity is something uniquely, irreducibly human?
Welcome to FreeAstroScience, where we break down complex ideas into clear, human-centered understanding. We're glad you're here because today, we're exploring one of the most fascinating cultural debates of our time: the relationship between artificial intelligence and art.
In the summer of 2024, something remarkable happened. The influential journal October released a special issue—number 189—titled "A Questionnaire on Art and Machine Learning." It gathered voices from 24 artists, theorists, and curators, each wrestling with questions that feel almost existential: What does it mean to "make art" when algorithms can generate images? Who is the author when a machine participates in creation? And perhaps most haunting—what gets lost, or gained, when human hands let go of the brush?
This isn't just academic hand-wringing. It's a map of where we stand at a genuine crossroads. If you've ever used a text-to-image generator, typed a prompt into ChatGPT, or simply scrolled past an AI-generated image without realizing it, these questions touch your life directly.
Stick with us. By the end of this piece, you'll see AI art not as a threat or a gimmick, but as a mirror—one that reflects back everything beautiful and broken about how we create, consume, and connect.
Why Does This Questionnaire Matter?
The curators, Michelle Kuo and Pamela M. Lee, noticed a dramatic shift between 2022 and 2023. At first, many artists feared AI would "destroy" the meaning of art. Then, almost overnight, using AI-generated images became ordinary—even routine—for countless creatives.
This wasn't just technological adoption. It was a cultural earthquake.
The questionnaire asked pointed questions: How are artists collaborating with, changing, or critiquing AI systems? Does generative AI represent something fundamentally new—or is it just another tool, like the camera or the printing press? What biases hide inside these systems? What ecological costs?
The responses weren't unified. They clashed, converged, and sometimes contradicted each other. That's precisely what makes them valuable. We don't need a single answer. We need a richer conversation.
How Do Artists Actually Work With AI?
Let's break down the four main orientations that emerged from the questionnaire. Think of these as different lenses through which artists view the same technology.
AI as a Creative Partner
Some respondents see AI not as a threat to creativity but as a companion—a kind of cognitive extension that reveals things humans might miss on their own.
K Allado-McDowell describes how deeply using AI image tools changed their own perception. After spending hours with Midjourney, they noticed the visual signatures of AI appearing even in psychedelic visions during ceremony. The mind, they argue, absorbs whatever it practices. We become what we use.
"The tool digs deeper into the plasticine mind-sphere and flesh. Becoming a power user means being changed by a medium."
This isn't naive optimism. It's acknowledgment that tools reshape their users. A brush changes the painter. A camera changes the photographer. AI changes us too—whether we're conscious of it or not.
Nancy Baker Cahill pushes this further. She suggests that, when guided by artistic intention, machine learning could introduce "forms of attention, responsibility and inclusion into everyday technologies." Not just pretty pictures. Real care, coded into systems.
Holly Herndon and Mat Dryhurst have experimented with training AI on human voices through communal "call-and-response singing" sessions. Their 2019 album Proto emerged partly from these group ceremonies, where audiences literally lent their voices to shape the AI's dataset.
Here's the aha moment: AI doesn't create from nothing. It creates from us—from what we feed it, from the communities that shape its training data. The question isn't whether AI has creativity. It's whose creativity gets encoded, amplified, or erased.
The Critical Voice: Bias, Cost, and Power
Not everyone views AI as a partner. A significant group of respondents approaches it with skepticism—even opposition.
Kate Crawford, a leading critic, challenges the mystical language often wrapped around concepts like "latent space" (the mathematical realm where AI models generate images). She reminds us that latent space is "the product of statistics and material infrastructures."
| Common AI Myth | Reality Check |
|---|---|
| "AI creates from nothing" | AI compresses billions of human-made images and texts, then predicts new combinations |
| "Latent space is infinite and neutral" | Training datasets are often dominated by e-commerce images and stock photography |
| "AI is objective" | Datasets carry the biases of who created and labeled them |
| "AI art is costless" | Massive energy consumption, underpaid data labelers, extracted intellectual property |
Crawford's research shows that major training datasets like LAION-5B (used for Stable Diffusion) draw heavily from Shopify, eBay, and Pinterest. The images aren't selected for artistic or cultural significance. They're there because they have useful text labels—often written for search-engine optimization, not human understanding.
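Crawford's claim that latent space is "the product of statistics" can be made concrete with a toy sketch. Nothing here resembles a real diffusion model — the data is random numbers and the method is plain PCA via SVD — but it shows the core move: compress many examples into a few statistical coordinates, then decode new points from that compressed space.

```python
# Toy illustration (not any production model): a "latent space" built
# purely from statistics. We compress 8-dimensional fake "images" down
# to 2 latent coordinates with PCA (via SVD), then decode a nudged
# latent point back into a "new" image.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 8))          # 100 tiny fake "images"
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean)     # principal directions of the data
encode = lambda x: (x - mean) @ vt[:2].T  # image -> point in 2-D latent space
decode = lambda z: z @ vt[:2] + mean      # latent point -> reconstructed image

z = encode(data[0])                       # a point in latent space
new_image = decode(z + 0.1)               # nudge it: a statistically "new" image
print(z.shape, new_image.shape)
```

The point of the sketch: every "new" image is a recombination of directions learned from the training data's statistics — there is nothing in the latent space that the dataset did not put there.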
Simon Denny warns about "the cultural force" of tools like Midjourney. When millions use the same generator, its aesthetic becomes the default. It "washes everything else away."
Trevor Paglen connects AI image systems to surveillance infrastructure. The same techniques that generate "the pope in a puffer jacket" are rooted in military computer vision and facial recognition. His work deliberately makes visible what these systems usually hide.
American Artist has spent years exploring how AI continues patterns of systemic racism—especially in predictive policing. Their 2019 installation My Blue Window simulated driving through Brooklyn while an AI interface directed where to go based on "crime prediction." The absurdity? Nothing actually happened. The system just declared "crime deterred."
"I'm trying to change the perception that everything we're experiencing with AI is new... when in reality it's an optimization of the narrow thinking that causes most systemic problems."
Experimental Visions: Imagining New Possibilities
A third group treats AI as a site of experimentation—not just a tool to critique, but a new cultural territory to explore.
Ian Cheng imagines AI as a "cognitive symbiont" that could grow alongside humans. Not fusion. Coevolution. He pictures his children growing up with AI the way he grew up with smartphones—not as something alien, but as part of the fabric of life.
Alexander Kluge, the legendary German filmmaker and writer, sees AI as a way to extend the "fourth canon" of storytelling: constellation. Rather than linear narrative (epic, lyric, dramatic), AI enables vertical storytelling—digging into archives, juxtaposing images across centuries, surfacing forgotten histories.
He used AI image generators as a "virtual camera" to reimagine Aby Warburg's famous Mnemosyne Atlas—those mysterious panels tracing the migration of images across centuries of cultural memory.
Lev Manovich, a pioneer of digital media theory, places generative AI in a longer historical arc. He sees it as the latest chapter in a century-long project of decomposing and recombining visual elements—from the Bauhaus's basic design courses to Photoshop's filters to neural networks trained on billions of images.
| Historical Phase | Image Creation Method |
|---|---|
| Pre-1839 | Manual creation (drawing, painting, carving) |
| 1839–1970s | Optical capture (photography, film, video) |
| 1970s–2010s | Digital simulation (3D graphics, CGI) |
| 2021–present | Statistical prediction (generative AI) |
Manovich calls AI images "predictive media." They don't record what exists. They predict what could exist based on patterns learned from the past.
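Manovich's "predictive media" idea can be sketched in a few lines. The example below is a deliberately crude stand-in for a generative model — a bigram model on a made-up sentence, not anything Manovich or the October respondents describe — but it shows the mechanism: the system never records new text, it only samples what tended to follow what in its training data.

```python
# A minimal sketch of "predictive media": a bigram model that learns
# which word follows which in a (made-up) corpus, then "generates"
# text by sampling those learned patterns.
import random
from collections import defaultdict

corpus = ("the brush shapes the painter and "
          "the camera shapes the photographer").split()
nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)                     # record observed successors

random.seed(1)
word, out = "the", ["the"]
for _ in range(6):                       # predict, don't record
    word = random.choice(nxt[word] or corpus)
    out.append(word)
print(" ".join(out))
```

Every word the loop emits already existed in the corpus; only the sequence is "new" — which is the sense in which predictive media predict what could exist from patterns of what did.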
The Invisible Infrastructure
A fourth orientation focuses on what's hidden: the servers, the datasets, the labor, the code.
David Joselit draws a surprising parallel. He compares AI training on massive datasets to how the Louvre was "trained" on looted artworks during the Napoleonic era. Both processes compress human cultural production into a system that generates new meanings—and new power structures.
The difference? Museum curators traditionally select exemplary works for inclusion. AI "curators" are often low-paid workers tasked with excluding the non-normative—protecting averages, not discovering excellence.
Fred Turner reminds us that AI isn't magic. It's industrial extraction. Companies mine human creativity the way mining companies extract ore. They process it through machines. They sell the output.
"Show me an AI tool and I will show you a labor violation." — Alexander R. Galloway
Antonio Somaini offers a theoretical framework. He argues that "latent space"—that hidden mathematical realm inside AI models—now governs the relationship between what can be seen and what can be said. Prompts become a new form of language. Images become "latent-space visualizations."
Artists like Grégory Chatonsky have started embedding new data points into open-source models like Stable Diffusion—training them on local archives to generate alternative, counterfactual histories. Holly Herndon and Mat Dryhurst have explored ways to influence how future AI models will represent them by flooding the web with specific images.
What Are the Core Tensions?
From this rich conversation, several tensions stand out:
1. **The Displaced Author.** The artist is no longer an isolated genius. They're a node in a network of models, data, infrastructure, and community. Creation becomes negotiation with the technical-cultural environment.
2. **Surface vs. Structure.** AI-generated images look like surfaces—pretty, weird, sometimes uncanny. But beneath them lie deep layers: datasets, filters, omissions, curatorial choices. Working only on the surface without questioning the architecture is a risk.
3. **Care vs. Extraction.** For some, AI is a machine of capture—extracting data, labor, imagination. For others, it's a potential device of relation, capable of giving visibility and voice to marginalized communities.
4. **Aesthetics and Ethics Are Inseparable.** No AI image is neutral. Every output has a genealogy, a cost, a political context. Artists can't escape responsibility for the tools they choose.
5. **Symbiosis or Resistance?** Some imagine co-evolving with AI. Others seek glitches, errors, and deviations as critical practice. The space between these tendencies is where the most interesting work happens.
What Does This Mean for You?
If you're an artist, a student, a curious reader, or just someone who's seen AI-generated images scroll past their feed—this matters.
We're not spectators. Every time we use a prompt, share an AI image, or decide not to, we participate in shaping what AI becomes. The training data of future models includes what we create and share today.
Stephanie Dinkins puts it beautifully: our stories are our algorithms. What narratives do we want to encode? What communities do we want to include? What futures do we want to make possible?
Amelia Winger-Bearskin, an Indigenous artist and educator, asks her students to name AI stories they know. Almost all are dystopian. But she pushes back: if we can't imagine a world where AI helps rather than harms, why do we keep building it?
The answer, she suggests, lies in creation stories—the kind that embed values, knowledge, and tools for future generations. We need creation stories for AI. Not Frankenstein. Not Terminator. Something that honors where we came from, who gets included, and what seeds we're planting.
A Moment of Reflection
Here at FreeAstroScience, we believe the sleep of reason breeds monsters. That's why we exist—to keep your mind active, your curiosity alive, your questions sharp.
AI isn't coming. It's already here. It's reshaping how images are made, how meaning circulates, how power concentrates or disperses. We can't unknow it. But we can understand it better.
The artists in this questionnaire aren't prophets. They're explorers. They make mistakes. They contradict each other. They sometimes fail spectacularly. That's how discovery works.
What they share is a refusal to accept the technology as given—a commitment to intervene, to question, to create in the cracks and margins where something unexpected might still emerge.
Final Thoughts: Where Do We Go From Here?
The October questionnaire doesn't offer a single answer. It offers a constellation of perspectives—some hopeful, some wary, all engaged.
Maybe that's the point. Art has always been a space where we work out what it means to be human. Now, it's a space where we work out what it means to create alongside machines—and what we're willing to give up or fight for in the process.
The images AI generates aren't just images. They're compressed versions of human culture, filtered through corporate datasets, shaped by prompts we write, and embedded in systems we often don't understand.
But understanding is possible. And that's where artists, critics, and curious minds like you come in.
Come back to FreeAstroScience.com to keep learning. We're here to make complex scientific and cultural ideas accessible—because the world is too interesting to leave to the experts alone.
Sources
Carlo A. Bachschmidt, "Arte e AI," MagIA, December 14, 2025.
Michelle Kuo & Pamela M. Lee (eds.), "A Questionnaire on Art and Machine Learning," October 189, Summer 2024, MIT Press, pp. 6–130.