In December of last year, Ammaar Reshi, a design manager at the finance and tech company Brex, announced on Twitter that he had created a picture book using artificial intelligence. The tweet went viral, with artists pointing out errors that revealed the limits of AI. “Anti-gravity leaf,” “pen becomes dress i guess,” and “tiny spikes jutting out of knuckles” were a few of the comments that illustrator Corey Brickley included on a marked-up page from the book. The consensus among artists was clear: This creation was a pale imitation of the real thing.
But the problems with AI and its implications for the publishing industry go deeper than aesthetics. Shivana Sookdeo, a cartoonist and a senior designer for Scholastic Graphix, who spoke with me via Zoom from her home in Brooklyn, notes that artificial intelligence isn’t even an accurate term. “It’s not intelligent in any way,” she says. “I usually call it an image generator.” AI platforms are trained to generate images by analyzing thousands of pieces of artwork created by real people, many of whom never agreed to their images being used this way. AI-generated art is rooted in their uncompensated, uncredited labor.
The news that the cover of Christopher Paolini’s upcoming novel, Fractal Noise, was created with AI-generated art also disturbed many. Despite requests that the cover be redesigned, the publisher, Tor, chose not to do so. Sookdeo says that as it is, artists too often go uncredited when authors do cover reveals. “I think AI makes it even worse because now it’s another step of removal between the actual person who did the work and who gets the accolades or who gets the payments.”
Using AI is “inherently anti-worker,” graphic novelist Wendy Xu told me via Zoom from her home in Brooklyn. She notes that publishing is already an industry with equity issues. Many employees are overworked and underpaid—a problem made more visible with the ongoing HarperCollins strike. Xu (who is published by Harper) says that relying on AI is another way for companies to reduce costs—at the expense of workers. “It’s about cutting out people who deserve to make a living. It’s about cutting out people who deserve to be recognized for their humanity, period.”
AI is especially harmful for marginalized creators and cultures. Sookdeo has seen AI-generated portraits of “shamans” and “tribal art.” These images flatten the cultures they come from. “What tribe? Where are the images from? Did these people even consent to the original photographs?” asks Sookdeo. “It’s all blended up and smoothed away. And it severs any connection to cultural meaningfulness.”
Artists aren’t the only ones being exploited by AI, Xu says, citing a Vice article (“AI Isn’t Artificial or Intelligent”) that explains that these platforms are built on the labor of poorly paid workers doing data labeling and annotating and beta testing. Xu says that companies “want you to think it’s this magical thing made by a computer instead of the labor of tons of people in the Global South sifting through data.”
Will AI be unavoidable in publishing in the future? The answer to that question, Xu says, depends on what we do now. Both Sookdeo and Xu believe that publishers must take strong stances against AI. Xu wants all publishers to enact anti-AI policies and to be specific about what that means. She adds that the onus is on publishers, not artists, to take action. For instance, if it could be proven that images from a picture book were fed into an AI generator, the publisher should be willing to pursue legal action on behalf of the artist.
“We’re at a crossroads,” Xu says. “The industry can take this moment to go to bat for their artists, their workers, to make these beautiful books possible. We can go to bat for expanding artistic horizons; we can go to bat for expanding our visual literacy. We can start to create a culture in publishing that truly appreciates artists and their labor and values.”
Mahnaz Dar is a young readers’ editor.