When AI researcher Melanie Mitchell published Artificial Intelligence: A Guide for Thinking Humans in 2019, she set out to clarify AI’s impact. A few years later, ChatGPT set off a new AI boom—with a side effect that caught her off guard. An AI-generated imitation of her book appeared on Amazon, in an apparent scheme to profit off her work. It looks like another example of the ecommerce giant’s ongoing problem with a glut of low-quality AI-generated ebooks.
Mitchell learned that searching Amazon for her book surfaced not only her own tome but also another ebook with the same title, published last September. It was only 45 pages long, and it parroted Mitchell’s ideas in halting, awkward language. The listed author, “Shumaila Majid,” had no bio, headshot, or internet presence, but clicking on that name brought up dozens of similar books summarizing recently published titles.
Mitchell guessed the knock-off ebook was AI-generated, and her hunch appears to be correct. WIRED asked deepfake-detection startup Reality Defender to analyze the ersatz version of Artificial Intelligence: A Guide for Thinking Humans, and its software declared the book 99 percent likely AI-generated. “It made me mad,” says Mitchell, a professor at the Santa Fe Institute. “It’s just horrifying how people are getting suckered into buying these books.”
Amazon took down the imitation of Mitchell’s book after WIRED contacted the company. “While we allow AI-generated content, we don’t allow AI-generated content that violates our Kindle Direct Publishing content guidelines, including content that creates a disappointing customer experience,” Amazon spokesperson Ashley Vanicek says.
But Mitchell is far from the only AI researcher apparently targeted using the same technology they work on. Searching Amazon for pioneering computer scientist Fei-Fei Li’s new memoir, The Worlds I See: Curiosity, Exploration, and Discovery in the Age of AI, brings up more than a dozen different summaries of the book.
Unlike the knock-off of Mitchell’s book, the summaries of Li’s announce themselves as such. One, forthrightly titled Summary and Analysis of The Worlds I See, has a product description that begins: “DISCLAIMER!! THIS IS NOT A BOOK BY FEI-FEI LI, NOR IS IT AFFILIATED WITH THEM.IT IS AN INDEPENDENT PUBLICATION THAT SUMMARIZES FEI-FEI LI BOOK IN DETAILS.IT IS A SUMMARY.” Yet these books, too, appear to be AI-generated and to add little value for readers. Reality Defender analyzed a sample of the Summary and Analysis book and found it was also likely AI-generated. “A complete and total rewriting of the text. Like, someone queried an LLM to rewrite the text, not summarize it,” Reality Defender head of marketing Scott Steinhardt says. “It’s like a KidzBop version of the real thing.” Reached for comment over email, Li distilled her reaction into a single emoji: 🤯.
Summary Execution
Sleazy book summaries have been a long-running problem on Amazon. In 2019, The Wall Street Journal found that many used deliberately confusing cover art and text, irking writers including entrepreneur Tim Ferriss. “We, along with some of the publishers, have been trying to get these taken down for some time now,” says Authors Guild CEO Mary Rasenberger. The rise of generative AI has supercharged the spammy summary industry. “It is the first market we expected to see inundated by AI,” Rasenberger says. She says these schemes fit the strengths of large language models, which are passable at producing summaries of work they’re fed, and can do it fast. The fruits of this rapid-fire generation are now common in searches for popular nonfiction titles on Amazon.
AI-generated summaries sold as ebooks have been “dramatically increasing in number,” says publishing industry expert Jane Friedman—who was herself the target of a different AI-generated book scheme. That’s despite Amazon in September limiting authors to uploading a maximum of three books to its store each day. “It’s common right now for a nonfiction author to celebrate the launch of their book, then within a few days discover one of these summaries for sale.”
Writer Sarah Stankorb is one such author. This summer, she published Disobedient Women: How a Small Group of Faithful Women Exposed Abuse, Brought Down Powerful Pastors, and Ignited an Evangelical Reckoning. Summaries appeared on Amazon within days. One, which she suspects was based on an advance copy commonly distributed to reviewers, appeared the day before her book came out.
Stankorb was aghast. Disobedient Women was the product of years of careful reporting. “It’s disturbing to me, and on multiple moral levels seems wrong, to pull the heart and sensitivity out of the stories,” she says. “And the language—it seemed like they just ran it through some sort of thesaurus program, and it came out really bizarre.”
Comparing the texts side by side, the imitation is blatant. Stankorb’s opening line: “In my early days reporting, I might do an interview with a mompreneur, then spend the afternoon poring over Pew Research Center stats on Americans disaffiliating from religion.” The summary’s opening line: “In the early years of their reporting, they might conduct a mompreneur interview, followed by a day spent delving into Pew Research Center statistics about Americans who had abandoned their religious affiliations.” Reality Defender rated the summary of Stankorb’s book as 99 percent likely AI-generated.
Legally Blurred
When Mitchell found out about her own AI imitator, she vented in a post on X, asking, “Is this legal?” Now she doubts she could successfully take anyone to court over it. “You can’t copyright the title of a book, I’ve been told,” Mitchell says. Unsure whether she has any recourse, she hasn’t contacted Amazon.
Some copyright scholars say that a summary is legal as long as it refrains from explicit word-for-word plagiarism. Kristelia Garcia, an intellectual property law professor at Georgetown University, draws a comparison with the original blockbusters of the summary world: CliffsNotes, the long-running study guide series that provides student-friendly explanations of literature.
“CliffsNotes aren’t legal because they’re fair use. They’re legal because they don’t actually copy the books. They just paraphrase what the book is about,” Garcia says via email.
Other IP experts aren’t so sure. There’s a big difference, after all, between CliffsNotes—which provide substantive analysis of a book in addition to summarizing it and are written by humans—and this newer wave of summaries clogging up Amazon. “Simply summarizing a book is harder to defend,” says James Grimmelmann, a professor of internet law at Cornell University. “There is still substantial similarity in the selection and arrangement of topics and probably some similarity in language as well.”
Rasenberger of the Authors Guild sees a 2017 case, in which Penguin Random House sued authors who created children’s editions of its titles, as a precedent that could help writers fight AI-generated summaries. The court found that the children’s editions infringed copyright because they were primarily devoted to retelling copyrighted stories.
Until an author actually files a lawsuit against the creator of one of these new-generation summaries, their legality remains an open question. And although Amazon did take down the imitation of Mitchell’s book, it has not announced plans to proactively monitor this wave of summaries. “I hate that this is the new reality, but it would likely take a significant and recurring PR nightmare for a change in policy to occur at Amazon,” Friedman says.
Right now, there’s nothing much stopping the next AI ebook hustler from creating a new summary and uploading it tomorrow. “It’s ridiculous that Amazon doesn’t seem to be doing much to stop it,” Mitchell says. Then again, the publishing industry doesn’t seem to know quite how to handle this, either. Mitchell remembers the resigned way her agent responded when she wrote to her about the AI imitation: “This is just the new world we’re living in.”