AI’s Big Gift to Society Is … Pithy Summaries?

One phrase encapsulates the methodology of nonfiction master Robert Caro: Turn Every Page. The phrase is so associated with Caro that it’s the name of the recent documentary about him and of an exhibit of his archives at the New York Historical Society. To Caro it is imperative to put eyes on every line of every document relating to his subject, no matter how mind-numbing or inconvenient. He has learned that something that seems trivial can unlock a whole new understanding of an event, provide a path to an unknown source, or unravel a mystery of who was responsible for a crisis or an accomplishment. Over his career he has pored over literally millions of pages of documents: reports, transcripts, articles, legal briefs, letters (45 million in the LBJ Presidential Library alone!). Some seemed deadly dull, repetitive, or irrelevant. No matter—he’d plow through, paying full attention. Caro’s relentless page-turning has made his work iconic.

In the age of AI, however, there’s a new motto: There’s no need to turn pages at all! Not even the transcripts of your interviews. Oh, and you don’t have to pay attention at meetings, or even attend them. Nor do you need to read your mail or your colleagues’ memos. Just feed the raw material into a large language model and in an instant you’ll have a summary to scan. With OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude as our wingmen, summary reading is what now qualifies as preparedness.

LLMs love to summarize, or at least that's the task their creators have set them. Google now “auto-summarizes” your documents so you can “quickly parse the information that matters and prioritize where to focus.” AI will even summarize unread conversations in Google Chat! With Microsoft Copilot, if you so much as hover your cursor over an Excel spreadsheet, PDF, Word doc, or PowerPoint presentation, you’ll get it boiled down. That’s right—even the condensed bullet points of a slide deck can be cut down to the … more essential stuff? Meta also now summarizes the comments on popular posts. Zoom summarizes meetings and churns out a cheat sheet in real time. Transcription services like Otter now put summaries front and center, and the transcription itself in another tab.

Why the orgy of summarizing? At a time when we’re only beginning to figure out how to get value from LLMs, summaries are one of the most straightforward and immediately useful features available. Of course, they can contain errors or miss important points. Noted. The more serious risk is that relying too much on summaries will make us dumber.

Summaries, after all, are sketchy maps and not the territory itself. I’m reminded of the Woody Allen joke where he zips through War and Peace in 20 minutes and concludes, “It involves Russia.” I’m not saying that AI summaries are that vague. In fact, the reason they’re dangerous is that they’re good enough. They allow you to fake it, to proceed with some understanding of the subject. Just not a deep one.

As an example, let’s take AI-generated summaries of voice recordings, like what Otter does. As a journalist, I know that you lose something when you don’t do your own transcriptions. It’s incredibly time-consuming. But in the process you really know what your subject is saying, and not saying. You almost always find something you missed. A very close reading of a transcript might allow you to recover some of that. Having everything summarized, though, tempts you to look at only the passages of immediate interest—at the expense of unearthing treasures buried in the text.

Successful leaders have known all along the danger of such shortcuts. That’s why Jeff Bezos, when he was CEO of Amazon, banned PowerPoint from his meetings. He famously demanded that his underlings produce a meticulous memo that came to be known as a “6-pager.” Writing the 6-pager forced managers to think hard about what they were proposing, with every word capable of selling, or dooming, their pitch. The first part of a Bezos meeting is conducted in silence as everyone turns all 6 pages of the document. No summarizing allowed!

To be fair, I can entertain a counterargument to my discomfort with summaries. With no effort whatsoever, an LLM does read every page. So if you want to go beyond the summary, and you give it the proper prompts, an LLM can quickly locate the most obscure facts. Maybe one day these models will be sufficiently skilled to actually identify and surface those gems, customized to what you’re looking for. If that happens, though, we’d be even more reliant on them, and our own abilities might atrophy.

Long-term, summary mania might lead to an erosion of writing itself. If you know that no one will be reading the actual text of your emails, your documents, or your reports, why bother to take the time to dig up details that make compelling reading, or craft the prose to show your wit? You may as well outsource your writing to AI, which doesn’t mind at all if you ask it to churn out 100-page reports. No one will complain, because they’ll be using their own AI to condense the report to a bunch of bullet points. If all that happens, the collective work product of a civilization will have the quality of a third-generation Xerox.

As for Robert Caro, he’s years past his deadline on the fifth volume of his epic LBJ saga. If LLMs had been around when he began telling the president’s story almost 50 years ago—and he had actually used them and not turned so many pages—the whole cycle probably would have been long completed. But not nearly as great.

Time Travel

Earlier this year I had a conversation with Sam Liang, the CEO of Otter. Once specializing in straight transcription, the company now offers a range of meeting-based AI tools, including of course summarization—but also edgier features, like AI avatars that can attend your meetings and run the discussion. In my Plain View essay on the subject, I wondered whether this could defeat the purpose of meetings.

I ask Liang whether the prominence of AI in meetings might make humans less likely to attend. Knowing that there will be a summary available seems a disincentive to actually showing up. Liang himself says that he attends only a fraction of the meetings he’s invited to. “As CEO of a startup, I get tons of invitations to go to meetings—oftentimes I’m double booked or triple booked,” he says. “With Otter, I can look at my invitations and rank them. I classify them based on the content, the urgency, importance, and whether my presence adds any value or not.” Since he’s the CEO, he may find it easier to opt out. On the other hand, the boss’s presence in a meeting makes it more valuable to those who want clues to his thinking or an instant yes on a proposal.

Of course, the premise behind meetings is that every person’s presence adds potential value. It defeats the purpose if at the moment everyone turns to the single person who can weigh in on a problem, they find only an empty seat. But Liang has an AI solution for that too. “We’re building a system called Otter Avatar that will train a personal model for each employee for meetings where the employee doesn’t want to go or is sick or on vacation. We will train the avatar using your historical data, or your past meetings, or your Slack messages. If you have a question to ask that employee, the avatar can answer the question on their behalf.”

I point out that this might lead to an AI arms race. “I’m going to send my avatar to every meeting, and so will everyone else,” I explain. Meetings will be just a bunch of AI avatars talking to each other—afterward, people will check out the summary to see what the AIs said to each other.

“That can happen,” says Liang. “Of course, there are always situations where you want a personal relationship directly.”

Ask Me One Thing

Judith asks, “Many STEM-related autobiographies contain childhood memories of hands-on experience with transistors, radios, rocket and chemistry sets. Are kids nowadays missing out on these experiences?”

Thanks for the question, Judith. You’re probably right that the heyday of those kid-centric science projects has passed. One reason is that we’re a lot more safety (and lawsuit) conscious now. Some toy chemistry kits used to contain genuinely toxic stuff, including poisonous sodium cyanide and radioactive uranium ore. Caution dictated less dangerous—and less interesting—kits.

But, as we both likely know, it isn’t safety concerns alone that make kids less likely to venture into hands-on experimentation. With universal access to computers, childhood scientific explorations can easily be done in simulation. Still, I suspect that kids with a consuming hunger to learn about the way the world works don’t have trouble getting their grubby little hands on rocketry tools, telescopes, and even interesting chemicals. Future Richard Feynmans will use the internet, the local junkyard, or maybe clubs or friendly science teachers to get the gadgetry they need. For those not so naturally driven, the presence of world-building tools like Minecraft and games like Civilization—as well as coding itself—provides a great gateway into the world of science. Geeks are gonna geek.

You can submit questions to [email protected]. Write ASK LEVY in the subject line.

End Times Chronicle

Old: The mountains in North Carolina were safe from the storms that plague the eastern half of the state. New: Western North Carolina is the new Hurricane Alley.

Last but Not Least

Here’s the scorecard on how AI is messing with elections around the globe.

Bobbi Althoff mommy-blogged her way to fame—and, as she’d hoped, fortune. She explains all in our Big Interview.

As fewer people sign up for DNA testing, 23andMe’s valuation is approaching spit.

License plate readers are scooping up the info on bumper stickers—taking a toll on privacy.

Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.
