This Wearable AI Notetaker Will Transcribe Your Meetings—and Someday, Your Entire Life

If you want to coast through meetings, keep track of everyone you meet, or just remember the name of that obscure dog food your veterinarian told you to feed your pooch, there’s a necklace for that. Or a wristband. Or a pin.

Plaud is an AI company that makes the creatively named Plaud Note—a slim ChatGPT-enabled audio recorder that can be stuck on the back of your phone or slipped into a shirt pocket to record, transcribe, and summarize your conversations.

The company’s newest offering is called the Plaud NotePin (the naming scheme doesn’t get any better here), and it takes basically all the same features of the Note and packs them into a wearable device about the size of a lipstick tube. The NotePin can be worn as a necklace, a wristwatch, or a pin, or clipped onto something like a lapel.


It costs $169 and lets you record up to 300 minutes of audio per month. To record more than that, you can pay a $79 annual fee for the pro plan that gets you 1,200 minutes per month and additional features like labels that identify different speakers in a transcription.

If a wearable device with these capabilities sounds familiar, it’s because we’ve been here before. AI wearables abound, even if it’s not quite clear whether they provide enough utility to make people actually want to wear them. Consumers responded quite poorly to the first wave of big AI gadgets, including the Humane Ai Pin and the Rabbit R1—mostly because they either didn’t really work, could have just been an app, or simply looked kind of dumb. Friend, the AI necklace that just wants to be your pal, has not been released yet, but its announcement was met with a wave of indignant condemnation for how its always-listening design breaks the social norms that discourage eavesdropping on conversations. So far, the only AI-adjacent hardware device to garner even moderate success is Meta’s Ray-Ban smart glasses (even if their AI capabilities could use some work). Everything else has either looked too dorky, failed to function as advertised, or simply been bested by the features on a smartphone. Hardware is hard, as they say.

That hasn’t stopped the work-oriented AI gadget hopefuls. Wearable devices are being thrust into the world en masse by companies like Plaud, Rewind.AI, and Limitless. (Hardware development takes a while, after all, so chances are these devices were in the works before Humane tanked, and now the companies have to do something with their gadgets.) Google’s Pixel phones and Apple’s iPhones are being loaded up with similar productivity features, all in an effort to make people’s work lives more manageable and more productive.

Plaud is eager to toss its new little hand grenade into that fray. The company is pitching its new product squarely at productivity junkies—business bros trying to make connections at conferences, salespeople tracking leads, or anyone eager to get a grip on their innumerable daily meetings. There’s a certain simplicity to the NotePin. Instead of the many promises some AI devices try to keep, its purpose is primarily note-taking. Switch the recorder on, let it do its thing, then check the bullet points for the big takeaways later.

“Most companies are innovating with AI with already digitized data on the internet,” Plaud CEO Nathan Hsu says in a press briefing ahead of the NotePin’s release. “But there is so much data in our real-life scenarios. What we say, what we hear, and what we see.”

Are You Getting This Down?

Transcribing your life is a noble endeavor. A decent amount of the long, tedious task of transcribing an interview or meeting notes by hand can be handed off to a good speech recognition service. But—take it from a journalist who routinely uses automated transcription services to type out interviews—those services sure aren’t perfect, and they can often generate entirely wrong sentences, completely misspell names, or mangle basic facts.


Avijit Ghosh, a policy researcher at the AI company Hugging Face, points out that AI speech recognition has historically had trouble recognizing people who speak with certain accents, which can lead to misunderstandings. (Hsu says this isn’t an issue that Plaud users have raised.) Add in the extra idiosyncrasies that generative AI systems can hallucinate into existence, and you’re often left with an almost-but-not-quite-there picture of what happened. It may be better than the transcriptions you had access to before, but it’s important to recognize the tools’ limitations. Relying on that incomplete information to guide your work life could result in some uncomfortable misconceptions, or just lead to embarrassment.

“It might completely make up things that have never been said,” Ghosh says.

There are also security concerns that come with both relying on AI for business meetings and having so much information stored on a wearable device. Plaud says its cloud transcription and summarization service is encrypted by default, but the device itself is not. If a user loses the device and someone else snatches it up, any recordings stored on it could be accessed by plugging it into a computer. Hsu says this is unlikely to be a problem, because the NotePin uses a proprietary charging connector, so bad actors wouldn’t be able to access the device unless they had a NotePin of their own. (To which I would say, have you seen the lengths that hackers are willing to go to in order to steal secrets?) Also, the NotePin has a built-in “find my” feature that helps keep it from getting lost. Still, it’s not a perfectly closed system.

“In that case, if you’re not taking precautions and you lose the device, that could be accessible,” Hsu says. “But that’s very extreme.”

Ultimately, Hsu has greater ambitions for his company than work-focused devices, though he’s careful to point out that this is what the company is concentrating on now, and he’s cognizant of the uneasiness that bigger vision might cause.

“We have this grand vision, where what happens if users could just record all of the conversations in their daily lives, maybe even after decades,” Hsu says. “If it always listens to you, it learns you, and over time it gets to know your personality, your preferences, your interactions. Someday, you’re going to be able to utilize AI to reproduce yourself—create this real digital twin. That’s kind of this grand mission, where we think if we’re able to help users connect to so many memories, it’s going to be grand.”

It’s clear that AI has the potential to upend much of how humans operate. But some advocates and experts express concern about what happens when these capabilities are entrusted to AI devices—especially ones that are designed to be worn all the time.

In an interview for a previous story about AI gadgets, Jodi Halpern, a professor of bioethics and medical humanities at UC Berkeley, likened the trend of offloading human capabilities onto AI devices to the way people no longer need to keep track of directions when they can rely on a service like Google Maps.

“There may be dimensions of human development that just don’t occur anymore,” Halpern says. “Like we don’t develop senses of direction, we may not develop social emotional depth of dealing with people different than ourselves and being empathically curious. If we have a constant feeling that something’s listening and sort of surveilling us, it’s a way to not learn how to be, in a certain way, alone with ourselves.”

All that philosophical grandiosity aside, it still isn’t clear whether people are actually willing to invest in these kinds of devices in the first place. Plaud has a compelling use case, but it is entering a crowded field where it has to compete with other devices and, well, thousands of apps on smartphones—the devices people already carry around all day.

And users may find that the boring old tools they’re already using are more mature and more effective than any of these splashy inventions.

“Everything that ChatGPT does, it does worse than something else that was designed to do that thing,” Ghosh says. “I think people being gaslit into thinking these systems are more accurate than they are is the main problem.”
