The year began with such promise. Back in January, I remember sitting in a presentation hall at a Las Vegas hotel during CES 2024 as Rabbit CEO Jesse Lyu unveiled the R1. This colorful and fun pocket-sized AI companion promised to do everything, from ordering an Uber to answering all your vexing questions. My story on the R1 had just gone live and within hours—I’m not trying to pat myself on the back here—there were a lot of eyeballs on it. The device was unlike anything that had come before, and showed us a novel vision of how these newfangled AI agents would fit into our lives. Rabbit’s R1 became the breakout story of CES, and the company claims it sold 100,000 units by March.
Soon after came Apple’s Vision Pro—a $3,499 mixed-reality headset that lets you escape the real world to watch movies, play games, and work in a cyber-office with multiple screens hovering around you. Apple had spent the year prior hyping it, and it was the first new product category for the company since the Apple Watch in 2015.
A few months after that, Humane’s Ai Pin arrived. Like the Rabbit, this was yet another AI assistant, though it was designed to be worn on your lapel. I was visiting my mom shortly before the Ai Pin launched, and out of the blue, she asked me if I had heard about this “pin” product, because she had heard of it on an Indian news channel. It was not only the talk of the town, but the talk of the world.
These three spectacularly hyped products have one thing in common: They all flopped.
Humane’s Ai Pin barely worked and was widely panned by critics upon release. A short while later, more Ai Pins were being returned than purchased. (Do I need to mention the Charging Case posed a fire safety risk?) The company now wants to license its CosmOS operating system to third parties to be injected into cars and smart speakers, though this strategy has no product integrations to show for it.
Rabbit similarly received a wave of negative reviews at launch, as the R1 barely did anything, and many of the third-party integrations it featured were half-baked. The underlying software was revealed to be just a simple Android app. Oh, and there were critical security issues too.
And then there’s the Vision Pro. Renowned Apple analyst Ming-Chi Kuo claims the company slashed shipments mere months after the headset launched, from between roughly 700,000 and 800,000 units to between 400,000 and 450,000—paltry for a new Apple product. And that was before the headset became available outside the US. However, unlike the other two products, the Vision Pro was at least a technological marvel, with impressive hardware and software. The price was just way too high, especially since most people in the target audience for a VR headset were already happy with a $300 Meta Quest and its robust selection of games and experiences. Also, Apple mistakenly thought everyone would suddenly be OK with wearing headsets around one another, a rare misfire.
Despite negative reviews and poor sales figures, these products didn’t just disappear. You can still buy all of them. Apple, Rabbit, and Humane have been meticulously rolling out updates with bug fixes, new features, and—in the case of Humane and Rabbit—things that were supposed to be available at launch.
I briefly fired up all three of these devices to try them again.
Apple Vision Pro
I am writing this story on the Apple Vision Pro. I like using it for work more than for entertainment, though I did watch The Weeknd’s Open Hearts, an interactive music experience where I was face-to-face with the musician as he was being hauled away in an ambulance through various Inception-esque environments. The visual quality of the film is impressive and it feels immersive to watch—you can see beads of sweat on his face—but there are also moments when the camera changes to sweeping views that look too grainy and pixelated. Apple has released several immersive experiences like this throughout 2024, including a film called Submerged.
I’m running the latest visionOS 2.2 operating system, and the small changes that have been introduced in recent months are welcome. You can now rearrange the home screen and move your apps around—riveting stuff. I do really like the new way to access the Control Center: Look at your hand and a circle pops up, then flip your hand the other way and you’ll see a little system tray widget that you can tap on to access the Control Center. It feels far more futuristic than just looking up at an icon.
Technically, I’m using the Vision Pro in Mac Virtual Display mode. I have it wirelessly connected to Apple’s new Mac Mini, which brings your computer’s screen into a spatial environment, allowing you to place other visionOS apps around it. This mode now gives you options for your Mac’s screen size: Standard, Wide, and Ultrawide. I’m using the latter and have two browser windows open side by side, plus Slack off to the left, and I’m using Apple’s Magic Keyboard and Magic Trackpad. It’s great.
It’s hard to leave the Vision Pro experience without feeling impressed each time, though I wish this darn battery cable were a little longer. (It slid off my desk while I was wearing the headset, but I thankfully caught the pack before it pulled the wire.) But with Google and Samsung’s recent announcement of Android XR, we can expect similar mixed-reality headsets on the way in 2025, and I have a hard time believing most of them will cost as much as the Vision Pro. That’s ultimately the main problem with Apple’s big 2024 release: It’s too far out of reach for almost everyone.
Humane Ai Pin
Humane has been delivering so many updates to its Ai Pin since its launch that the changelog is intimidating. But scrub through it and you’ll find most of the updates are just … fixing things.
When I first dusted off my unit and recharged it, the Ai Pin refused to update to the latest version. I asked it what to do, looked at the company’s instructions, and nada. After a few days on the charger, it finally updated itself. Yay. In between all of this, the Ai Pin told me it was overheating and needed to cool down. That’s the Pin I remember.
Post-update, things do seem to be marginally better with the overall experience. It no longer feels super hot on my body. The company claims the Pin’s thermal regulation has been improved by a factor of three, and that the laser that projects an interactive display on the wearer’s outstretched hand now runs 50 percent longer. The battery also lasts 25 percent longer, the Ai Mic reliability is 10 percent better, and speech recognition is 20 percent better. I’m not sure how all of these updates are so precisely quantified, but the claims of improved performance seem to track with my brief time retesting the Pin.
Two new features stood out to me: Voice Authentication and Personal Voice. The former lets you bypass having to use the laser display to input a passcode every time you want to use the Pin. You can just use your voice to authenticate—however, I have yet to find a way to enable this.
When I asked it for help, the Ai Pin told me to go to Humane’s web portal and dive into settings, but I have gone through every setting option and do not see anything about Voice Authentication. I will just keep using the laser I guess. Speaking of, I still hate this “display.” The laser system continues to be frustrating, and it’s just a terrible user experience, since minute movements of my hand can change the interface.
Thankfully, I was able to try Personal Voice. This feature has you teach the Pin your voice, and then when you use the translation feature, it will spit out the translation in … your own voice. It’s a synthetic, AI-generated version of your voice. Before you enable this, you have to agree to the following:
By clicking “Accept and Continue,” you consent to Humane’s collection, use, and storage of voice prints and similar biometric data about your voice. This data will be used to create a synthetic version of your voice for real-time translation on Ai Pin and for authentication purposes. Where Humane has access to voice data, Humane may retain the voice data for up to 3 years following your last interaction with us, except as otherwise provided by law. Humane may disclose this data to third party service providers to assist in providing these services. For additional information, please refer to our Privacy Policy.
Everything in my body wanted to say no, but in the pursuit of journalism, I accepted. I then repeated a few sentences aloud so the software could learn my voice. The results of all this are creepy. It’s bizarre hearing your own voice saying things you have never said. Even crazier is hearing myself speak Mandarin or Hindi despite not knowing either language. I tried translating some text to Malayalam, my second language, and the Pin did a decent job of it. Maybe my mom will finally be happy that “I” can speak the language properly.
Ultimately, I didn’t really want to use the Ai Pin. It sat on my shirt for a few days, and I would ask it a question here and there and be satisfied with its answer, but I can also do all of that fairly easily with Gemini on my Android phone, or Siri and its new ChatGPT integration on my iPhone. The Pin felt like just another physical device I had to take care of, babying the battery and dealing with its annoyances. I have enough on my plate.
Rabbit R1
Finally, I went back to the R1. Booting it back up, this cute retro square gadget still wins on aesthetics. As with the Humane Ai Pin, I ran into some trouble getting the device updated. Magically, after a few days of charging, it finally connected to Wi-Fi and updated.
Hilariously, almost all the third-party integrations the company launched—DoorDash, Uber, Midjourney, and more—are being retired, so those functions just don’t work. (Not that they really worked before.) The scroll wheel is less janky, and the interface improvements are welcome. For example, you can now press and hold the push-to-talk button and scroll up or down to change the volume.
There has been a litany of updates to the R1 over the year, but three stand out: Beta Rabbit, LAM Playground, and Teach Mode.
Beta Rabbit uses enhanced large language models for a more conversational experience when you ask the R1 anything. I didn’t find it all that conversational like GPT-4o or Gemini Live. I asked it how we know anything about the early years of the universe, and it started reading an excerpt. At one point, it mentioned “cosmic microwave background” at which point I interrupted it and asked how that is detected. The R1 then began a tirade of, “Searching for cosmic microwave background,” “Searching for this,” and “Searching for that.” After five of these, it finally started reading out an answer about CMBs.
LAM Playground is an interesting feature accessible on Rabbit’s web portal. These “large action models” run on a virtual browser you can interact with, and it’s largely meant to showcase how Rabbit will be able to execute tasks on your behalf. (You know, because it couldn’t do that at launch!) Enter a prompt and Rabbit’s bots will execute it. For example, you can ask it to find an item and add it to your Amazon cart, though you will need to log into Amazon via this virtual browser, which seems like a massive privacy risk.
I asked it to search Google for the “best office chair” and then to take me to the retailer’s webpage. It took way too long to do this, and it typed “best office chair reviews 2023.” (Is the R1 in a different timeline?) But it still went to the first result (which is my very own office chair guide, thank you very much), and it took me to the product page for Branch’s Ergonomic Chair Pro, my top recommendation.
Where LAM Playground lets you see how all of this works, Teach Mode lets you put it into action. It’s still in beta (I’d argue the R1 is still in beta too). After a few attempts, I kept getting an error that rendered the feature unusable. Finally, it worked on a different day. I created a lesson, and then executed the steps within a virtual browser—the R1 logs every click. Then, when I told my R1 device to perform the action, it mimicked my actions.
For example, if you reliably purchase a box of heavy-duty trash bags once every six months, you can train the R1 to take the exact steps it needs to take to buy the bags from Amazon. Then, when you tell the R1 to “buy trash bags,” it will follow that process. I got it to successfully add the item to my Amazon cart (I didn’t want to buy trash bags at the moment, so I didn’t teach it how to complete a purchase).
None of this feels like “artificial intelligence.” Isn’t the whole point that AI is smart enough to just be able to do it for you? Teach Mode is a very neat trick, and I can see it being convenient, but imagine having to train Siri, Alexa, or Google Assistant manually first to perform basic actions on your behalf. It’s a little ridiculous.
I’m sure I’ll use the Vision Pro again, but it will probably see long stretches of time in between sessions when I slip it back on to watch a video or do a video chat. The R1 and the Humane Ai Pin will be going back into their respective boxes. I’ll be ready to fire them up again if and when a major update lands with new features to test. Or maybe it just won’t be worth the hassle.