What makes AI hardware sell?
Do one thing very well—and stop overpromising
Welcome to Reading the Waves, where we highlight tech trends beyond Silicon Valley by chatting with people on the ground. To learn why we started this project, read About.

The recently unveiled Sam Altman–Jony Ive collaboration has reignited excitement around AI hardware, following a wave of underwhelming launches in recent years. But the big question remains: what makes AI hardware sell?
We talked to a handful of hardware founders in the trenches, and two companies stood out—AI glasses maker Even Realities, whose users log significantly more hours than Meta Ray-Ban wearers, and smart voice recorder Plaud, which is quietly approaching one million units sold.
There’s no hidden formula behind their early traction, but both follow a simple playbook—nail one core feature that keeps users coming back.
Plenty of startups don’t do that. The first crop of AI hardware, aimed at bringing large language models into people’s daily lives, largely overpromised and underdelivered: Humane’s much-hyped Ai Pin, backed by Sam Altman and built by ex-Apple employees, shut down after selling just 10,000 units. Rabbit R1, another hyped-up gadget, disappointed users with its limited and clunky functionality.
Both devices tried to do too much and ended up doing nothing well. What Ai Pin offered—search the web, play music, take photos—smartphones already do, much better and with longer battery life. Similarly, the R1 failed to deliver on its promises to smoothly book rides, order food, and search for information through voice commands—tasks one could already do on a phone with ease.
In contrast, Meta’s Ray-Bans, widely seen as the most successful AI hardware to date by sales volume, focus on delivering a few features well—the glasses act like ordinary sunglasses until summoned to take first-person photos, make calls, or play music. Even Realities and Plaud share that same laser focus.
Does it serve the users, or the AI model?
Teleprompting turned out to be Even Realities’ killer use case. The feature lets users see their script line by line on both lenses as they speak, while appearing to be wearing ordinary glasses. Palmer Luckey sported the startup’s G1 glasses to “cheat” during his TED Talk, where, according to the Oculus founder, teleprompters aren’t allowed.
The popularity of teleprompting far exceeded Even Realities’ expectations. Initially intended for public speaking, it has found use in meetings, lectures, legal proceedings, and any scenario where speakers need to recall specific information on the fly.

“Politicians, executives, professors, journalists, lawyers, consultants—many people could benefit from having notes in front of their eyes while talking,” said Will Wang, CEO and cofounder of Even Realities. “Even if we only do teleprompting well, that alone is enough, because no one had solved it properly before.”
On average, users wear Even Realities glasses for 8–10 hours per day, “significantly more” than Meta Ray-Bans, which are worn mostly outdoors, according to Wang.
The Even Realities glasses, which start at $599 and launched a year ago, notably lack a camera, a key feature of the Meta Ray-Bans. It was a deliberate, difficult decision: the team believes a camera serves the AI more than the user.
“A lot of companies don’t genuinely want to build great hardware to serve users. Instead, they’re mostly thinking about creating better hosts for their AI,” suggested Wang. “Once you add a camera, your AI can gather more information, and your model can access higher-quality data. But what's really happening is that you're sacrificing an enormous amount of user privacy.”
The G1, which comes with prescription lenses, functions like regular glasses with a text-based heads-up display that shows notes, turn-by-turn navigation, and phone notifications, all of which can be controlled by voice.
Based between Shenzhen, China, and Lugano, Switzerland, Even Realities has made inroads into a crowded space. Other high-profile users include a16z general partner Anjney Midha, who sported the glasses while meeting French President Emmanuel Macron, as well as Nahayan Mabarak Al Nahyan, an Emirati royal.
Less is more

Much like Meta’s Ray-Bans, Plaud keeps its core functionality deliberately narrow and intuitive. In essence, Plaud is a recorder on AI steroids that promises to help people never take notes again. Available either as a ‘pad’ that attaches to one’s phone or a ‘pin’ that clips to the collar, Plaud can listen to meetings, lectures, and half-formed thoughts spoken out loud, and turn them into organized, shareable notes and to-dos.
Founded in December 2021, Plaud has quietly shipped over 700,000 units of its AI notetakers and attracted hundreds of thousands of daily users. “One million units will soon be within reach,” its co-founder and CEO Nathan Xu told us. That will put Plaud not far behind Meta Ray-Ban’s sales of two million units.
Xu, a fourth-time entrepreneur and a former venture capitalist, started out by identifying user needs—rather than shoehorning a solution around AI. Back in 2021, he noticed that Live Transcribe, an app for Google’s Pixel phones, was racking up over one billion downloads. Meanwhile, traditional recorders from the likes of Sony and Olympus hadn’t changed in a decade. The opportunity to build a next-gen recorder was clear. Then, when LLMs came around, he immediately saw that the tech was perfect for summarizing unstructured conversations.
Plaud has over time expanded its features from transcribing and summarizing to extracting meaningful insights from speech. Users can now choose between GPT and Claude to sort their notes. Thanks largely to advances in these AI models, usage has soared—tripling Plaud’s annual recurring revenue between December and May, according to Xu. The device itself costs $159, with various subscription plans based on minutes used.
Today, most of Plaud’s users are in the US, Japan and Europe. They’re professionals who rely heavily on meetings and spoken communication: real estate agents, car salespeople, business consultants, and even doctors.
Plaud’s product philosophy recalls that of the early iPhone, which was just an iPod that could make calls and browse the internet, only evolving into an everything device as the tech stack matured. Do less at first, but do it for real user demand and execute it exceptionally well.
What other AI hardware do you think is working? Leave us a comment below!
Other things that caught our eye
More people are turning to AI to tell stories, using Google’s Veo, OpenAI’s Sora and other video models to create clips, then stitching them into short films. Even that last step—building a coherent narrative—can now be handled by AI, with some platforms generating entire storyboards. This sounds powerful, but it raises some questions. Without first doing the hard work to build craft, will people still acquire the taste and discernment necessary to instruct AI? Will they get too comfortable with mediocre computer output?
More importantly, would you try an idea-to-story AI tool that can turn your thoughts into a short film?
Thanks for reading! We welcome your thoughts in the comments or via email (write@firstrobin.com). 🌊



