I don't want AI to sit on my face.

[Image: A woman wearing a mixed-reality headset stands in a living room, viewing a large immersive projection of colorful hot air balloons floating over a rocky landscape. Photo credit: Samsung Global Newsroom]

“Nobody wants to strap a computer to their face.”

That’s what I remember from when the Apple Vision Pro was announced. I heard it on podcasts, in conversations with my gadget-obsessed but practical colleagues in IT, and in Mastodon posts. There were other gripes, of course. It started at $3,499. Some said this device would isolate us from each other. That it was alienating (remember the dad in Apple’s video who wore a Vision Pro headset at his child’s birthday party?).

I thought all of the criticisms were valid, except one: Nobody wants to strap a computer to their face.

Because (you guessed it) I want to strap a computer to my face.

While many virtual reality headsets are positioned primarily as entertainment devices, Apple pitched Vision Pro as a general-purpose “spatial computer.” And from that perspective—as an interface for a laptop via Mac Virtual Display or as a standalone productivity device—I found the idea immediately captivating. Floating windows running visionOS and iPadOS apps, full immersion in a 360-degree environment (Apple has shipped several nature- and space-themed environs), and integration with noise-cancelling AirPods all added up to what sounded to me like a computing dream.

Depending on whether I’m in my home office or my office office (and, when I’m in the office office, whether I’m at my standing desk or my desk desk), I work with anywhere from three to five screens visible in front of me. On the one hand, yes, this is crazy; on the other hand, you can see how working in virtual space might appeal to someone who works like I do.

(Also, did I mention I’m a massive introvert? That thing earlier about Vision Pro being isolating is a feature, not a bug.)

I had attended Apple’s in-store demo, and a friend at work recently let me borrow his practically unused Vision Pro for an afternoon (OK, yes, not everyone who thinks they want to strap a computer to their face actually wants to, and many of these devices end up gathering dust).

While I very much enjoyed working in virtual space, the complaints are also real: the headset is heavy, it becomes uncomfortable to wear for long periods of time, and the available apps and content are still somewhat lacking. As a proof of concept for the future, though, it’s breathtaking, particularly when imagined in the context of what I believe is Apple’s long-term vision (pun intended): something more akin to Meta’s 2024 Orion tech demo, extended- or augmented-reality glasses rather than a virtual reality headset.

There are already entrants in this category, including Even Realities’ G1 and Meta’s own Ray-Ban Display glasses, both of which incorporate a small display in the user’s field of view. These are not “floating holograms in space”–type displays; there are devices in a glasses form factor that do that sort of projection, like some of Xreal’s specs, but I have found the Xreal experience to be, I believe the technical term is, jank. The G1 and Meta’s glasses offer something more like glanceable notifications or contextual information (on a recent episode of Better Offline, for instance, Victoria Song described watching a live transcription of her conversation).

Sadly, while this hardware direction intrigues me, these products are marketed—like most smart glasses—as being AI-powered.

This week’s 404 Media podcast noted that users are putting Meta’s camera-equipped smart glasses to all sorts of gross uses even without AI. When AI enters the picture, though, it can be a force multiplier for harm, as when a student project connected a previous version of Meta’s smart glasses to facial recognition software to allow the wearer to insta-dox anyone they were looking at. Even when the AI is Meta’s built-in variety, the experience isn’t great; Gizmodo’s James Pero recently observed that the AI is the least compelling feature of AI smart glasses: computer vision features he struggled to find uses for, and a voice-assistant experience just like pretty much every pre-LLM voice assistant. (Thinking back to my own Vision Pro experiences, I realized one thing that I never thought to try: Siri.)

When I consider this category, I think about technology I already put on my face: my glasses. You know, the non-smart ones. Glasses glasses. The glasses I wear to see a conference room screen clearly. The prescription sunglasses I wear for driving. And the one pair of smart glasses I have owned—a pair of Bose Frames, in which I swapped out the tinted sunglasses lenses for clear prescription ones. The Frames were only “smart” in that they worked as a Bluetooth headset. But they helped me see and connected me to my iPhone, which was smart enough.

That is what I look for in a pair of smart glasses, an XR headset, or any other computer-on-face experience: a more intimate and portable extension of my existing computing life. A new interface. I already have ways to talk to AI if I want to, and I don’t want to. Facial recognition to remind me of the name of everyone I see would be great for me, but creepy for everyone else. And while there are features that cameras and microphones and sensors can power, some of which may even require AI, I don’t want to be Mark Zuckerberg’s or Sam Altman’s robotic eyes and ears.

And I suspect there is enough value in this category without most of the AI bells and whistles. As of this writing, the Samsung Galaxy XR, a Vision Pro competitor that’s nearly 50% cheaper, is just a few weeks old. One reviewer on Samsung’s product page writes that Samsung “lied” about the AI features that peppered its launch video, and lists a litany of things Galaxy XR’s AI can’t do.

But he still gave the $1,800 headset a five-star review. And recommended it.

There is something here. I do want a computer on my face. I just don’t want AI on my face.