Once upon a time, I worked on software products for local commerce.1 Our platform cataloged merchant inventories, let consumers send orders from their phones to point-of-sale systems, and coordinated the work of delivery folks ferrying goods from A to B.
Software platforms have changed the way cities work. But the changes smartphone apps have brought to cities pale in comparison to what wearables and augmented reality (AR) products will enable next.
Wearables bring the internet off of screens and into our ears and eyes. They put a product layer between us and the environment; the next generation of these devices will do this to a greater degree than we’ve ever seen before. As this happens, it will impact the way we use, interact with, and relate to the physical world around us — and in the process, change a thousand things about urban life.
The Next Generation of Wearables
Wearables are not a new concept. Those of us of a certain age remember Google Glass as the first big – though ultimately failed – attempt at an AR wearable. Later products like Apple’s AirPods have put some functionality in our ears, but the dream of a personalized heads-up display (or just anything beyond the smartphone) has remained elusive. That, of course, is rapidly changing.
Meta’s Ray-Ban smart glasses take pictures and record audio. They have open-ear speakers for making calls and listening to music. The newest versions will also identify objects (e.g. a type of plant, car, or animal) in the wearer’s field of vision. Users ask a question out loud and, through the magic of Large Language Models (LLMs), the glasses provide an audio response. And on top of the feature set, they look like normal eyewear.
In contrast, the Apple Vision Pro looks more like snow goggles; in return for the bulk, users get an entire MacBook’s worth of functionality strapped to their face.
Independent device makers are also trying to get into the game.
The Rabbit R1 performs tasks like calling your Uber based on spoken instructions. Users say something like “Call me a car” and the device does the putzing around in the relevant app for you. It’s very much positioned as an LLM-based abstraction layer between you and your apps.
The Humane AI Pin is another entry in this independent category. The device is worn on the chest, and users interact with it via voice commands, gestures, and some limited on-device buttons. According to company propaganda, users can ask it questions to kick off an LLM-intermediated web search or have it take actions like creating documents in Google Workspace.
The Frame from Brilliant Labs is a third AI-powered wearable. According to the company, their smart glasses will let you ask questions about whatever you happen to see or hear and have task-specific LLMs answer said questions.2 Use cases include real-time translation and identifying objects and locations; users ask a question and the glasses display the result in their field of vision. Similar to Meta’s product, but with answers displayed rather than spoken.
This resurgence in wearables is due to a confluence of factors. Advances in hardware design and battery tech have made certain form factors more feasible. More important, though, is the rapid advance of LLMs.
Voice-to-Text-to-LLM makes it possible to design user experiences free from looking at a screen. Many of us know the frustration of trying to get Alexa, Google Home, or Siri to understand what we want. LLMs, and specifically LLM Agents, bridge that gap and give users a non-screen-based UI that’s functional enough to rely on.3
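The plumbing for this kind of screen-free interaction is simple enough to sketch. Below is a minimal, hypothetical version of the Voice-to-Text-to-LLM loop using OpenAI’s speech-to-text and chat APIs as stand-ins – the client setup and model names are my illustrative assumptions, not what any of these devices actually runs.

```python
# Minimal sketch of a voice -> text -> LLM loop, the core of a screen-free
# wearable UI. Model names and the OpenAI client are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def handle_utterance(audio_path: str) -> str:
    # 1. Speech-to-text: turn the wearer's spoken request into text.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # 2. LLM: interpret the request and draft a reply the device can
    #    speak back through its speakers instead of rendering on a screen.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a hands-free assistant. Keep answers short; they will be read aloud."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content


# A wearable would pipe this string into text-to-speech rather than a display.
```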
Whether any of these specific devices see mass adoption is beside the point. It’ll probably take a couple of generations for wearables to take off. What’s important is that companies are committed to moving past smartphones and screen-based UIs.
Ambient Computing Is the Future
I believe in the idea that science fiction precedes science fact. We have to imagine something before we can build it, after all. What we’re on the cusp of building is something that looks like Jarvis from Iron Man. And I don’t mean that just in terms of an AI assistant we can talk to.4
If we think back to the MCU movies, Tony Stark doesn’t talk into a particular device or look at a special screen to interact with Jarvis. Whether he’s in his Iron Man suit or walking around the lab, he calls on Jarvis and Jarvis is just there, communicating through whatever hardware surface happens to be at hand. Tony Stark’s AI butler is omnipresent.

This is called ambient computing. And it’s what LLM-powered wearables will enable as they wallpaper the internet onto the world around us.
As a product person, I can imagine a couple specific ways this plays out.
First, we’ll annotate the physical world. To be fair, we already do this today. Services like Google Maps or Yelp give you information about real-world places you might want to go and the best way to get there.5 Pulling this information out of a phone and into a heads-up display isn’t a huge conceptual leap.
There are additional use cases, though. For example, merchants might set up digital facades, visible to folks using AR wearables. We know brick-and-mortar stores optimize their spaces for Instagram-ability, so anything effective for marketing will get adopted at some point.
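As a concrete (and entirely hypothetical) sketch of what that could mean in software terms, a digital-facade service might boil down to serving geo-anchored annotation records to nearby headsets. Every name and field below is invented for illustration; no existing AR platform works this way as far as I know.

```python
# Hypothetical record for a world-anchored AR annotation, e.g. a merchant's
# digital facade. All names and fields are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ARAnnotation:
    annotation_id: str
    lat: float                 # anchor latitude (WGS84)
    lon: float                 # anchor longitude (WGS84)
    altitude_m: float          # anchor height above street level, in meters
    content_url: str           # overlay or 3D asset to render at the anchor
    owner: str                 # who placed it: merchant, city, individual
    visible_to: list[str] = field(default_factory=lambda: ["public"])


def annotations_near(annotations: list[ARAnnotation],
                     lat: float, lon: float,
                     radius_deg: float = 0.001) -> list[ARAnnotation]:
    """Naive bounding-box lookup a headset might poll as the wearer moves."""
    return [a for a in annotations
            if abs(a.lat - lat) <= radius_deg and abs(a.lon - lon) <= radius_deg]
```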
We’ll also create new things (and places) out of thin air. Consider Niantic’s Pokémon Go. Players use the Pokémon Go app to walk around their neighborhoods searching for the titular pocket monsters.
Once in the right location, they use their phones like magic looking-glasses to “see” the Pokémon and try to capture them. I think it’s obvious that these types of experiences will be 1000x more engaging once they’re mediated via an AR wearable (in this case smart glasses).
There’s also an opportunity here for digital placemaking. A mobile phone screen is a tiny workspace. AR glasses give us a person’s entire field of vision as a canvas. Once we have that much more real estate to work with, we can build entire environments instead of being limited to discrete objects (like an individual Pokémon).
Picture something like a silent disco in a large open space. Instead of just music, though, we get a visual motif that everyone experiences together. My super cool use case is something like a giant group study hall held outside in a university quad, complete with a Kokiri Forest-themed overlay and the low hum of Zelda Lofi beats in the background.6
We might also reincarnate web1-style community building. Once upon a time, the internet was unindexed and fragmented. In the primordial soup of early online creation, little communities took root across disparate, niche message boards. Most of these places eventually died off as folks moved to the web2 platforms we know (and have deeply conflicted relationships with) today. But the memory of small-scale, idiosyncratic community building lives on in the minds of some elder millennials and Gen-Xers, even today. I believe AR wearables could bring some of that back.
Imagine a future where wearable AR glasses are as common as smartphones are today. People with some identity, group membership, or niche interest could make themselves (literally) visible to others with the same affiliation.
Maybe you’re new to town and open to running into other members of your religious or ethnic group. Or – to make this really web1 – maybe you have a hardcore interest in medieval underwater basket weaving and would welcome a conversation with someone about it while waiting in line for coffee. Whatever the specific use case, a killer feature might be opting into serendipitous connection by communicating commonality.
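For the technically inclined, the privacy-preserving core of that feature is tiny: reveal a tag only when both people have opted into broadcasting it. A toy sketch, with every name made up for illustration:

```python
# Toy sketch of opt-in "serendipitous connection": two wearers become
# visible to each other only for affiliations BOTH chose to broadcast.
# Hypothetical product behavior, not a real protocol.

def shared_broadcasts(mine: set[str], theirs: set[str]) -> set[str]:
    """Return only the tags both parties opted to make visible."""
    return mine & theirs


alice = {"new-in-town", "medieval-underwater-basket-weaving"}
bob = {"medieval-underwater-basket-weaving", "zelda-lofi"}

# Alice's glasses would highlight Bob (and vice versa) for exactly this overlap:
print(shared_broadcasts(alice, bob))  # {'medieval-underwater-basket-weaving'}
```

A production version would want to compute that intersection without either side exposing their full tag list (private set intersection is the standard cryptographic tool for this), but the user-facing behavior is just this.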
How the Digital Reshapes the Physical
Technology is shaped by the material conditions that create it. But once it scales, it reshapes those material conditions in turn. The classic urbanist example is the automobile. Cars only took off after we remade the world to better accommodate them (at least in the States). Digital technologies work the same way.
AR products will be virtual experiences built on top of the physical environment. Initially, they’ll be designed given the world as-is, but they’ll eventually exert their own influence back on the way cities work.
AR environments like my Zelda-themed immersive silent disco study hall™ might bring more people to public fields or plazas if a large open space makes for a better canvas. At scale, they might change where and when people hang out. They might also increase demand for public spaces that can support these kinds of uses.
Economic incentives will show up here as well. AR games that encourage players to go to certain physical locations will 100% be used for local marketing.
Pokémon Go strategies for small businesses have existed for years. Merchants can make their business a point of interest in the game (luring Pokémon to their location, hosting Gym battles, etc.), thereby attracting players (who might then buy a boba tea or whatever). Niantic even ran a Sponsored Locations for Business program for a while that let merchants “pay for placement”.7
In some ways, this model looks like a less toxic version of paid marketing on existing social media platforms. The second-order effects of bringing people together IRL feel less fraught than keeping an infinite number of users glued to their screens with rage bait.
On a final note, AR may also change the way we physically construct certain types of spaces. Maybe having a plain physical facade better facilitates elaborate AR signage. If that were the case, physical exteriors might begin looking more like blank slates.
Now, all of this is total speculation. I have no idea what AR’s specific impacts on construction or use of public space will be. But what I do know is that there will be an impact. Once we decide that AR is useful, we’ll start tweaking the physical environment to better facilitate whatever those use cases actually are.
How This All Goes Wrong
Up to this point, I’ve been painting an optimistic picture. Admittedly, I’m a techno-optimist at heart – I believe technology can and should make the world a better place. But I also believe that doesn’t happen by accident and I recognize that technology is only as good or bad as what it’s used for. So, let’s talk about how all this could go wrong.
Companies like Meta and Google built businesses tracking our behavior between different apps and websites.8 This digital surveillance state has powered highly personalized advertisements, giving rise to online marketing as we know it.
One danger is that the extra information generated by wearables gets poured into the open maw of digital advertising. This then feeds into the valid fear that AR wearables will be the next frontier in monetizing our attention by selling it to marketers.
From this point of view, letting the internet slip loose from the confines of our screens would be like opening a portal to hell and inviting dark gods of chaos into everyday reality.9
To this, I say…it’s possible (but not inevitable).
Product Design is Downstream of Economic Incentives
Web2 social media works the way it does because of the business model. These platforms make their money by aggregating attention and selling it to marketers. To that end, they’re incentivized to keep as many people glued to their screens as possible. The product design and the algorithmic prioritization of the worst possible kinds of content follow from that.
If wearables are just an extension of web2, things go to a dark place. That’s both in terms of an advertisement-laden AR landscape and toxic product experiences designed to maximally monetize our attention.
That said, I remain optimistic.
Apple is hot for wearables (and isn’t trying to monetize engagement as a business strategy). And the independent wearables startups seem to be running hardware-plus-subscription models. So, as long as Meta doesn’t win and create a monopoly on AR hardware, we’re probably good.
The YouTuber / trained architect Dami Lee (whose channel you should 100% check out) also raises equity concerns. She suggests that the wealthy might end up with access to better AR experiences (e.g. the AR equivalent of an ad-free YouTube subscription). I think these fears are overwrought, but there’s something worth unpacking here.
Software products have (almost) zero marginal cost. Once you’ve built a platform for one user, it costs next to nothing to provide access for n subsequent users.10 As for who has access to certain AR spaces, that will be more a function of who has access to the physical spaces those AR experiences are built on top of – which is not a new concern in the history of urbanism.
To the more specific question of whether AR experiences will be littered with advertisements, again, that’s a function of the business model. And if we get a “bad” business model, I think that applies across the board. Social media breaks all our brains, irrespective of socio-economic status.
Final Thoughts
I was of two minds writing about this topic. On the one hand, exploring something that cuts across two deeply held interests (product and urbanism) is super engaging. On the other, there are ten thousand additional thoughts, observations, and arguments I could have made that got cut. There are limits, after all, to how many cul-de-sacs we can explore in a single post.
With that in mind – what did I miss?
For the urbanists:
Can you imagine planners, architects, or developers designing spaces with an eye to how they’ll be used for AR? Or do you think AR design remains the sole province of product folks at tech firms?
Similarly, are there any problems that planners or architects face that AR might help solve?
For the tech folks:
What use cases have I forgotten?
What model of software distribution do you think wins out? (e.g. do AR products all just come from apps we download from an app store?)
Let me know in the comments and thanks for reading.
1. I spent half a decade at the on-demand delivery platform Postmates. I worked on a few different parts of the product and gained a lot of perspective on the way platforms impact local markets and communities.
2. OpenAI for object recognition, Perplexity for web search, and Whisper for translation.
3. This is less so for the Vision Pro, which tries to turn everything you can see into a screen, but our other examples are moving in this screen-less/screen-lite direction.
4. The book series Ilium/Olympos features a post-apocalyptic setting where surviving humans use futuristic technologies that they no longer understand. They access their version of the internet by visualizing commands like ‘green triangle’ or ‘yellow square’ and information gets piped directly into their minds (they’re all born with mind<>machine interfaces). To them, exhaling to blow out a candle is no different from thinking ‘purple oval’ to see where a friend is. It’s not technology, it’s just the way the world works. At some point, ambient computing might bring us to view technology in much the same way.
5. Technically, it’s almost all Google Maps, with services like Yelp hitting the Google Places API.
6. I’m open to the feedback that using AR to facilitate group study sessions might not, in fact, be super cool.
7. The program appears to be closed, and it’s unclear how widely it was ever rolled out.
8. Apple cracked down on this for iOS users in ’21.
9. Yes, this is a Warhammer 40K reference.
10. There are caveats, but for purposes of this conversation, consider this effectively true.