The mirrorworld doesn’t yet fully exist, but it is coming. Someday soon, every place and thing in the real world—every street, lamppost, building, and room—will have its full-size digital twin in the mirrorworld. For now, only tiny patches of the mirrorworld are visible through AR headsets. Piece by piece, these virtual fragments are being stitched together to form a shared, persistent place that will parallel the real world. The author Jorge Luis Borges imagined a map exactly the same size as the territory it represented. “In time,” Borges wrote, “the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.” We are now building such a 1:1 map of almost unimaginable scope, and this world will become the next great digital platform.
Google Earth has long offered a hint of what this mirrorworld will look like. My friend Daniel Suarez is a best-selling science fiction author. In one sequence of his most recent book, Change Agent, a fugitive escapes along the coast of Malaysia. His descriptions of the roadside eateries and the landscape matched exactly what I had seen when I drove there recently, so I asked him when he’d made the trip. “Oh, I’ve never been to Malaysia,” he smiled sheepishly. “I have a computer with a set of three linked monitors, and I opened up Google Earth. Over several evenings I ‘drove’ along Malaysian highway AH18 in Street View.” Suarez was seeing a crude version of the mirrorworld.
It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”
The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.
At first, the mirrorworld will appear to us as a high-resolution stratum of information overlaying the real world. We might see a virtual name tag hovering in front of people we’ve previously met. Perhaps a blue arrow showing us the right place to turn a corner. Or helpful annotations anchored to places of interest. (Unlike the dark, closed goggles of VR, AR glasses use see-through technology to insert virtual apparitions into the real world.)
Eventually we’ll be able to search physical space as we might search a text—“find me all the places where a park bench faces sunrise along a river.” We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products.
The mirrorworld will have its own quirks and surprises. Its curious dual nature, melding the real and the virtual, will enable now-unthinkable games and entertainment. Pokémon Go gives just a hint of this platform’s nearly unlimited capability for exploration.
These examples are trivial and elementary, equivalent to our earliest, lame guesses of what the internet would be, just after it was born—fledgling CompuServe, early AOL. The real value of this work will emerge from the trillion unexpected combinations of all these primitive elements.
The first big technology platform was the web, which digitized information, subjecting knowledge to the power of algorithms; it came to be dominated by Google. The second great platform was social media, running primarily on mobile phones. It digitized people and subjected human behavior and relationships to the power of algorithms, and it is ruled by Facebook and WeChat.
We are now at the dawn of the third platform, which will digitize the rest of the world. On this platform, all things and places will be machine-readable, subject to the power of algorithms. Whoever dominates this grand third platform will be among the wealthiest and most powerful people and companies in history, just as those who now dominate the first two platforms are. Also, like its predecessors, this new platform will unleash the prosperity of thousands more companies in its ecosystem, and a million new ideas—and problems—that weren’t possible before machines could read the world.
Glimpses of the mirrorworld are all around us. Perhaps nothing has proved that the marriage of the virtual and the physical is irresistible better than Pokémon Go, a game that immerses obviously virtual characters in the toe-stubbing reality of the outdoors. When it launched in 2016, there was an almost audible “Aha, I get it!” as the entire world signed up to chase cartoon characters in their local parks.
Pokémon Go’s alpha version of a mirrorworld has been embraced by hundreds of millions of players, in at least 153 countries. Niantic, the company that created Pokémon Go, was founded by John Hanke, who led the precursor to Google Earth. Today Niantic’s headquarters are housed on the second floor of the Ferry Building, along the piers in San Francisco. Wide floor-to-ceiling windows look out on the bay and to distant hills. The offices are overflowing with toys and puzzles, including an elaborate boat-themed escape room.
Hanke says that despite the many other new possibilities being opened up by AR, Niantic will continue to focus on games and maps as the best way to harness this new technology. Gaming is where technology goes to incubate: “If you can solve a problem for a gamer, you can solve it for everyone else,” Hanke adds.
But gaming isn’t the only context where shards of the mirrorworld are emerging. Microsoft, the other big contender in AR besides Magic Leap, has been producing its HoloLens AR devices since 2016. The HoloLens is a see-through visor mounted to a head strap. Once turned on and booted up, the HoloLens maps the room you’re in. You then use your hands to maneuver menus floating in front of you, choosing which apps or experiences to load. One choice is to hang virtual screens—as in laptop or TV screens—in front of you.
Microsoft’s vision for the HoloLens is simple: It’s the office of the future. Wherever you are, you can insert as many of your screens as you want and work from there. According to the venture capital firm Emergence, “80 percent of the global workforce doesn’t have desks.” Some of these deskless workers are now wearing HoloLenses in warehouses and factories, building 3D models and receiving training. Recently Tesla filed for two patents for using AR in factory production. The logistics company Trimble makes a safety-certified hard hat with the HoloLens built in.
In 2018 the US Army announced it was purchasing up to 100,000 upgraded models of the HoloLens headsets for a very nondesk job: to stay one step ahead of enemies on the battlefield and “increase lethality.” In fact, you are likely to put on AR glasses at work long before you put them on at home. (Even the much-maligned Google Glass headset is making quiet inroads in factories.)
In the mirrorworld, everything will have a paired twin. NASA engineers pioneered this concept in the 1960s. By keeping a duplicate of any machine they sent into space, they could troubleshoot a malfunctioning component while its counterpart was thousands of miles away. These twins evolved into computer simulations—digital twins.
General Electric, one of the world’s largest companies, manufactures hugely complex machines that can kill people if they fail: electric power generators, nuclear submarine reactors, refinery control systems, jet turbines. To design, build, and operate these vast contraptions, GE borrowed NASA’s trick: It started creating a digital twin of each machine. Jet turbine serial number E174, for example, could have a corresponding E174 doppelgänger. Each of its parts can be spatially represented in three dimensions and arranged in its corresponding virtual location. In the near future, such digital twins could become dynamic digital simulations of the engine. But this full-size, 3D digital twin is more than a spreadsheet. Endowed with volume, size, and texture, it acts like an avatar.
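The part-by-part structure described above can be pictured as a simple data model: each physical component is mirrored by a record carrying its spatial position and its latest sensor readings, so the whole machine can be inspected virtually. This is a minimal, hypothetical sketch, not GE's actual software; the part names, fields, and threshold are invented for illustration.

```python
# A toy digital twin: each physical part has a mirrored record with a 3D
# position and the most recent sensor values streamed from the machine.
from dataclasses import dataclass, field

@dataclass
class PartTwin:
    name: str
    position: tuple                                # (x, y, z) in the machine's frame, meters
    readings: dict = field(default_factory=dict)   # latest sensor values

@dataclass
class MachineTwin:
    serial: str
    parts: dict = field(default_factory=dict)      # part name -> PartTwin

    def update(self, part_name, sensor, value):
        """Mirror a reading from the physical machine onto its twin."""
        self.parts[part_name].readings[sensor] = value

    def overheating(self, limit_c=700.0):
        """Flag parts whose last temperature reading exceeds a limit."""
        return [p.name for p in self.parts.values()
                if p.readings.get("temp_c", 0.0) > limit_c]

# Build a hypothetical twin of jet turbine E174.
e174 = MachineTwin("E174")
e174.parts["fan_blade_3"] = PartTwin("fan_blade_3", (0.0, 1.2, 0.0))
e174.parts["combustor"] = PartTwin("combustor", (2.1, 0.0, 0.0))

e174.update("combustor", "temp_c", 812.5)   # streamed from the real engine
print(e174.overheating())                   # ['combustor']
```

The point of the structure is that diagnosis happens against the twin: a technician queries the mirrored records rather than opening the physical engine.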
In 2016, GE recast itself as a “digital industrial company,” which it defines as “the merging of the physical and digital worlds.” Which is another way of saying it is building the mirrorworld. Digital twins already have improved the reliability of industrial processes that use GE’s machines, like refining oil or manufacturing appliances.
Microsoft, for its part, has expanded the notion of digital twins from objects to whole systems. The company is using AI “to build an immersive virtual replica of what is happening across the entire factory floor.” What better way to troubleshoot a giant six-axis robotic mill than by overlaying the machine with its same-sized virtual twin, visible with AR gear? The repair technician sees the virtual ghost shimmer over the real. She studies the virtual overlay to see the likely faulty parts highlighted on the actual parts. An expert back at HQ can share the repair technician’s views in AR and guide her hands as she works on the real parts.
Eventually, everything will have a digital twin. This is happening faster than you may think. The home goods retailer Wayfair displays many millions of products in its online home-furnishing catalog, but not all of the pictures are taken in a photo studio. Instead, Wayfair found it was cheaper to create a three-dimensional, photo-realistic computer model for each item. You have to look very closely at an image of a kitchen mixer on Wayfair’s site to discern its actual virtualness. When you flick through the company’s website today, you are getting a peek into the mirrorworld.
Wayfair is now setting these digital objects loose in the wild. “We want you to shop for your home, from your home,” says Wayfair cofounder Steve Conine. It has released an AR app that uses a phone’s camera to create a digital version of an interior. The app can then place a 3D object in a room and keep it anchored even as you move. With one eye on your phone, you can walk around virtual furniture, creating the illusion of a three-dimensional setting. You can then place a virtual sofa in your den, try it out in different spots in the room, and swap fabric patterns. What you see is very close to what you get.
When shoppers try such a service at home, they are “11 times more likely to buy,” according to Sally Huang, the lead of Houzz’s similar AR app. This is what Ori Inbar, a VC investor in AR, calls “moving the internet off screens into the real world.”
For the mirrorworld to come fully online, we don’t just need everything to have a digital twin; we also need to build a 3D model of physical reality in which to place those twins. Consumers will largely do this themselves: When someone gazes at a scene through a device, particularly wearable glasses, tiny embedded cameras looking out will map what they see. The cameras only capture sheets of pixels, which don’t mean much. But artificial intelligence—embedded in the device, in the cloud, or both—will make sense of those pixels; it will pinpoint where you are in a place, at the very same time that it’s assessing what is in that place. The technical term for this is SLAM—simultaneous localization and mapping—and it’s happening now.
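The SLAM loop can be illustrated with a toy one-dimensional version: a device whose odometry drifts refines both its own position and a map of landmarks from the same stream of noisy readings. This is a deliberately crude averaging scheme under invented numbers, not any production SLAM algorithm; real systems work from camera features and use probabilistic filters or pose-graph optimization.

```python
# Toy 1D SLAM: localize a moving device and map landmarks at the same time.
import random

random.seed(42)

TRUE_LANDMARKS = {"lamppost": 4.0, "tree": 9.0, "storefront": 15.0}

def simulate_and_map(steps=30, step_size=0.5):
    true_pose, est_pose = 0.0, 0.0
    landmark_est = {}   # name -> (position estimate, observation count)
    for _ in range(steps):
        # Motion: odometry drifts, so the pose estimate accumulates error.
        true_pose += step_size
        est_pose += step_size + random.gauss(0, 0.02)
        # Mapping: noisy range readings place each landmark relative to
        # the current (imperfect) pose estimate; running average smooths them.
        for name, pos in TRUE_LANDMARKS.items():
            reading = (pos - true_pose) + random.gauss(0, 0.05)
            guess = est_pose + reading
            mean, n = landmark_est.get(name, (guess, 0))
            landmark_est[name] = (mean + (guess - mean) / (n + 1), n + 1)
        # Localization: fresh readings against the current map pull the
        # pose estimate back toward consistency with the landmarks.
        corrections = [
            landmark_est[name][0]
            - ((TRUE_LANDMARKS[name] - true_pose) + random.gauss(0, 0.05))
            for name in TRUE_LANDMARKS
        ]
        est_pose = 0.8 * est_pose + 0.2 * (sum(corrections) / len(corrections))
    return est_pose, {k: v[0] for k, v in landmark_est.items()}
```

The structure is the essential thing: the map improves the pose estimate, and the pose estimate improves the map, simultaneously.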
For example, the startup 6D.ai built a platform for developing AR apps that can discern large objects in real time. If I use one of these apps to take a picture of a street, it recognizes each car as a separate car-object, each streetlight as a tall object different from the nearby tree-objects, and the storefronts as planar things behind the cars—dividing the world into a meaningful order.
And that order will be continuous and connected. In the mirrorworld, objects will exist in relation to other things. Digital windows will exist in the context of a digital wall. Rather than connections generated by chips and bandwidth, the connections will be contextual, generated by AIs. The mirrorworld, then, also creates the long-heralded internet of things.
Another app on my phone, Google Lens, can also see discrete objects. It is already smart enough to identify the breed of a dog, the design of a shirt, or the species of a plant. Soon these functions will integrate. When you look around your living room with magic glasses, the system will be taking it all in piece by piece, informing you that here is a framed etching on the wall and there is four-colored wallpaper, and that this is a vase of white roses and this is an antique Persian carpet, and over here is a nice empty spot where your new sofa could go. Then it will say, based on the colors and styles of the furniture you already have in the room, we recommend this color and style of sofa. You’ll like it. May we suggest this cool lamp as well?
Augmented reality is the technology underpinning the mirrorworld; it is the awkward newborn that will grow into a giant. “Mirrorworlds immerse you without removing you from the space. You are still present, but on a different plane of reality. Think Frodo when he puts on the One Ring. Rather than cutting you off from the world, they form a new connection to it,” writes Keiichi Matsuda, former creative director for Leap Motion, a company that develops hand-gesture technology for AR.
The full blossoming of the mirrorworld is waiting for cheap, always-on wearable glasses. Speculation has been rising that one of the largest tech companies may be developing just such a product. Apple has been on an AR hiring spree and recently acquired a startup called Akonia Holographics that specializes in thin, transparent “smart glass” lenses. “Augmented reality is going to change everything,” Apple CEO Tim Cook said during an earnings call in late 2017. “I think it’s profound, and I think Apple is in a really unique position to lead in this area.”
But you don’t need to use AR glasses; you can engage using almost any kind of device. You can kind of do this today with Google’s Pixel phone, but without the convincing presence that you get with 3D visors. Even now, wearables like watches or smart clothes can detect the proto-mirrorworld and interact with it.
Everything connected to the internet will be connected to the mirrorworld. And anything connected to the mirrorworld will see and be seen by everything else in this interconnected environment. Watches will detect chairs; chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them.
The rise of a massive mirrorworld will rely in part on a fundamental shift underway right now, away from phone-centric life and toward a technology that is two centuries old: the camera. To recreate a map that is as big as the globe—in 3D, no less—you need to photograph all places and things from every possible angle, all the time, which means you need to have a planet full of cameras that are always on.
We are making that distributed, all-seeing camera network by reducing cameras to pinpoint electric eyes that can be placed anywhere and everywhere. Like computer chips before them, cameras are becoming better, cheaper, and smaller every year. There may be two in your phone already and a couple more in your car. There is one in my doorbell. Most of these newer artificial eyes will be right in front of our own eyes, on glasses or in contacts, so that wherever we humans look, that scene will be captured.
The heavy atoms in cameras will continue to be replaced with bits of weightless software, shrinking them down to microscopic dots scanning the environment 24 hours a day. The mirrorworld will be a world governed by light rays zipping around, coming into cameras, leaving displays, entering eyes, a never-ending stream of photons painting forms that we walk through and visible ghosts that we touch. The laws of light will govern what is possible.
New technologies bestow new superpowers. We gained super speed with jet planes, super healing powers with antibiotics, super hearing with the radio. The mirrorworld promises super vision. We’ll have a type of x-ray vision able to see into objects via their virtual ghosts, exploding them into constituent parts, able to untangle their circuits visually. Just as past generations gained textual literacy in school, learning how to master the written word, from alphabets to indexes, the next generation will master visual literacy. A properly educated person will be able to create a 3D image inside of a 3D landscape nearly as fast as one can type today. They will know how to search all videos ever made for the visual idea they have in their head, without needing words. The complexities of color and the rules of perspective will be commonly understood, like the rules of grammar. It will be the Photonic Era.
But here’s the most important thing: Robots will see this world. Indeed, this is already the perspective from which self-driving cars and robots see the world today, that of reality fused with a virtual shadow. When a robot is finally able to walk down a busy city street, the view it will have in its silicon eyes and mind will be the mirrorworld version of that street. The robot’s success in navigating will depend on the previously mapped contours of the road—existing 3D scans of the light posts and fire hydrants on the sidewalk, of the precise municipal position of traffic signs, of the exquisite details on doorways and shop windows rendered by landlord scans.
Of course, like all interactions in the mirrorworld, this virtual realm will be layered over the view of the physical world, so the robot will also see the real-time movements of people as they walk by. The same will be true of the AIs driving cars; they too will be immersed in the mirrorworld. They will rely on the fully digitized version of roads and cars provided by the platform. Much of the real-time digitization of moving things will be done by other cars as they drive around themselves, because all that a robot sees will be instantly projected into the mirrorworld for the benefit of other machines. When a robot looks, it will be both seeing for itself and providing a scan for other robots.
In the mirrorworld too, virtual bots will become embodied; they’ll get a virtual, 3D, photorealistic shell, whether machine, animal, human, or alien. Inside the mirrorworld, agents like Siri and Alexa will take on 3D forms that can see and be seen. Their eyes will be the embedded billion eyes of the matrix. They will be able not just to hear our voices but also, by watching our virtual avatars, to see our gestures and pick up on our microexpressions and moods. Their spatial forms—faces, limbs—will also increase the nuances of their interactions with us. The mirrorworld will be the badly needed interface where we meet AIs, which otherwise are abstract spirits in the cloud.
There is another way to look at objects in the mirrorworld. They can be dual use, performing different roles in different planes. “We can pick up a pencil and use it as a magic wand. We can turn our tables into touchscreens,” Matsuda writes.
We will be able to mess not only with the locations and roles of objects but with time as well. Say I’m walking along a path beside the Hudson River, the real Hudson River, and I notice a wren’s nest that my bird-watching friend would be keen to know about, so I leave a virtual note along the path for her. It remains there until she passes by. We saw the same phenomenon of persistence with Pokémon Go: virtual creatures remaining in a real physical location, waiting to be encountered. Time is a dimension in the mirrorworld that can be adjusted. Unlike the real world, but very much like the world of software apps, you will be able to scroll back.
History will be a verb. With a swipe of your hand, you will be able to go back in time, at any location, and see what came before. You will be able to lay a reconstructed 19th-century view right over the present reality. To visit an earlier time at a location, you simply revert to a previous version kept in the log. The entire mirrorworld will be like a Word or Photoshop file that you can keep “undoing.” Or you’ll scroll in the other direction: forward. Artists might create future versions of a place, in place. The verisimilitude of such crafty world-building will be revolutionary. These scroll-forward scenarios will have the heft of reality because they will be derived from a full-scale present world. In this way, the mirrorworld may be best referred to as a 4D world.
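Reverting to "a previous version kept in the log" can be sketched as a version history keyed by place: every update appends a timestamped snapshot, and viewing the past is just reading the most recent entry at or before the chosen moment. The class, place, and dates below are invented for illustration.

```python
# A toy version log for places: record snapshots, then "scroll back."
from bisect import bisect_right

class PlaceHistory:
    def __init__(self):
        self.log = {}   # place -> sorted list of (year, description)

    def record(self, place, year, description):
        self.log.setdefault(place, []).append((year, description))
        self.log[place].sort()

    def view(self, place, year):
        """Return the most recent snapshot at or before `year`."""
        entries = self.log.get(place, [])
        # chr(0x10FFFF) sorts after any description, so an exact-year
        # entry is included in the "at or before" window.
        i = bisect_right(entries, (year, chr(0x10FFFF)))
        return entries[i - 1][1] if i else None

hudson = PlaceHistory()
hudson.record("pier_40", 1890, "timber shipping pier")
hudson.record("pier_40", 1998, "parking structure and ball fields")
hudson.record("pier_40", 2025, "riverside park")

print(hudson.view("pier_40", 1950))   # timber shipping pier
```

Scrolling forward would work the same way, with entries dated in the future holding artists' speculative versions of the place.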
Like the web and social media before it, the mirrorworld will unfold and grow, producing unintended problems and unexpected benefits. Start with the business model. Will we try to jump-start the platform with the shortcut of advertising? Probably. I am old enough to remember the internet before it allowed commercial activity, and it was just too broke to grow. A commercial-free mirrorworld would be infeasible and undesirable. However, if the only business model is selling our attention, then we’ll have a nightmare—because, in this world, our attention can be tracked and directed with much greater resolution, which subjects it to easy exploitation.
On a macro scale, the mirrorworld will exhibit the crucial characteristic of increasing returns. The more people use it, the better it gets. The better it gets, the more people will use it, and so on. That self-reinforcing circuit is the prime logic of platforms, and it’s why platforms—like the web and social media—grow so fast and so vast. But this dynamic is also known as winner-take-all; this is why one or two parties come to dominate platforms. We are just now trying to figure out how to deal with these natural monopolies, these strange new beasts like Facebook and Google and WeChat, which have the characteristics of governments as well as corporations. To muddy the view further, all these platforms are messy mixtures of centralization and decentralization.
In the long term, the mirrorworld can only sustain itself as a utility; like other utilities such as water, electricity, or broadband, we’ll have to pay a regular recurring fee—a subscription. We will be happy to do that when (and if) we believe we get real value from this virtual place.
The emergence of the mirrorworld will affect us all at a deeply personal level. We know there will be severe physiological and psychological effects of dwelling in dual worlds; we’ve already learned that from our experience living in cyberspace and virtual realities. But we don’t know what these effects will be, much less how to prepare for them or avoid them. We don’t even know the exact cognitive mechanism that makes the illusion of AR work in the first place.
The great paradox is that the only way to understand how AR works is to build AR and test ourselves in it. It’s weirdly recursive: The technology itself is the microscope needed to inspect the effects of the technology.
Some people get very upset by the idea that new technologies will create new harms and that we willingly surrender ourselves to these risks when we could adopt the precautionary principle: Don’t permit the new unless it is proven safe. But that principle is unworkable, because the old technologies we are in the process of replacing are even less safe. More than 1 million humans die on the roads each year, but we clamp down on robot drivers when they kill one person. We freak out over the unsavory influence of social media on our politics, while TV’s partisan influence on elections is far, far greater than Facebook’s. The mirrorworld will certainly be subject to this double standard of stricter norms.
Many of the risks of the mirrorworld are easy to imagine, because they are the same ones we see on current platforms. For instance, we’ll need mechanisms in the mirrorworld to prevent fakes, stop illicit deletions, spot rogue insertions, remove spam, and reject insecure parts. Ideally, we can do this in a way that is open to all participants, without having to involve a Big Brother overseer like a dominant corporation.
Blockchain has been looking for a job, and ensuring the integrity of an open mirrorworld might be what it was born to do. There are enthusiastic people working on that possibility right now. Unfortunately, it is not too difficult to imagine scenarios where the mirrorworld is extensively centralized, perhaps by a government. We still have a choice about this.
Without exception, every researcher in this field that I’ve spoken to has been acutely aware of these divergent paths and claims to be working toward a decentralized model—for many reasons, including the chief one that a decentralized and open platform will be richer and more robust. Clay Bavor, vice president of AR and VR at Google, says, “We want an open service that gets better each time someone uses it, like the web.”
The mirrorworld will raise major privacy concerns. It will, after all, contain a billion eyes glancing at every point, converging into one continuous view. The mirrorworld will create so much data, big data, from its legions of eyes and other sensors, that we can’t imagine its scale right now. To make this spatial realm work—to synchronize the virtual twins of all places and all things with the real places and things, while rendering it visible to millions—will require tracking people and things to a degree that can only be called a total surveillance state.
We reflexively recoil at the specter of such big data. We can imagine so many ways it might hurt us. But there are a few ways big data might benefit us, and the prime one is the mirrorworld. The route to civilizing big data so that we gain more than we lose is uncertain, complex, and not obvious.
But we already have some experience that can inform our approach to the mirrorworld. Good practices include mandatory transparency and accountability for any party that touches the data; symmetry in the flow of information, so that the watchers are themselves watched; and the insistence that data creators—you and me—receive clear benefits, including monetary ones, from the system. I am optimistic that a viable path can be found to handle this ubiquitous data, because the mirrorworld is not the only place it will accumulate. Big data will be everywhere. My hope is that with a fresh start, the mirrorworld is the place we can figure this out first.
From the earliest stirrings of the internet, the digital world was seen as a disembodied cyberspace—an intangible realm separated from the physical world, and so unlike material existence that this electronic space could claim its own rules. In many respects, the virtual and the physical worlds have indeed run in parallel, never meeting. In the virtual there was a sense of infinite liberty, unleashed by disconnecting from physical form: free of friction, gravity, momentum, and all the Newtonian constraints holding us back. Who wouldn’t want to escape into cyberspace to become the best (or worst) version of themself?
The mirrorworld bends that trajectory upon itself. Rather than continue two separate realms, this new platform melds the two so that digital bits are embedded into materials made of atoms. You interact in the virtual by interacting in the physical, moving your muscles, stubbing your toes. Information about that famous water fountain in a Roman plaza can be found at that fountain in Rome. To troubleshoot a 180-foot wind turbine, we troubleshoot its digital ghost. Pick up a towel in your bathroom and it becomes a magical cape. We will come to depend on the fact that every object contains its corresponding bits, almost as if every atom has its ghost, and every ghost its shell.
I imagine it will take at least a decade for the mirrorworld to develop enough to be used by millions, and several decades to mature. But we are close enough now to the birth of this great work that we can predict its character in rough detail.
Eventually this melded world will be the size of our planet. It will be humanity’s greatest achievement, creating new levels of wealth, new social problems, and uncountable opportunities for billions of people. There are no experts yet to make this world; you are not late.
Kevin Kelly ([email protected]) was WIRED’s founding executive editor. He’s the author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future and many other books, including What Technology Wants; New Rules for the New Economy; and Out of Control: The New Biology of Machines, Social Systems, and the Economic World.
This article appears in the March issue.