“But I don’t want to wear glasses”

I wrote this to think through some of the issues coming in the Spatial Computing world while preparing for a book that Irena Cronin and I are writing, due in 2020. We decided not to run this piece in the book, so I am running it here instead.

You will be fired if you don’t wear fourth paradigm Spatial Computing devices.

Those of us who have to wear older-style corrective lenses simply to see, as humans have for hundreds of years, already get that wearing something on your face makes life easier, safer, and even more enjoyable. We get that we couldn’t even keep our jobs if we didn’t wear glasses.

These corrective lenses turn the world from blurry to sharp. From unreadable to readable. From headache-inducing to beautiful. Yes, those of us who wear glasses would rather not wear them either, if that were an option, but it isn’t. Without them life would simply suck and, yes, we’ve looked at all the alternatives, from surgery to contacts. Glasses are simply better for many of us whose eyes need optical correction.

People who wear glasses are already augmenting the world around them. Without them we can’t see the words on the screen in front of us as we type. Every time we put on our glasses, the real world gets noticeably better. We can’t imagine living life without our glasses. Soon we won’t be able to imagine living life without devices that not only correct our vision but add new virtual and augmented reality capabilities to it.

They won’t stop at augmenting our eyes, either. Bose already has wearable devices that augment our hearing (they look like glasses, but they don’t do much for your eyes; it’s your ears they focus on), and tons of haptic devices are being developed that augment our ability to feel and touch.

In labs we’ve been visiting we’ve seen Spatial Computing glasses that will augment the world around us even more. We call them fourth paradigm devices because they are the fourth paradigm the personal computing era has brought us. More on the four paradigms in a bit.

We have already seen devices like Google Glass that gave us a taste of how such glasses could improve the world, even while introducing new problems. The deepest problem was that they didn’t provide enough benefit to offset the real social costs of wearing a computing device and the weight of a computer on our faces. The small monitor that Google Glass showed us is tiny and not at all impressive compared to the devices coming to market in 2020.

The fact that we can now state you will be fired for not wearing these new devices introduces even more social problems: there will be those who resist and resent those who wear them. New cultural divides will appear, along with new privacy challenges. That said, those who resist wearing the glasses will be putting themselves at a huge disadvantage to those who do. Given enough time we can see pretty much everyone jumping on board, but the Amish prove that some people will resist new technologies. Most of you, though, will probably not choose to live as the Amish do, driving a horse and buggy and resisting smartphones and all the benefits computing brings.

There are new optics coming. The ones we are seeing in labs can literally remove light from the air before those photons arrive at your eyes. That will be useful for turning the sky dark, for instance, or for replacing objects or people in front of you with virtual replacements and avatars.

These new optics and wearable screens, coming by 2022, along with the smaller computers that will drive them, will bring us all into a 3D world that few can imagine today. This shift in technology and how we view the world will be the biggest shift in technology in human history and will change literally everything we know about the way the real world works and the industries humans run.

We know we won’t be able to get jobs without them, just as an Uber or Lyft driver would be unable to get work if she refused to carry a modern smartphone.

If we didn’t wear our corrective lenses there is no way we could write a book, drive a car, see notes up on screens, or even throw a ball with our kids. That is why we answer that if you don’t wear the Spatial Computing glasses of the future you will lose your job and your ability to participate in modern society. Those who wear the glasses will simply outperform those who refuse, and the difference in performance will be as huge as the one we would suffer if we stopped wearing the corrective lenses we wear today.

Yes, we hear those who say “but the real world is awesome enough; after all, how can you make something like Yosemite even more beautiful?” along with the cruder rejection: “I hate glasses and will never wear them.” Those who wear glasses already know both statements are ridiculous, because we are already seeing a much more beautiful world through our old glasses, and we are already forced to wear them, even in beautiful places like Yosemite National Park.

We agree that the real world is pretty beautiful and that humans have evolved over millions of years to sense it through the analog waves of photons that reflect off surfaces around us. Our eyes are remarkable sensors that haven’t yet been matched by even the most expensive cameras, although cameras are about to get much better and smaller (the LG team that builds the cameras in iPhones and Teslas told us that in a few years they will be). Our eyes can see in low light, in high contrast, in rapidly changing conditions, and in the glare of sunlight on white snow, all without seemingly much challenge, while smartphone cameras still can’t match the light-collecting abilities of our eyes and retinas, which turn that light into signals our brains can interpret. We are jealous of those whose eyes can see this world perfectly, but our wearing glasses is about to turn the tables: those of you with perfect eyes will have to start wearing glasses too, and many will resist. This resistance gives those of us who already wear glasses a major advantage: we are ready for the move to 3D computing, er, Spatial Computing, already.

The difference between the real world, which we sense by collecting analog waves of photons and sound, and the digital world, is quickly shrinking and this is significant.

To understand this gap, let’s go back to rock star Neil Young’s studio. Listening to his music in its original analog format is the most enjoyable way to hear it. Why? It’s a smooth wave that he and his audio engineer captured on tape. You know this to be true if you have ever listened to music on a vinyl record. That’s analog. A small needle rides in a groove in the vinyl as it spins, moving back and forth as smoothly as the singer sang or the musician played. When you compare that to what you hear on your phone, which arrives as streams of numbers, there is a slight difference. The vinyl is more enjoyable. You can hear more detail in the highs.
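The "streams of numbers" idea can be made concrete with a little arithmetic. The sketch below is our own illustration (nothing from Young's studio): it samples a sine wave at the CD standard of 44,100 samples per second and rounds each sample to a 16-bit integer, then measures how far those numbers drift from the smooth wave. The error is tiny, but it is never zero.

```python
import math

# Toy illustration: digitizing an "analog" wave the way CD-quality
# audio does. Sample it 44,100 times per second and round each sample
# to one of 65,536 levels (16 bits).
SAMPLE_RATE = 44_100          # samples per second (CD standard)
BITS = 16                     # bits per sample (CD standard)
LEVELS = 2 ** BITS

def sample_and_quantize(freq_hz, seconds=0.01):
    """Return 16-bit integer samples of a sine wave in [-1.0, 1.0]."""
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        analog = math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        # Quantize: map [-1.0, 1.0] onto the integers [-32767, 32767].
        digital = round(analog * (LEVELS // 2 - 1))
        samples.append(digital)
    return samples

samples = sample_and_quantize(440.0)   # an A440 tone
# Worst-case rounding error is half a quantization step: tiny, but the
# numbers are always an approximation of the original wave.
max_error = max(
    abs(s / (LEVELS // 2 - 1) - math.sin(2 * math.pi * 440.0 * i / SAMPLE_RATE))
    for i, s in enumerate(samples)
)
print(max_error)  # roughly on the order of 1/65536
```

Higher-resolution formats like the ones Pono played shrink that error by using more bits and more samples per second, which is exactly why the digital copy keeps getting closer to the analog master.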

So why didn’t we stick with vinyl if it’s so much better to listen to? Well, some do, but most of us use Spotify, Apple Music, or YouTube to listen to music now. Why? Digital is a lot more convenient. Vinyl can’t work while you commute on a subway, drive a car, or fly around the world. For the music industry, distributing music on vinyl is much more difficult. On Spotify and other services there are millions of songs to search and instantly play. To do that on vinyl would require a huge library of albums, something even radio stations can’t afford anymore, and then you’d have to either be a skilled librarian or pay someone to organize all those albums. Plus, on some digital services these songs come with lyrics, and on others they come with music videos that are impossible to encode using analog techniques.

Digital wins, except in the rarest of cases. Analog is too inconvenient and too expensive to distribute or listen to. Even Young admits this and brought to market a device, the Pono, to let us listen to much higher quality digital music. We were in his studio to understand that the gap between digital and analog was shrinking because of devices like that, and because computers and wireless networks were getting so fast that we could listen to much higher resolution digital music that got close to the original analog master.

A similar win will happen as we move to the augmented world that Spatial Computing brings. This augmented world will arrive via several different approaches. The first, virtual reality (AKA “VR”), shows you a completely digital, er, virtual, world that you can move around in. If you haven’t experienced VR yet you should make a commitment to getting into a VR headset for a few hours. Those who haven’t will have a tough time understanding why we are so adamant that the world is about to deeply change. Already there is a spectrum of devices that let you see everything from a completely virtual world (VR) to virtual items put on top of, or replacing, the real world (AR, Augmented Reality). Some devices you wear will be more focused on letting you see some of the real world. Others will be more focused on higher resolution and wrap-around screens for enjoying a mostly virtual one.

This is why, instead of calling these things VR or AR glasses, we just lump them all into Spatial Computing, which is computing that you, a robot, or a virtual being can move around in. Spatial Computing includes all the devices that do VR and AR, along with other things like autonomous cars (they move around the real world and use very similar computer technology to see, process, and navigate around the real world as our VR or AR devices do) and cameras that help computers “see” the real world.

Spatial Computing devices were developed for military purposes over decades (F-35 pilots are forced to wear them simply to fly that plane), and they bring a dramatically different kind of user experience than, say, Microsoft Windows, which was designed for flat screens on a desk or a laptop, or smartphones running Android or iOS, which are also designed for flat screens you can touch.

In Spatial Computing you can move around a virtual world, and with controllers you can touch, manipulate, shoot, and otherwise use your hands in a way that is closer to the real world than in other computing types. It also lets you look around a 3D world that completely surrounds you. Some people denigrated early efforts, like the VR headsets that HTC and Oculus produced, as a fad, comparing them to the Segway or earlier 3DTV attempts. These denigrations show they really didn’t understand how deeply VR technology changes how we interact with computers. We saw that in our own houses as kids came over, found our VR headsets and controllers, and without instruction started up experiences like Job Simulator, doing things like moving files that looked like real-world files from file cabinet to file cabinet.

Doing similar things on our Apple IIs back in the late 1970s took hours of learning commands to type into the computer. In VR a four-year-old did it within seconds without any instructions. These observations, both anecdotal and from research labs, show why Spatial Computing will eventually bring computing to billions of new people who have never considered owning a smartphone.

If you visit Jeremy Bailenson’s VR lab at Stanford University, as Mark Zuckerberg did before acquiring Oculus, an early consumer VR producer, you’ll learn a lot about the magic that VR brings to bear.

His lab is a simple-looking room with some VR headsets hanging on the wall. Yeah, he has an expensive audio system built into the floor, and he uses it to great effect in some of his research. On the day we visited he asked us to walk across a virtual plank while wearing one of those headsets. We had been expecting this, since we had seen the same demo on YouTube and watched people freak out, unable to walk across the plank. Their minds had fooled them into thinking they were on a plank above a big abyss. Their eyes had fooled them into thinking they might fall to their deaths.

We were going to be braver and smarter than that.

So, when he started us out on the plank we thought we would just remember we were walking on a conference room floor and that there was nothing to be afraid of. What we hadn’t expected is that the floor would shake, thanks to that big audio system underneath it. Our minds freaked out. Certain death seemed imminent even though our rational brains remembered it was just a regular floor we were walking on.

This demo, and others where he embodied us as children and as homeless people, showed us that VR does something to our brains that other media like TV, radio, or flat video screens can’t even touch. He then explained how this embodiment works to fool our brains. When you are wearing a VR headset you are “immersed” in what the software developer built for you to experience. Your brain buys into this immersion and thinks it’s the real world. The immersive effect is so powerful that you instantly start flying like a bird if you look down and see your arms have become wings. This ability to embody other people, or animals, is powerful. We saw dozens of people fly like birds while experiencing one of Chris Milk’s creations at the Sundance Film Festival in 2017. They all started flapping their arms and flying within half a second of being put into a bird’s form.

In VR you will experience vertigo, or the fear of heights you probably have, the same way it’s uncomfortable to look down off a cliff’s edge in Yosemite. Your brain interprets the virtual and the analog the same way: as a threat to your safety. We have seen people throw off VR headsets because what they were viewing was so uncomfortable. We tell you this not to scare you off of trying VR but to explain its power to so completely fool your brain into thinking what it is seeing is real, even if it’s not real at all.

For the past few years Bailenson and his students have been doing research on this embodiment effect, studying whether it can lead to greater empathy for other people’s experiences and even for the environment itself. His team found that it does, indeed, have a strong empathy-forming effect. People taken on a virtual scuba dive to see how carbon emissions are affecting the oceans came away with changed opinions, and in other research, being put into the viewpoint of a Black child changed participants’ understanding of racism.

Bailenson also told us about some of the other magic of Spatial Computing. For one, it can so completely fool the brain that it can take away pain, even among those who suffer from horrible burns on their skin. He told us about pain research that VR pioneer Tom Furness and his teams at the University of Washington had done. Furness stumbled onto the discovery after dentists started buying early headsets, having noticed that patients didn’t feel as much pain while wearing them. He and his students did more research and found that Spatial Computing, in particular VR, by distracting your brain away from pain, is better than morphine at relieving the pain burn victims feel. The treatment has patients play a snowball-throwing game in a virtual snowfield; the research is on the Web at http://vrpain.com

The 3D thinking that Spatial Computing brings to us lets us see our world and ourselves quite differently and its ability to change our brains will be built upon by the fourth paradigm glasses of the future.

So, how will you get fired if you refuse to put the glasses on? Well, we can imagine one morning your boss brings in a set and has bought a data visualization tool to help your team walk around factory floors in a new way. BadVR’s Suzanne Borders, whom we discussed in the first chapter, builds such tools. Won’t you look ridiculous if everyone else puts the glasses on and walks around the factory floor while you stand there telling your team, “I refuse to wear those glasses”? If you don’t soon join in, you will be at a significant disadvantage to everyone else on the team. We imagine that within a week your boss will sit you down and say, “Hey, if you aren’t willing to put on the glasses you can no longer work here.” It has happened before. Can you imagine someone surviving long in a modern company by saying, “I refuse to touch those computer things”?

Capturing humans to make a synthetic world: converting 2D videos of humans to 3D with RADiCAL

It used to be that motion capture was so expensive that only big movie studios could afford to do it. No more, we learned in New York.

Irena Cronin, CEO of Infinite Retina, and I are talking with people all over the world who are building Spatial Computing companies, doing research for a book, to be published in 2020, about the seven industries Spatial Computing will disrupt in the 2020s. Entertainment is one of them, and this video shows the disruption is well underway.

This week we were at Betaworks Studios to meet companies building synthetic characters and worlds for the computing of the future. Here Gavan Gravesen, founder and CEO of RADiCAL, and Anand Ravipati, product manager, show us their latest work, Motion by RADiCAL, which senses humans in a new way: by using AI to convert video coming from a standard smartphone camera.

Note that Gravesen says studios like Disney won’t be able to keep up with the 25 million content creators pumping content into YouTube and other places (he sees that number doubling between now and 2025).

This is significant. Why? They take a human captured on a standard 2D camera and turn that into a moving 3D skeleton that can be augmented in a digital world.

A few minutes after you upload a video of yourself, or of someone you captured dancing, to RADiCAL’s servers, you get back a 3D model. By the end of 2020, and maybe by the end of 2019, they predict the entire process will run on a modern smartphone without uploading.

What this means is that we are about to be able to capture people with a 2D smartphone camera and get a virtual representation to use in a variety of ways, such as bringing the model into a 3D tool like Unity. They are also exploring a bunch of self-contained consumer apps to do fun things with these 3D versions of ourselves.
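RADiCAL has not published how its AI works, but one classic geometric idea behind lifting a 2D skeleton to 3D can be sketched in a few lines. Assuming a simple orthographic camera and known bone lengths (both simplifications; the joint names and lengths below are made up for illustration), a bone that looks shorter on screen than its real length must be tilted in depth, and by how much is just the Pythagorean theorem:

```python
import math

# Hypothetical toy, NOT RADiCAL's actual method: under orthographic
# projection, a bone of known 3D length L that appears with 2D length l
# must span a depth of dz = sqrt(L^2 - l^2). The sign of dz (toward or
# away from the camera) is ambiguous; real systems use learning to
# resolve it, here we simply pick the positive direction.
BONES = [          # (parent joint, child joint, assumed 3D length)
    ("hip", "knee", 0.45),
    ("knee", "ankle", 0.42),
]

def lift_to_3d(points_2d, bones=BONES):
    """points_2d: {joint: (x, y)}. Returns {joint: (x, y, z)}."""
    root = bones[0][0]
    x, y = points_2d[root]
    points_3d = {root: (x, y, 0.0)}          # root depth is arbitrary
    for parent, child, length in bones:
        px, py = points_2d[parent]
        cx, cy = points_2d[child]
        l2 = (cx - px) ** 2 + (cy - py) ** 2   # squared 2D bone length
        dz = math.sqrt(max(length ** 2 - l2, 0.0))
        points_3d[child] = (cx, cy, points_3d[parent][2] + dz)
    return points_3d

# A leg seen from the front, slightly bent: the knee's 2D distance from
# the hip is shorter than the real bone, so the knee must come forward.
pose = lift_to_3d({"hip": (0.0, 1.0), "knee": (0.0, 0.7), "ankle": (0.0, 0.3)})
```

A production system faces much harder problems (perspective cameras, occluded joints, unknown bone lengths, resolving the depth sign per frame), which is where the AI comes in, but the sketch shows why a single 2D view carries real 3D information at all.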

Breakthroughs, like this, are happening all over the Spatial Computing space, with many more to come.