The Apple Privacy Wall

The Apple Privacy Wall: a strategy analysis of how Apple will come at Oculus next year.

At Infinite Retina we will soon be sharing research on the Spatial Computing industry; this post gives you a little taste. Contact Irena Cronin or me if you need business strategy help to enter the Spatial Computing world.

Apple is clearly going after Facebook. That became obvious to everyone today as Apple announced a ton of new things, including new AR features for its messenger, and new privacy-focused features, at its Worldwide Developers Conference.

Irena and I are hearing that one of the devices, er, AR/VR head-mounted displays, Apple is considering launching next year could go directly after Oculus Quest and the rest of the VR space.

First, today we retweeted dozens of things from the Apple keynote over on our Twitter account, so if you have been away from the Internet today you can catch up on the Apple news, and reactions to it, there.

Since Oculus has such a head start, especially with its $400 Quest, how could Apple crush it?

We see several ways:

1. Go after VR as “unsafe.”
2. Go after Facebook as “privacy thief.”
3. Go after VR as “not good enough.”
4. Go after Oculus Quest as “not capable enough.”
5. Go after Oculus Quest as “not social enough.”

Let’s take these on one by one.

VR IS UNSAFE

I've seen this first hand. You can't really use it on a subway. Why? You can't see someone pickpocketing you. You can't use it at a party at your house. Why? You will hit people standing around (this happened to my dad; it left him with quite a bruise, and he was quite pissed since he didn't realize someone was playing VR and couldn't see him). You can't use it in the street. Why? You won't see a car or other danger coming toward you. Apple could push AR further, say that AR is the way to go, and expose VR as unsafe.

FACEBOOK IS A PRIVACY THIEF

In two years Spatial Computing, including Facebook's Oculus VR headsets (we hear Facebook is working on Spatial Computing glasses too, along with many others like Microsoft, with its HoloLens, and Magic Leap), will have very advanced 3D imaging technologies and eye sensors, along with cameras and microphones. Facebook has done an awful job of protecting our privacy, even proclaiming that privacy is dead (Facebook's lawyer said the same today). Same with Google.

Apple is building a new privacy wall that will surround all of its services and devices. Yeah, it is a metaphorical wall, but it builds a huge strategic moat around its products and services that many are finding quite attractive. You saw that today based on applause during Apple’s keynote when these features were announced. We see Apple as making moves toward keeping user data from escaping over that privacy wall, which increases trust in its services (and usage, hence profitability).

Today Apple announced that it is forcing developers to use its own sign-on feature. You know all those apps that make it easy to sign on by clicking a Facebook or Google button? Now they will be forced to add an Apple button, too, and Apple promises to protect your privacy in several ways that Facebook and Google don't, even generating random email addresses so that if a company starts spamming you, you can easily turn off that address.
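The forwarding-address idea is simple enough to sketch. Everything below is a hypothetical toy illustration of the concept of per-service, revocable aliases, not Apple's actual implementation (the domain, class, and method names are all made up):

```python
import secrets

class EmailRelay:
    """Toy model of a per-service email relay with revocable aliases."""

    def __init__(self, real_address: str):
        self.real_address = real_address
        self.aliases = {}  # alias -> service name, or None once revoked

    def alias_for(self, service: str) -> str:
        # Each service gets its own random, unguessable forwarding address.
        alias = f"{secrets.token_hex(8)}@relay.example.com"
        self.aliases[alias] = service
        return alias

    def revoke(self, alias: str) -> None:
        # Mail sent to a revoked alias is simply dropped.
        self.aliases[alias] = None

    def deliver(self, alias: str) -> bool:
        # Forward only if the alias exists and hasn't been revoked.
        return self.aliases.get(alias) is not None

relay = EmailRelay("me@example.com")
alias = relay.alias_for("spammy-app")
assert relay.deliver(alias)   # mail flows while the alias is live
relay.revoke(alias)
assert not relay.deliver(alias)  # spam stops the moment you turn it off
```

The key property is that the real address never leaves the relay, so revoking one alias cuts off exactly one spammer without touching anything else.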

I get thousands of promotional emails a day. Why? Because I have signed up for so many services over the years and because my email address has always been public. So I know what a daunting problem email marketing and ad tech are for people. This new feature is hugely interesting to me and I’m really not a private person at all. Most of the consumers and enterprise workers/execs we do research with are far more private than I am, and some demographics, like women or people of color have far deeper concerns about their privacy than I do. Even Facebook recognizes this, and at its F8 conference, gave many talks on how it was turning the corner on privacy and other issues hitting its users.

VR ISN’T GOOD ENOUGH OR SHARP ENOUGH

The resolution of VR isn't good enough: Oculus Quest's screens are 1440 x 1600 per eye, and its GPU, while absolutely magical if you look at what it can do, is underpowered for more advanced Spatial Computing uses. I can't use it to answer my emails, do my social media work, or edit videos. It simply isn't good enough and doesn't have enough applications yet, even though it is very cool.

Until Apple jumps into the market the Quest is a MUST HAVE product, and will easily be the product of the year. We don’t expect Apple to announce products until the second half of 2020, and it’s very probable that you won’t be able to buy those products until the first half of 2021. So Facebook has a year or so to figure out how it will deal with Apple’s arrival to the Spatial Computing world. It has a lot of work to do.

Watching videos inside a VR HMD, like the $400 Oculus Quest, is a great use case, but the screens aren’t sharp enough to feel like you are in a movie theater. It’s more like you are sitting very close to a nice TV screen and you can see individual pixels. Netflix or YouTube on the Quest has the same problem: really amazing to have a personal viewing device where a huge virtual screen is in front of you, but leaves you wanting because the resolution isn’t there yet.

We can't say what Apple is planning here because so many sources are giving us conflicting information, but Apple has some screens in development that follow what you hear here: https://youtu.be/el2IWYfEaCQ (Mark Spitzer worked at Google on Google Cardboard, and here he lays out how optics should work to make hyper-sharp displays. His talk really is eye opening, pardon the pun; we are shocked more people aren't watching it and discussing his conclusions).

VR IS NOT CAPABLE ENOUGH

Even after Facebook solves those two problems, which will prove daunting, Apple has a structural advantage: it can split Augmented Reality into two pieces, the phone and the glasses themselves. The compute, wireless, and battery go in the phone, which lets Apple keep the glasses much lighter than companies that have to put everything into the glasses. It also splits the cost in a way that hurts Facebook when it comes to price.

We are hearing Facebook will try to do that by putting the compute into the cloud, which suggests it will punt on many privacy issues. Apple, by contrast, is moving toward keeping data on your own devices, in encrypted form, so its privacy wall will be daunting as both a technical and a cultural challenge for Facebook (the cloud approach also really requires 5G to be widely available, something we don't expect for years). Facebook tried selling a phone a few years ago and failed, so it will be forced to find some other way to bring down the cost and weight of glasses while keeping the price affordable enough for its two billion users to consider.

Apple will have a huge advantage here because of the millions of apps that already run on its phones and tablets. Facebook has none of those running on Quest (and won't, especially since Facebook has locked down Quest pretty significantly and is putting up many barriers to independent developers who want to participate in its ecosystem; many developers who spent hundreds of thousands of dollars building VR experiences have been refused access to the Quest store, which means they are locked out and Quest users will never see their work). We can see Apple having many, many apps on day one, from Uber/Lyft to photo editing to spreadsheets, and many of the other apps we use on phones. Facebook will struggle to gain acceptance with developers until it proves it can sell tens of millions of units. Most industry insiders we talk with expect Quest to sell a few million in the next 15 months, where Apple could sell that many in the first weekend. That's the expectation, at least.

Apple gave a hint at how it would go at it, too, by developing exclusive content only available inside its privacy wall. Movies, TV shows, that kind of thing. I expect it will go a lot further by showing off integration with its health services, music services, credit card, news service, and more. Facebook's efforts in each of these are second rate, if they exist at all.

VR IS NOT SOCIAL ENOUGH

As we discussed before, you can't use Oculus on public transportation. Last week my sons used it on a plane and it was inconvenient at best, and they couldn't see staff members trying to get their attention. At worst this will lead to huge safety issues. AR could bring dramatically important innovations so that, even if your visual field is mostly covered by virtual things, the virtual layers could be "turned down" when the device senses a human is looking at you.

You also can't use VR while trying to talk with someone else; you need to take the device off. And Facebook hasn't done a good job yet of showing off multi-party environments you can play or work in. Yes, some exist, most notably Rec Room, but since so few people have VR, those use cases aren't all that useful for playing with people you know, so they aren't getting the hype they probably should.

Given these advantages, we believe Apple will be able to charge far more for its ecosystem than Facebook can, while consumers still think the Apple glasses aren't that expensive, especially in today's world where Apple just released a $7,000 monitor that isn't even 8K yet.

In our research we are finding people will pay $1,000 or more just to get devices from Apple, which our average consumers tell us they trust, rather than from Facebook, which they do not. They all use Facebook through Instagram and Messenger, but that usage pattern isn't the deep trust we put into our phones, which hold many people's health data, credit cards, and bank account info; very few people trust Facebook enough to turn over their credit cards to it.

Can Facebook convince users to try a $500 headset instead of buying an Apple one that might be $1,500 or more? We see that challenge as very daunting indeed because of the brand trust that Apple users have.

WHAT SHOULD ZUCK DO?

Now, this doesn't mean all is lost for Facebook. An advertising-supported approach will be much cheaper and will be attractive to companies that want to market to consumers. The combination of both will give Facebook opportunities to build experiences that will be attractive to many consumers. We are hearing it is developing a "metaverse" that could see significant buy-in from consumers and industry too.

The cost difference may be hard for consumers to discern, though, because Apple has the advantage of putting much of the compute down in the phone: most consumers won't think of the $1,000 to $1,500 they will pay for a new 5G iPhone (which we think will be required to drive the Apple glasses when they are announced in late 2020) as additive to glasses that will run $500 to $1,500. We can imagine, too, Apple bringing different glasses to market for different use cases. The glasses you would want to wear while running, for instance, could be quite different from the ones you use while playing games, watching movies, or working. The ecosystem Apple is building could, our research shows, run thousands of dollars once you throw in new AirPods, Watches, a new 5G iPhone, and new Glasses. Facebook and Google could come along with approaches that are far cheaper, subsidized by advertising dollars.

So, back to the privacy wall. It’s clear that Apple is building a privacy wall around its services. Inside the wall? Less data will be collected than outside the wall. Inside the wall? That data will be encrypted and won’t be shared with third parties outside the wall.

This will see Apple's services thrive in a world that cares deeply about privacy. We can imagine lots of consumers, for instance, going with Apple Music instead of Spotify because they understand Apple's stance on privacy and trust it not to share data with others. Spotify shares with lots of parties, as I'm reminded when I look at Discord, which is how our company runs internally, and see in its user interface that Irena is listening to Spotify. In other words, Spotify has shared data with third parties. We can see Apple taking a harsh stance against that kind of "over the privacy wall" sharing. Companies that want to do it will live outside Apple's privacy wall and will have to convince people that giving up their privacy brings enough benefits. Based on our research, many consumers simply aren't willing to take that risk.

So, what would we do if we were Mark Zuckerberg? Give people a reason to jump over the privacy wall. Develop content studios that will bring exclusive after exclusive. Give developers capabilities to add value to Spatial Computing that Apple will simply be unwilling to give, due to its privacy wall strategy. Make Facebook's systems easier, more fun to use, and more capable than Apple's. We can see many ways to do that which Apple will be culturally reluctant to pursue.

Also, make it very clear that there's a huge cost difference between Apple's privacy wall-enclosed services and devices and Facebook's advertising and transaction-supported business models. Apple's core DNA is that it won't do anything, or allow anything, inside the Apple Privacy Wall that it can't make at least 30% margins on. Which means if you are inside the Apple Privacy Wall you will be forced to use Apple's in-app payment system, with its 30% cut. Facebook, by developing its own currency, and by being far less greedy, could take something like what credit cards do, say 2 to 5% of each transaction. Consumers, and content developers who will want to sell tons of virtual items in, say, a metaverse, will eventually figure out that Apple's approach is expensive indeed, and many will jump over the privacy wall looking for other approaches.

Mark Zuckerberg should be heartened by Apple’s behavior with iPhone, actually. When it first came out it seemed like Apple would have a monopoly. But then Google came with Android, which was more open, less expensive, and it won most of the market share.

Zuckerberg might be able to stay ahead of Microsoft or Google as the world moves toward computing with glasses by building on the amazing experiences that just shipped on Oculus Quest, just as Android did against the iPhone.

We’ll see, and that’s what makes this industry so fun to watch and participate in.

What should your business do? Well, today, if you don't have an Oculus Quest and either a Magic Leap or a HoloLens, or haven't spent significant time in all three, your business is already behind, and that will be very noticeable next year as many new products, not just from Apple, come to market. Really, though, each business is individual and needs its own strategy, even if that strategy is to wait and see the market develop further before jumping in. We would love to help; drop us a line. I'm at robert @ infiniteretina.com

By 2022 the entire world will change and we see $50 billion worth of investment from both big and small tech companies coming to bear. Ignore this market at your peril. Apple isn’t. Its privacy wall is proof enough of that.

“But I don’t want to wear glasses”

I wrote this as a way to think through some of the issues coming in the Spatial Computing world in preparation for a book coming in 2020 that Irena Cronin and I are writing, but we decided not to run this in the book, so I am running it here instead.

You will be fired if you don’t wear fourth paradigm Spatial Computing devices.

Those of us who have to wear older-style corrective lenses, as humans have for hundreds of years, simply to see already get that wearing something on your face makes life easier, safer, and even more enjoyable. We get that we couldn't even keep our jobs if we didn't wear glasses.

These corrective lenses in the form of glasses turn the world from blurry to sharp. From unreadable to readable. From headache inducing to beautiful. Yes, those of us who wear glasses would rather not wear them either, if that was an option, but it isn’t. Without them life would simply suck and, yes, we’ve looked at all alternatives, from surgery to contacts. Glasses are simply better for many of us who have eyes that need optical correction.

People who wear glasses are already augmenting the world around them. Without them we can't see the words on the screen in front of us as we type. Every time we put on our glasses, the real world gets noticeably better. We can't imagine living life without our glasses. Soon we won't be able to imagine living life without devices that not only correct our vision but add new virtual and augmented reality capabilities to it.

They won't stop at augmenting our eyes, either. Bose already has wearable devices (they look like glasses, but don't really do much for your eyes; it's your ears they focus on) that augment our hearing, and there are tons of haptic devices being developed that augment our sense of touch.

In labs we’ve been visiting we’ve seen Spatial Computing glasses that are coming that will augment the world around even more. We call them fourth paradigm devices because they are the fourth paradigm that the personal computing era has brought us. More on the four paradigms in a bit.

We have already seen devices like Google Glass that gave us a taste of how they would improve the world, even while introducing new problems, the deepest of which was that they didn't provide enough benefit to offset the real social costs of wearing a computing device and the weight a computer puts on your face. The small monitor Google Glass showed us is tiny and unimpressive compared to the devices coming to market in 2020.

The fact that we can now state you will be fired for not wearing these new devices introduces yet more social problems: there will be those who resist, and who resent those who wear them. New cultural divides will appear, and new privacy challenges, amongst others. That said, those who resist will be putting themselves at a huge disadvantage to those who do wear the glasses. Given enough time we can see pretty much everyone jumping on board, but the Amish prove that some people will resist new technologies. Most of you, though, will probably not choose to live like the Amish do, driving horse and buggy and resisting smartphones and all the benefits computing brings.

There are new optics coming. The ones we are seeing in labs can literally remove light from the air before those photons arrive at your eyes. That will be useful for turning the sky dark, for instance, or for replacing objects or people in front of you with virtual replacements and avatars.

These new optics and wearable screens, coming by 2022, along with the smaller computers that will drive them, will bring us all into a 3D world that few can imagine today. This shift in technology and how we view the world will be the biggest shift in technology in human history and will change literally everything we know about the way the real world works and the industries humans run.

We know we won't be able to get jobs without them, just as an Uber or Lyft driver would be unable to get a job if she refused to carry a modern smartphone.

If we didn't wear our corrective lenses there is no way we could write a book, drive a car, see notes up on screens, or even throw a ball with our kids. That is why we answer that if you don't wear the Spatial Computing glasses of the future you will lose your job and your ability to participate in modern society. Those who wear the glasses will simply outperform those who refuse, and the difference in performance will be as huge as the difference we would see if we stopped wearing the corrective lenses we wear today.

Yes, we hear those who say “but the real world is awesome enough, after all how can you make something like Yosemite even more beautiful?” Along with the cruder rejection of glasses “I hate glasses and will never wear them.” Those who wear glasses already know both statements are ridiculous, because we are already seeing a much more beautiful world with our old glasses and we already are forced to wear them. Even in beautiful places like Yosemite National Park.

We agree that the real world is pretty beautiful, and humans have evolved over millions of years to sense it through the analog waves of photons that reflect off the surfaces around us. Our eyes are remarkable sensors that haven't yet been matched by even the most expensive cameras, although cameras are about to get much better and smaller (LG's team that builds cameras for iPhones and Teslas told us that in a few years they will be). Our eyes can see in low light, in high contrast, in rapidly changing conditions, and in the glare of sunlight on white snow, all seemingly without much challenge, while smartphone cameras still can't match the light-collecting abilities of our eyes and retinas, which turn that light into signals our brains can interpret. We are jealous of those whose eyes can see this world perfectly, but our wearing glasses is soon to turn the tables: those of you with perfect eyes will have to start wearing glasses, and many will resist. This resistance will give those of us who already wear glasses a major advantage: we are ready for the move to 3D computing, er, Spatial Computing, already.

The difference between the real world, which we sense by collecting analog waves of photons and sound, and the digital world, is quickly shrinking and this is significant.

To understand this gap, let’s go back to rock star Neil Young’s studio. Listening to his music recorded in analog format is the most enjoyable. Why? It’s a smooth wave that he and his audio engineer captured on tape. You know this to be true if you have ever listened to music on a vinyl record. That’s analog. A small needle rides in a groove in the vinyl as it spins around, moving the needle back and forth as smoothly as the singer or musician sang or played. When you compare to what you hear on your phone, which arrives as streams of numbers, there is a slight difference. The vinyl is more enjoyable. You can hear more detail in the highs.

So why didn't we stick with vinyl if it's so much better to listen to? Well, some do, but most of us use Spotify, Apple Music, or YouTube to listen to music now. Why? Digital is a lot more convenient. Vinyl can't work while you commute to work in a subway or a car, or fly around the world in a plane. For the music industry, distributing music on vinyl is much more difficult. On Spotify and other services there are millions of songs to search and instantly play. To do that with vinyl would require a huge library of albums, something even radio stations can't afford anymore, and then you'd have to either be a skilled librarian or pay someone to organize all those albums. Plus, on some of the digital services these songs come with lyrics, and on others they come with music videos that are impossible to encode using analog techniques.

Digital wins, except in the rarest of cases. Analog is too inconvenient and too expensive to distribute or listen to. Even Young admits this and brought a device to market, the Pono, to let us listen to much higher-quality digital music. We were in his studio and came to understand that the gap between digital and analog was shrinking because of devices like that, and because computers and wireless networks were getting so fast that we could listen to much higher-resolution digital music that got close to the original analog master.
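The sense in which higher-resolution digital "closes the gap" can be sketched with a little arithmetic: each extra bit of depth halves the worst-case rounding error between the analog wave and its digital samples. A minimal illustration (the function names are ours, and this is a toy model of quantization, not how any particular format or the Pono actually encodes audio):

```python
import math

def quantize(sample: float, bits: int) -> float:
    """Round a sample in [-1, 1] to the nearest representable digital level."""
    levels = 2 ** (bits - 1)
    return round(sample * levels) / levels

def max_error(bits: int, n: int = 1000) -> float:
    """Worst-case rounding error over one cycle of a pure sine wave."""
    return max(
        abs(s - quantize(s, bits))
        for s in (math.sin(2 * math.pi * i / n) for i in range(n))
    )

# 16-bit is CD quality; 24-bit is the "hi-res" depth players like Pono push.
# Each extra bit halves the worst-case error, so 24-bit sits roughly 256x
# closer to the original analog wave than 16-bit does.
print(max_error(16), max_error(24))
```

Sample rate matters too (how often the wave is measured, not just how finely), but the same logic applies: more data per second means a smaller gap to the analog master.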

A similar win will happen as we move to the augmented world that Spatial Computing brings. This augmented world will arrive via several different approaches. The first, virtual reality (AKA "VR"), shows you a completely digital, er, virtual, world that you can move around in. If you haven't experienced VR yet, you should commit to getting into a VR headset for a few hours; those who haven't will have a tough time understanding why we are so adamant that the world is about to deeply change. Already there is a spectrum of devices that let you see everything from a completely virtual world (VR) to virtual items put on top of, or replacing, the real world (AR, Augmented Reality). Some devices you wear will be more focused on letting you see some of the real world. Others will be more focused on higher resolution and wrap-around screens for enjoying a mostly virtual one.

This is why, instead of calling these things VR or AR glasses, we just lump them all into Spatial Computing, which is computing that you, a robot, or a virtual being can move around in. Spatial Computing includes all the devices that do VR and AR, along with other things like autonomous cars (they move around the real world and use very similar computer technology to see, process, and navigate around the real world as our VR or AR devices do) and cameras that help computers “see” the real world.

Spatial Computing devices were developed for military purposes over decades (F-35 pilots have to wear them simply to fly that plane), and they bring a dramatically different kind of user experience than, say, Microsoft Windows, which was designed for flat screens on a desk or laptop, or even smartphones running Android or iOS, which are also designed for flat screens you can touch.

In Spatial Computing you can move around a virtual world and with controllers you can touch, manipulate, shoot, and otherwise use your hands in a way that is closer to the real world than other computing types. It also lets you look around a 3D world that completely surrounds you. Some people denigrated early efforts, like the VR headsets that HTC and Oculus produced, as a fad, comparing it to the Segway or earlier 3DTV attempts. These denigrations show they really didn’t understand how deeply VR technology changes how we interact with computers. We saw that in our houses as kids came over, found our VR headset and controllers, and without instruction were able to start up experiences like Job Simulator, and do things like move files that looked like real-world files from file cabinet to cabinet.

Doing similar things on our Apple IIs back in the late 1970s took hours of learning commands to type into the computer. In VR a four-year-old did it within seconds, without any instructions. These observations, both anecdotal and from research labs, show why Spatial Computing will eventually bring computing to billions of new people who have never considered owning a smartphone.

If you visit Jeremy Bailenson's VR lab at Stanford University, like Mark Zuckerberg did before acquiring Oculus, an early consumer VR producer, you'll learn a lot about the magic VR brings to bear.

His lab is a simple-looking room with some VR headsets hanging on the wall. Yeah, he has an expensive audio system built into the floor, and he does use that to great effect in some of his research. On the day we visited he asked us to walk across a virtual plank while wearing one of those headsets. We had been expecting this, since we had seen the same demo on YouTube and watched as people freaked out, unable to walk across the plank. Their minds had fooled them into thinking they were on a plank above a big abyss. Their eyes had fooled them into thinking they might fall to their deaths.

We were going to be braver and smarter than that.

So, when he started us out on the plank, we thought we would just remember we were walking on a conference room floor and that there was nothing to be afraid of. What we hadn't expected is that the floor would shake, thanks to that big audio system underneath. Our minds freaked out. Certain death seemed imminent even though our rational brains remembered it was just a regular floor we were walking on.

This demo, and others where he embodied us as children and as homeless people, shows us that VR is doing something to our brains that other mediums like TV, radio, or flat video screens can't even touch. He then explained how this embodiment works to fool our brains. When you are wearing a VR headset you are "immersed" in whatever the software developer built for you to experience. Your brain buys into this immersion and thinks it's the real world. The effect is so powerful that you instantly start flying like a bird if you look down and see your arms have become wings. This ability to embody other people, or animals, is powerful. We saw dozens of people fly like birds while experiencing one of Chris Milk's creations at the Sundance Film Festival in 2017. They all started flapping their arms and flying within half a second of being put into a bird's form.

In VR you will experience vertigo, or whatever fear of heights you have, the same way it's uncomfortable to look down from a cliff's edge in Yosemite. Your brain interprets the virtual and the analog the same way: as a threat to your safety. We have seen people throw off VR headsets because what they are viewing is so uncomfortable. We tell you this not to scare you off of trying VR but to explain its power to so completely fool your brain into thinking what it sees is real, even when it's not real at all.

For the past few years Bailenson and his students have been doing research around this embodiment effect, studying whether it can lead to greater empathy for other people's experiences and even for the environment itself. His team found that it does, indeed, have a strong empathy-forming effect. People who were taken on a virtual scuba dive to see how carbon emissions are affecting the oceans came away with changed opinions, and in other research, people who were put into the viewpoint of a black child changed their understanding of racism.

Bailenson also told us about some of the other magic of Spatial Computing. For one, it can so completely fool the brain that it can take away pain, even amongst those who suffer from horrible burns on their skin. He told us about research into pain that VR pioneer Tom Furness and his teams at the University of Washington had done. Furness stumbled onto this discovery after dentists started buying early Spatial Computing headsets, having found that patients didn't feel as much pain when wearing them. He and his students did more research and found that Spatial Computing, in particular VR, by distracting your brain away from pain, is better than morphine at taking away the pain burn victims feel. They do this by having patients play a snowball-throwing game on a virtual snowfield; the research is on the Web at http://vrpain.com

The 3D thinking that Spatial Computing brings to us lets us see our world and ourselves quite differently and its ability to change our brains will be built upon by the fourth paradigm glasses of the future.

So, how will you get fired if you refuse to put the glasses on? Well, we can imagine one morning your boss brings in a set, having bought a data visualization tool to help your team walk around factory floors in a new way (BadVR's Suzanne Borders builds such tools, which we discussed in the first chapter). Won't you look ridiculous if everyone else puts the glasses on and walks around the factory floor with them while you stand there telling your team "I refuse to wear those glasses"? If you don't soon join in you will be at a significant disadvantage to everyone else. We imagine that within a week your boss will sit you down and say "hey, if you aren't willing to put on the glasses, you can no longer work here." It has happened before: can you imagine someone surviving long in a modern company by saying "I refuse to touch those computer things"?

Capturing humans to make a synthetic world: converting 2D videos of humans to 3D with RADiCAL

It used to be that motion capture was so expensive that only big movie studios could afford to do it. No more, we learned in New York.

Irena Cronin, CEO of Infinite Retina, and I are talking with people all over the world who are building Spatial Computing companies, doing research for a book, to be published in 2020, about the seven industries that will be disrupted by Spatial Computing in the 2020s. Entertainment is one of them, and this video shows the disruption is well underway.

This week we were at Betaworks Studios to meet companies who are building synthetic characters and worlds for the computing of the future. Here Gavan Gravesen, founder and CEO of RADiCAL, and Anand Ravipati, product manager, show us their latest work, Motion by RADiCAL, which senses humans in a new way: by using AI on video from a standard smartphone camera.

Note that Gravesen says studios like Disney won't be able to keep up with the 25 million content creators pumping content into YouTube and other places (he sees that number doubling between now and 2025).

This is significant. Why? They take a human captured on a standard 2D camera and turn that into a moving 3D skeleton that can be augmented in a digital world.

A few minutes after you upload a video of yourself, or of someone you captured dancing, to RADiCAL's servers, you get back a 3D model. By the end of 2020, and maybe by the end of 2019, they predict that entire process will run on a modern smartphone without uploading anything.

What this means is we are about to be able to capture people with a 2D camera on a smartphone and get a virtual representation to use in a variety of ways, such as bringing that model into a 3D tool like Unity. They also are exploring a bunch of self-contained consumer apps to do fun things with these 3D versions of ourselves.

Breakthroughs, like this, are happening all over the Spatial Computing space, with many more to come.