It used to be that motion capture was so expensive that only big movie studios could afford to do it. No more, we learned in New York.
Irena Cronin, CEO of Infinite Retina, and I are talking with people all over the world who are building Spatial Computing companies. We are doing research for a book, to be published in 2020, about the seven industries that Spatial Computing will disrupt in the 2020s. Entertainment is one of them, and this video shows the disruption is well underway.
This week we were at Betaworks Studios to meet companies who are building synthetic characters and worlds for our computing of the future. Here, Gavan Gravesen, founder and CEO of RADiCAL, and Anand Ravipati, product manager, show us their latest work, Motion by RADiCAL, which senses humans in a new way: by using AI to extract motion from videos shot on a standard smartphone camera.
Note that Gravesen says studios like Disney won't be able to keep up with the 25 million content creators who are pumping content into YouTube and other places (he sees that number doubling between now and 2025).
This is significant. Why? They take a human captured on a standard 2D camera and turn that footage into a moving 3D skeleton that can be placed and animated in a digital world.
A few minutes after you upload a video of yourself, or of someone you captured dancing, to RADiCAL's servers, you get back a 3D model. They predict that by the end of 2020, and maybe by the end of 2019, the entire process will run on a modern smartphone without uploading anything.
What this means is that we are about to be able to capture people with the 2D camera on a smartphone and get a virtual representation to use in a variety of ways, such as bringing that model into a 3D tool like Unity. They are also exploring a bunch of self-contained consumer apps that do fun things with these 3D versions of ourselves.
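RADiCAL's actual API is not documented in this piece, so here is a hypothetical sketch of the upload-and-poll workflow described above. Every name in it (`process_on_server`, `upload_and_wait`, the joint dictionary) is an assumption for illustration; the stub simply stands in for the cloud AI that converts 2D video into 3D joint data.

```python
import time

def process_on_server(video_bytes):
    """Stub standing in for the cloud AI: pretend it has turned a 2D
    video into per-frame 3D joint positions (a hypothetical format)."""
    return {"status": "done",
            "frames": [{"hip": (0.0, 1.0, 0.0), "head": (0.0, 1.7, 0.0)}]}

def upload_and_wait(video_bytes, poll_seconds=0.01, max_polls=10):
    """Upload the clip, then poll until the 3D skeleton comes back."""
    job = process_on_server(video_bytes)  # in reality: HTTP upload + job id
    for _ in range(max_polls):
        if job["status"] == "done":
            return job["frames"]          # per-frame 3D joint positions
        time.sleep(poll_seconds)          # wait before checking again
    raise TimeoutError("server did not finish in time")

skeleton = upload_and_wait(b"raw-2d-video-bytes")
print(len(skeleton), skeleton[0]["head"])
```

The returned joint positions could then be mapped onto a rig in a tool like Unity; the on-device version RADiCAL predicts would replace the upload-and-poll step with a local call.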
Breakthroughs like this are happening all over the Spatial Computing space, with many more to come.