What we learned about Spatial Computing by visiting Niantic, the #1 mobile Augmented Reality business

You know Pokémon Go, right? It has hundreds of millions of users and is the most widely used example of mobile AR out there. It was built by Niantic Labs, a company started by the team behind Google Earth.

So, to kick off the video series that our new Spatial Computing Agency, Infinite Retina, will do with business leaders in the space, we visited Niantic to talk with Ross Finman. He runs augmented reality research there, and his team is readying a new platform that will first see the light of day in “Harry Potter: Wizards Unite,” a new mobile game coming later this year.

  1. They are patient and are taking a data-over-flash approach. Niantic started life building a game (Ingress) that very few people knew about or played. I was familiar with it because, back when Google+ was just starting out and gaining some popularity, the game was popular with that crowd. People would go out and play it in the real world on their mobile phones. That gave Niantic data about the real world, like where the parks were and where real users wanted to play. That data helped it land the contract to do Pokémon Go, and it also let the company work through the early challenges of figuring out the user experience and scaling up its databases in relative obscurity, before millions of users showed up thanks to the Pokémon brand. Niantic now plans to leverage all of that into a bigger Harry Potter launch and, then, into a platform for developers to build their own world-scale apps on top of. Some in the Spatial Computing community don’t like talking about Niantic, seeing it as a weak example of AR/Spatial Computing. They are right, but Finman and his team don’t care about being the sexiest user of augmented reality; he just sees new opportunities to use technology to improve game dynamics and collect even more data to make the game better.
  2. This company has been thinking about world scale from its first breath. Many companies in the Spatial Computing space start out building marketing activations or immersive games that make sense in a living-room or booth context. Niantic started by thinking about a game you play while walking or moving around the real world. Listening to Finman, you hear that focus and core competency everywhere. They don’t just want to figure out what you are doing in your living room; they want to figure out the real world and how to apply that to making their world-scale platform and games better. But don’t underestimate where they are going from here. Last year Niantic bought Finman’s firm, Escher Reality, because it was building an AR Cloud, so look for lots of new AR features in its future games.
  3. It avoids privacy concerns by focusing on user utility. Both on and off camera, I peppered Finman with all sorts of questions about how Niantic plans to ingest data from its users. Soon we’ll be wearing cameras, microphones, 3D sensors, eye sensors, and more. Some companies are looking for ways to ingest everything, but that attitude will, I believe, eventually run into major trouble with not just regulators but a new, astute customer base (my nine-year-old son and I talk regularly about the data being collected by the devices and social networks in our world and how that data might be used for, or against, us). Finman says Niantic will be very measured and particular about the data it collects, doing so only to make the game more interesting. So users will get deep utility in trade for the data they give the system, and the system won’t overreach by grabbing data the user doesn’t expect it to take (or doing something unexpected with that data).
  4. It is planning for new technology, like 5G and augmented reality glasses. Finman, especially off camera, has deep knowledge of the technology that’s coming (he cofounded Escher Reality, which Niantic bought) and is building his systems to take advantage of 5G and Augmented Reality/Spatial Computing glasses, like Magic Leap or Hololens, when those get popular with consumers over the next few years. He knows that expectations around entertainment, and even things like robots and autonomous cars, will change, and that Niantic has to keep pushing the technology to show consumers mind-blowing new entertainment and other apps that keep it ahead of the rest of the industry.
  5. Understanding the real world is key. Where Finman lights up the most is when you talk about artificial intelligence and its use in understanding the real world. He came out of the robot/autonomous-car world, where technology has to be taught to “see” and understand the world. He is bringing that kind of computer science to bear on entertainment instead of on moving a robot around. He envisions systems that will “look around” the world, understand that you are in a forest, say, and present the right virtual beings and the right entertainment for that environment. He points out that Pokémon Go already does some of that: if you are near water, he says, the system will present different Pokémon characters than if you are, say, in a shopping mall. He says AR Cloud technology is about to make that much more important. He asked, “What if you could know you are on a sidewalk, vs. walking on grass in a park?” He posits that games could present much better experiences if they understood the world a lot better (a rough sketch of that kind of environment-aware logic follows this list).
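
To make that idea concrete, here is a minimal, hypothetical sketch of what environment-aware spawning could look like. The environment labels, creature names, and the choose_spawn() function are my own illustrative assumptions, not Niantic’s actual code or API:

```python
import random

# Hypothetical mapping from a recognized real-world context to the kinds
# of creatures a game might spawn there. Labels and names are made up
# for illustration; they are not Niantic's data.
SPAWN_TABLE = {
    "water": ["water_creature", "shore_creature"],
    "park_grass": ["grass_creature", "bug_creature"],
    "sidewalk": ["urban_creature"],
    "shopping_mall": ["urban_creature", "electric_creature"],
}

def choose_spawn(detected_context: str) -> str:
    """Pick a creature appropriate to the environment the device 'sees'.

    In a real system, detected_context would come from computer-vision /
    AR Cloud scene understanding; here it is just a plain string.
    """
    candidates = SPAWN_TABLE.get(detected_context, ["common_creature"])
    return random.choice(candidates)

# Example: a player standing near a lake vs. walking through a mall.
print(choose_spawn("water"))          # e.g. "water_creature"
print(choose_spawn("shopping_mall"))  # e.g. "electric_creature"
```

The interesting work, of course, is in producing that detected_context from sensor data in the first place, which is exactly the scene-understanding problem Finman describes.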

This is the start of a new video series called “Spatial Computing Catalyst” that Infinite Retina’s Irena Cronin, CEO, and I, Chief Strategy Officer, will be doing. We are also launching a new podcast series, which we will publish a few times a month.

Rather than focusing a lot on consumer stuff, like VR games (plenty of others out there cover that, and we retweet the best of it on our Twitter account), we will focus on the businesses underneath these products and on the ecosystem that helps build them, whether it’s the VCs, the PR people who help these companies build their stories, or the lawyers who help them get set up and protect their intellectual property.

Please let us know who you would like us to interview next by dropping us a line on Facebook or on Twitter.

One call to change my life, and another to redefine Spatial Computing

Irena Cronin called me last year.

That call led to the development of a new Spatial Computing Agency, Infinite Retina, which opens its doors today. I thought I’d give you a little detail behind what brought 20 people together, and invite the community to come together to define, for Wikipedia, what Spatial Computing is (Magic Leap has been popularizing the term, but we think it is bigger than just their form of augmented reality glasses).

Back to the call and what has happened since.

We had stayed in touch because we both are passionate about the same things: helping entrepreneurs grow their businesses and seeing Spatial Computing make life better for all of us. Spatial Computing is computing we, or machines, move around in. More on the definition in a bit.

See, eight years ago Peter Meier, then CTO of Metaio, promised me monsters on the sides of buildings in the video I embedded here. Today he works inside Apple on its augmented reality team.

For me, it is personal. My son, Milan, is autistic, and has trouble communicating, reading social cues that I take for granted, and doing other tasks.

I see how a pair of Spatial Computing glasses will give him, and really all of us, super powers.

It wasn’t the first time I had seen these technologies demonstrated.

Yelp, back in 2009, added an augmented reality feature after I said it was working on one. The company claims it wasn’t, but an intern there built the feature after I came up with the idea. Before that, I had seen various augmented reality and virtual reality projects come to life, both in the military and in R&D labs.

Anyway, back to Irena’s call. She floated the idea of me working on a book with her, aimed at helping businesses. I answered, “I think there’s a lot more than a book here. Want to build a business together?” I had spent the past year doing homework on the Spatial Computing industry, visiting tons of companies and building new databases of the industry from social media and elsewhere, and I saw a major new need coming that would require more than a book: it would require a team of people helping companies figure out their journeys as they build new technologies for products we haven’t seen yet (last week Microsoft launched a new Hololens, which shows that development is active and interest is going up).

After that call, the idea for a company quickly grew. Which leads us to:

  • Our team: seven other people who inspire me every day. On our podcasts, newsletters, and videos, you’ll eventually meet them and see why I am humbled to work beside them.
  • Our advisors. 14 people who each bring something to the table, from being one of Steve Jobs’ key people who brought us the iPhone, to innovators in optics and sensors, to AR Cloud pioneers, to business experts.

Here’s the core team at our first meeting, which we held in Apple’s Palo Alto store. As an aside, our name, “Infinite Retina,” comes, in part, from Apple’s name for high-resolution screens, “Retina displays.” Since Spatial Computing gives you all the virtualized high-resolution monitors you want, we thought of a world of “Infinite Retina.”

Today we turned on the InfiniteRetina.com website, shipped a new newsletter (more on that in a second), came up with a menu of services, recorded a podcast, and released two video interviews, including one with the head of augmented reality research at Niantic, the folks behind Pokémon Go. They represent our new research, where we dig into the business side of Spatial Computing and the companies building products and services using Spatial Computing technologies.

You’ve never seen me really dig into the business life of companies before, but the press covering Spatial Computing isn’t focusing much on that yet, and our thesis is that many new companies will emerge over the next five years and that they will need help in seven areas (internally we call them “the Seven Voxels,” but you can think of them as business imperatives): getting funding, gaining attention, finding customers, winning talent, choosing and/or building infrastructure, guiding culture, and handling legal.

Which gets me to a question we have for the community.

One of the things our team talks about a lot is how we define Spatial Computing. While the term is not a new one, it hasn’t been widely used so far; there isn’t even a Wikipedia page for it yet.

In our first newsletter, we lay out our definition of Spatial Computing. We think it’s broader than just augmented reality or mixed reality or virtual reality, or even, all three put together. Spatial Computing technologies are being used in ovens, in cars, in drones, on factory lines, and lots of other places, which we’ve been tracking on our Twitter feed as we retweet the most interesting tweets from people and companies in this space.

We want to know what you think about our definition. How would you improve it or change it? Feel free to tweet about it, or send us a note on our Facebook or LinkedIn pages, and we’ll aggregate and publish the best feedback. My friend Reuben Steiger did most of the homework for us in his Ultimate Guide for XR evangelists.

To wrap it up: I’m so excited that one phone call has already turned into so much, with much more on the way. Our dream of a new, more personal computing system that we can use all day instead of phones, laptops, TVs, and the rest is still probably many years away, but we can already see about $50 billion in investments from companies like Apple, Facebook, Google, Magic Leap, Huawei, and many others, and when this wave comes in we predict it will create many new billion-dollar companies. We’re here to help.

We are working on a set of corporate values, and chief among them is that a rising tide lifts all boats. We’re simply offering to be helpful to everyone who cares about Spatial Computing and all the magic it can bring.

I’m honored to be Chief Strategy Officer at this new company and I’m here to answer your calls: robert@infiniteretina.com or +1-425-205-1921.




The mixed messages of Microsoft’s Hololens2: very few corporate use cases and lots of limitations

I’m a bit bothered by the overselling of mixed reality, or spatial computing, at least in the short term (long term, my kids’ lives will be dramatically improved by these technologies, as we all can see, but that might not happen for quite a few years). Notice that Microsoft says its Hololens2 is for enterprise uses only, yet to demonstrate it, it had a piano on stage. And that’s just the start of the mixed messages I saw.

Microsoft is still overselling the technology. Why? Well, it demos amazingly well and positions Microsoft as an amazingly cool, futuristic company, even though I bet Microsoft will only sell a few tens of thousands of these, just like the original Hololens.

Most of that oversell, or mixed message, comes from the “God view” in its on-stage presentations: you get to see every virtual object on stage. But when you actually put one on, you realize you were sold a bill of goods. You can’t see that view in the Hololens, just a small viewport.

Even that is oversold. “Greatly Increased Field of View,” the Hololens website promises. That’s like saying having two pennies today is greatly increased wealth when you only had one yesterday.

Thanks to my lot in life I’ve gotten to travel to see a lot of jobs. Just last summer I visited the factories of Boeing, Tesla, Ford, and Louisville Slugger.

Where will we see Hololens2 being used? Not in many places. So where will it show up?

In corporate customer experience centers (every company has them; they are multi-million-dollar efforts to look impressive). Why? Because, just as it is doing for Microsoft, Hololens2 can be used to make a company look cool. That is the magic of augmented reality. These centers also hide Hololens’ weakness: you can’t really wear it for hours. Or, if you try, you really don’t want to.

But will the line worker at Ford wear them? Hell no. Too heavy. The optics cause eye strain and block too much of the real world. They are too bulky. A worker putting your dash in place inside a Tesla or a Ford would constantly be banging his or her head, and the device, against the vehicle. And even if it were being used for, say, training or tracking of parts, that use case requires millions of dollars of custom software to be written, software that the current development teams building flat, 2D UIs in, say, Visual Studio and C# simply don’t have the skills to build (you need people with video-game-engine experience to do that).

To see what I’m saying, look at all the videos on the Hololens YouTube channel, or on the main site. They look impressive until you look closer. They are all visualization scenarios that only show things appropriate for looking at for a few minutes, at most. Even the surgery video is oversold. Let’s say you are a surgeon doing open-heart surgery and you’ll be working on a patient for more than an hour. Do you really want a device on your head that weighs more than a pound? No. And surgeons tell me that if the device gets itchy, or you reach up to adjust it, that move will cost about $1,500, because you’ll need to rescrub your hands, and that’s roughly what it costs when you touch something that’s not sterile (due to the cost of everyone else waiting the few minutes for you to go through your scrub procedure).

But it gets worse. If we are going to do real work, rather than just look at amazing visualizations, we need real tools. Note what they demonstrated in the user experience demos: a few sliders and a few buttons. There wasn’t any real work being done, like what you and I do on our computers a lot, say, in a CAD tool (note that Autodesk wasn’t included in any of these demos; AutoCAD’s leaders told me they were burned by the Hololens team before and are skeptical of Microsoft’s efforts) or even video editing, which would be a great thing to do in spatial computing. Why not? Because finger controls, even though they are much better in Hololens2 than in the original version, aren’t precise enough to be productive. You are better off using a pen on a Wacom tablet, a pen on an iPad screen, or a mouse.

Spatial computing glasses do have major promise. Because you can see through them, they could be used in customer service or nursing, for instance, but Hololens2 simply can’t deliver on those use cases: there is the social problem of wearing a big, ugly, black thing on your face, and the headset is heavy enough that wearing it for hours will hurt your neck.

Let’s talk about optics. Note what Microsoft didn’t talk about: multi-focal-point optics. Why not? My friends say these optics don’t do that, and the operating system for Hololens doesn’t yet have support for such a thing. Magic Leap does, and that was the core reason investors gave Magic Leap $2 billion. Why is this important? If you want to really work on virtual items, you must be able to get close. My Hololens only lets me get a foot or so away from items, and even then, if I look at items that close for long, my eyes get tired because the images aren’t refocused like a real item in your hand. And I’m told by optics experts that the accommodation and vergence handling isn’t nearly as good as in Magic Leap. (Accommodation is the technical term for how your eye changes shape and refocuses on things close to it, and vergence is the term for how your eyes cross when looking at something close to them.)
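
To make the accommodation/vergence point concrete, here is a rough back-of-the-envelope sketch of the mismatch. It assumes a single fixed focal plane at about 2 meters (my assumption for a single-focal-plane headset, not a figure from this post) and a virtual object held at arm’s length, about 0.3 meters away:

```latex
% Back-of-the-envelope vergence-accommodation mismatch.
% Assumptions (not from this post): one fixed focal plane at
% d_display = 2 m and a virtual object at d_object = 0.3 m.
\[
A_{\text{object}} = \frac{1}{d_{\text{object}}} = \frac{1}{0.3\,\mathrm{m}} \approx 3.3\,\mathrm{D},
\qquad
A_{\text{display}} = \frac{1}{d_{\text{display}}} = \frac{1}{2\,\mathrm{m}} = 0.5\,\mathrm{D}
\]
\[
\Delta A = A_{\text{object}} - A_{\text{display}} \approx 2.8\,\mathrm{D}
\]
% Vergence follows the virtual object at 0.3 m while accommodation is
% pulled toward the fixed focal plane; a mismatch of several diopters is
% what makes close-up work tiring on single-focal-plane optics.
```

With true multi-focal optics, the display’s focal distance can track the object you are looking at, driving that difference toward zero.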

Think about curling up in bed with your phone or tablet and how close that gets to your eyes. Hololens2 can’t do anything like that. Now think about a worker putting electrical systems into, say, a Ford truck. I’ve watched them work. They often are within six inches of their work to make sure things get snapped in properly, and they are often working in such tight quarters, like underneath a dash, that they couldn’t stay farther away even if they wanted to.

And the optics still aren’t bright enough, or sharp enough, to comfortably read, say, Tweetdeck or the New York Times in bright sunlight. They might be good enough for viewing CAD files laid over a building site, but, again, that doesn’t require much hard focusing on text or much real manipulation. In the video demos they are pretty careful to stay away from that kind of work and stick to the “look at the cool visuals” kind of demo.

After the demos were done on Sunday, I started hearing that Microsoft is going to be careful about who it sells these to, making sure that buyers actually have a real use case and won’t try to use them for something outside a small set of use cases. I don’t yet know if that’s true, but note that Microsoft isn’t setting expectations on shipping dates on the website yet.

At least Microsoft has been pretty consistent in saying these aren’t for consumers, although I wish it had followed through and demonstrated on some enterprise machines, or designs, instead of having a little virtual angel flying around the stage and a piano. That sends the market mixed messages and sets expectations that Hololens simply can’t meet yet.

That said, the real battle over the future of computing has barely even begun, which is what I said in an analysis of what it means for Apple and Magic Leap, on Sunday.

Already, since then, Rony Abovitz, the founder of Magic Leap, has been promising a new pair of glasses with mind-blowing optics and much better use cases next year.

Until then, expect the Hololens to be used on limited corporate projects: things that are fun for the CEO or CTO to demo, but aren’t really used much to do real work. Hopefully that changes with future versions, but we need much better optics, much lighter weight, and much more software to cover a wider range of use cases and make it easier for 2D software teams to move their old apps into the spatial computing world. I don’t expect all these problems to be solved for many years, do you?

Until then, I wish Microsoft would be more realistic and stop leaning on the “God view” so much in its demos, so people get a real feel for what it is like to use these. It oversells the technology, and that will hurt Microsoft’s credibility with the people it needs most: the evangelists who will need to help companies put these devices into use, and the CTOs and developers who will be asked to build projects with them.

And Microsoft should start giving us a road map for how much effort it is going to put into Hololens in the future. We need a lot more tools to help turn our factories into spatial computing-ready workplaces. We need the global mesh abilities Microsoft hinted at, but it doesn’t have a strong vision like the one Magic Leap lays out whenever it talks. There was no discussion of privacy, for instance. That punt is acceptable given the “these are enterprise only” framing (we all know we don’t have privacy at work), but real workloads require much more than what we’ve been told here.

One real positive step Microsoft made for workers: the flip-up screen. That shows Microsoft understands that these devices are only usable for short periods of time, after which you want to flip the screen up to go back to other computing devices, talk with other people, or use other tools.

Can Magic Leap, or others, take advantage of any of these mixed messages? Possibly, but Microsoft has such a strong lock on most of the computing used in enterprises that I’m skeptical. But Magic Leap, by having a more consistent vision, one that’s free of having to kowtow to 2D customers, can really set itself apart for consumers and creators. Microsoft has left that door open so far. Will it shut it next year and stop sending the mixed messages?