The Apple Table is Set

The big meal is about to come. We’ve been waiting for years for Apple to reveal its mixed reality products, including visors and glasses. We’ve seen the potential coming in other products like the Oculus Quest, the Magic Leap 1, and Microsoft’s HoloLens. For years we’ve dreamed about an augmented world. Steve Jobs called computers “bicycles for the mind.”

Yesterday Apple announced a bunch of things to developers, along with a few of these “human helpers.” Techmeme has all the reports here:

The most interesting was that, as of yesterday, Apple Music is now available in Spatial Audio. In a way, the headphones that enable that feature are the first products built on Apple’s new philosophy: “how many ways can we improve lives by including more AI and 3D visualization in products?”

Apple knows that great audio is the foundation of great experiences in visual apps, like video games, entertainment, virtual shopping, education, and concerts. So it makes sense to upgrade all audio, which Apple is in the middle of doing. Another major upgrade comes next year when audio gets locked to the real world.

Apple is confusing people with all the new audio terms. Lossless and Spatial Audio are two I hear a lot, but Apple hasn’t been clear about why we want either, or both. By this time next year it’ll be clear, or, rather, it won’t matter. Most new media will be available in the better of the two formats, Spatial Audio. That format gives you infinite surround sound and, coming next year, it will be locked to the real world. Really these announcements are about moving us toward the experiential world.

When I say “experiential world” what do I mean? Well, going to a concert or a sporting event is experiential. You experience these things by being immersed in the event you are attending. What Apple is heading toward is shipping a “Holodeck” that will let you attend virtual concerts. Or go virtual shopping. Or attend a virtual school. Among other things. 

Apple will, before the end of the year, I hear, announce two products: a brand new iPod that we haven’t yet seen, and a new “Holodeck,” which is basically a high-end headphone-and-visor combination for experiencing mixed reality.

That said, I was expecting a little more about the 3D map. The fact that Apple showed us a new 3D map but didn’t give developers a bunch of new capabilities is interesting to note. It tells me Apple will be more muted than I was expecting, and signals a “View-Master” product approach, where the announcement of the product might not need, or have an affordance for, a lot of third-party apps. 

Look at Apple Music. To me that’s where Apple is setting the strongest tone about what is to come. It just upgraded a good chunk of its catalog to support Spatial Audio. No external developers needed. In fact, most people who work at music companies have no idea what Apple just did to the music industry. Let’s put it this way: I just cancelled my Tidal and Spotify accounts. 

Because of this trend toward experiential services and away from tons of new developer efforts, Apple has continued the course set by Steve Jobs. Apple knows the consumer electronics industry better than anyone. The iPhone was launched directly against the Consumer Electronics Show, held every January in Las Vegas. That wasn’t an accident.

Anyway, now that Apple is coming after owning your home, I’ll turn my attention to the business impact of these moves, both on its competitors and on developers who rely on Apple’s ecosystems for their business.

The “Holodeck” (there’s a rumor that Apple calls it “Apple View,” but I’ll call it the Holodeck until Apple officially announces a name) will be aimed at our family/living-room entertainment context. I’m not expecting people to wear it much outside. I’m counseling entrepreneurs to focus on family time: playing games, watching TV/movies, reading books, listening to music, etc. Entrepreneurs who have a strong showing for what people will do in their family rooms have a shot at building significant businesses.

Entrepreneurs I’m working with are finding it challenging to raise money at the moment. That will change over the next few months, particularly after Apple announces. This isn’t the first time investors will change their attitude. Back when AltspaceVR started up, the founders told me that no one would invest in it. Then Facebook bought Oculus, and several of those investors they had visited called back.

The same will happen here. So have an investment plan for before the Holodeck gets announced, and a different one for after. Same for your PR plan and your go-to-market plan. 

I’ve been talking to a bunch of companies that are skating to where the puck will be, to use a hockey metaphor. One entrepreneur, Robert Adams, stands out as a good example of someone building technology that Apple will need in the future. His business, Global edentity, builds a variety of sensors and AI that can see biomedical identity, among other things.

Now, that usually sounds dystopian, but Apple is already showing us how to thread that needle: stay focused on delivering products that help humans and minimize the consequences (and with this new genre of technology there are many). The things it showed us yesterday will improve our lives. Spatial Audio, for instance, makes music that sounds way better. That doesn’t sound dystopian now, does it? Nope.

Adams has shown me many new technologies that companies like Apple could add to future products and services. 

For instance, one of his inventions looks at the vascular and/or skeletal system of the user. This is something a camera can do that your human eye can’t. Can you see the blood flowing through people’s faces? Nope, but his system can. What does that lead to? All sorts of things, from earlier sensing of disease to much better identity systems. If Apple wanted to make users much more secure and healthier it could, but it would need to invest in companies like the one Adams is building before a big company such as Google or Apple buys them. This is what Apple has been doing for the past decade: buying a bunch of little companies you don’t know much about that are now becoming important parts of Apple’s tech stack (like the 3D sensor in the latest iPhones, which came from a small Israeli company, PrimeSense).

Another of his inventions aims to improve our health, and even further add identity capabilities, by adding a smell sensor to a future phone or glasses. Such a sensor and its associated AI could even smell that you are experiencing troubling emotions like anxiety, or even a health problem. I’ve seen dogs that can smell those, along with other things, on humans, and sensors that can “smell” are coming along. They say even a blind dog knows its owner! So why not your phone? Imagine the possibilities.

Anyway, I’m seeing an explosion in entrepreneurial activity as Apple, Facebook, Snap, Google, all spend billions of dollars readying products that will ship over the next few years. So, I’m expecting to talk with a lot more people like Adams. Am available at +1-425-205-1921. 

The New 3D Apple Arriving at WWDC

I have been talking with hundreds of people across the industry and have discovered that the changes coming to Apple are deeper than just a VR/AR headset. Way deeper. The changes already being worked into products represent tens of billions of dollars of investment (I hear Tim Cook has spent around $40 billion getting ready for this new Apple over the past decade). Not all of this will be announced at WWDC. There will be a few announcements over the next year, and really these changes are going to lead to many new products, services, and experiences that will come for decades. This is the fourth paradigm shift for Apple. Previous paradigm shifts brought us the personal computer, the graphical user interface, and the phone. All of which continue changing our lives today, even decades after they were introduced.

First thing you need to know is the changes coming are WAY deeper than just a VR/AR headset, which would be important enough on its own. So, what is Apple getting ready to announce over the next year?

  1. A new realtime 3D map of the entire world.
  2. A new, rebuilt Siri and a new search engine.
  3. A new mesh network built off of your iPhone that distributes AI workloads to M1 chips in your home.
  4. A new VR/AR headset that I call “the TV killer,” or “the HoloDeck,” with many devices planned for the next decade, from glasses to contact lenses. (Arrives in 2022, with glasses to follow sometime before 2025.)
  5. A new kind of programmatic surround sound (Spatial Audio is just the start).
  6. New 3D experiences for inside cars.
  7. Eventually a new car service itself.
  8. A new OS for wearable, on-face computers.
  9. A new set of tools for developers to build all of this.
  10. New 3D services for things like music, fitness, education, and more.
  11. A new, portable, gaming device that will interact with this 3D world.
  12. A new 3D audio service. Leaks about that appeared on 9to5Mac just today.
  13. A new kind of noise cancelling built on array microphones (an early version of this is just arriving in the new iMacs), and a new kind of video and audio sharing network, so that we will get all sorts of new walkie-talkie-like features.

Wrap these up and Apple is about to announce a major shift: from 2D screens, interfaces, and experiences, to true 3D ones. Eventually, I hear, Apple will even bring 3D to 2D monitors.

These changes have been planned all the way back to Steve Jobs. Tim Cook has been buying many small companies over the past decade and traveling the world, visiting factories no other American leader has visited, and quite a few startups too. Tim Cook knows the competition extremely well, and is about to jump years ahead of everyone, many developers outside of Apple who are familiar with its plans tell me.

Why am I so confident? Because patents are raining out of the sky. When I worked at Microsoft I got in trouble with the lawyers over patents, so they made me sleep with a lawyer (true story, that was my punishment). Over a weekend I learned a LOT about the patent system. A patent is a legal monopoly that lasts about 20 years from filing (it was 17 years from grant under the old rules). So, if hundreds of patents are being released, it tells you a major new set of products and services is coming. They wouldn’t release the patents if they weren’t ready to come to market with products. Doing so would be extremely stupid, the lawyer taught me, because a big company has only those years of monopoly to make money before everyone else can copy it and drop prices to the floor.

At first I was excited by rumors of glasses and VR/AR headsets. There are many. But developers who are building for Apple told me the biggest strategic shift is the move to build a new 3D map of the entire world. That map is the basis for the new Apple and is important for a range of new products, from robots that will do things around your house, to cars that will drive you around, to augmented and virtual reality products that will bring new kinds of games and experiences to all of us.

Disclaimer: Apple is my #2 position in my investments, after Tesla. I’m also invested in Apple competitors Qualcomm, Snap, Microsoft, and Amazon, and about 50 other companies in a diverse portfolio. That said, I’m very bullish on Apple and expect it to be a lot bigger by 2030 than it is today because of this strategic shift underway.

They also told me to look far deeper into what is currently shipping in iPhones, Macs, and other products. For instance, the M1 chip inside the new Macs Apple just released has about 17% of the chip dedicated to AI workloads. That part of the chip really hasn’t been used much yet; it’s sitting in my Mac right now doing nothing. Next to the M1 chip in my new Mac Mini (which is an amazing computer: fast, quiet, and fairly low cost at under $1,000) is an ultra-wideband (UWB) chip, which also is mostly unused, even if you just got some of those new AirTags, which have one of these chips inside too.

If you’ve read up to here you’ve gotten the basics. Now I’ll dig into each of these, give you my thoughts on what this all means for all of us, and what I expect at WWDC on June 7.

I am tracking all of this on a new Twitter account at: If you follow that you’ll see mostly retweets of reports. It’ll be very active around WWDC for sure.


A world mapped in real-time 3D

Twenty-seven months ago an Apple mapping car drove down my street. On the front were five LIDARs. As they spin, the lasers inside “cut up” the world into little virtual voxels (volumetric pixels; think of them as virtual cubes that you can see with augmented reality glasses, sort of like how Minecraft or, really, any video game works). I call them digital sugar cubes, to help people understand. Each virtual sugar cube (my street has millions of them now) has some unique things:

  1. A unique identifier. Probably an IP address, but might be proprietary to Apple. Either way, what this means is a computer somewhere else in the world can change the data on the cube, or show users what my street looks like. That part is already in Apple Maps. You can walk down my street virtually already, but that only hints at what will come next.
  2. A virtualized microphone. Huh? Yes, you soon will be able to talk to the street, to the trees, to, well, anything, including a Coke can or a car moving by.
  3. A virtualized speaker. So all that can talk back to you as you walk around with future devices.
  4. A virtualized display. So you can change the Coke can into something else, from a video screen to, well, something like a virtual animated character.
  5. A virtualized database. So developers can leave data literally on any surface or, even, the air around you.
  6. A virtualized computer. Think of what you would do if you had a virtualized Macintosh on every inch. Go even further. What if you had a virtualized datacenter on every inch? You could do some sick simulations and distortions of the real world.
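Taken together, each cube is basically one addressable record with capabilities attached. Here is a minimal Python sketch of that idea; the field names and the IP-style identifier are my own invention for illustration, not anything Apple has described:

```python
from dataclasses import dataclass, field

@dataclass
class Voxel:
    """One 'digital sugar cube': a small cube of real-world space
    with virtual capabilities attached."""
    voxel_id: str                  # unique identifier (maybe an IP-style address)
    position: tuple                # (latitude, longitude, elevation) of the cube
    speakers: list = field(default_factory=list)    # virtualized speakers
    microphones: list = field(default_factory=list)  # virtualized microphones
    display: object = None         # virtualized display content, if any
    data: dict = field(default_factory=dict)         # virtualized database

# A developer "leaves data on a surface" by writing into a cube's store:
curb = Voxel(voxel_id="10.0.0.1", position=(37.26, -122.02, 120.0))
curb.data["note"] = "an Apple mapping car scanned this spot"
```

The “virtualized computer” part would be code attached to the cube rather than data, but the addressing idea is the same.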

Soon your entire house will be scanned in 3D and Apple will, due to new advances in Computer Vision, catalog everything it sees. That might sound scary, and it is, even to me, but it does bring amazing new capabilities which I’ll go into later. I have a Twitter list of people, and companies, doing this new Computer Vision, and what is coming to market now is absolutely stunning. A camera on your phone already can recognize when it is looking at a Coke can. Just get Amazon’s latest iPhone app. On top there is a new scanning feature that already does this.

To see what 3D scans look like, follow Alban Denoyel. He’s the founder of Sketchfab, a service that hosts millions of 3D scans. There’s a whole community of people who are scanning their lives and uploading new scans every day.

This new 3D map is the basis for an entirely new way of computing on top of the real world. Apple isn’t alone in building such a map, either. Amazon, Tesla, Google, Facebook, and others, including most autonomous vehicle companies, are building the same for various purposes. Just this week Niantic, the largest AR company, which made Pokémon Go, announced such a map/platform. It brags that its new platform is an “operating system for the world.” Now think about how much more data Apple already has about the real world thanks to the mapping cars driving around, along with many other data sources. Soon, as we walk around the world with headsets or glasses on, we will add a LOT of data to this map, which will be updated in real time.


A new Siri and new search

Four years ago I had dinner with the guy who then was running Siri at Apple. I asked him, “What are you learning by being at Apple?” He said, “I learned Google is learning faster than we are.”

“How do you know that?”

“We instrumented Google and discovered that its AI systems are learning faster than ours are.”

You know this to be true now. Why? Google Home and Assistant are a LOT better at answering questions than Siri is. So, for the last four years Apple has been buying AI company after AI company and is building a new Siri and a new search engine.

From what I hear this new Siri will outperform Google in at least one hugely important area: It will know what you are looking at, and what you are holding or touching. Imagine looking at my Coke can (I actually drink mostly Hint Water, but CocaCola is a brand every human understands and probably drinks once in a while at least). Using the new Siri you will be able to ask “how much are 20 of these on Amazon?” For the first time a computer will be able to answer. Today Siri (and Google) have no freaking idea what you are talking about when you ask “of these.” (Amazon, like I said, can already do this via a camera, but very few people use that and understand how powerful it is — when you get glasses on your face the affordances change and you’ll see just what I’m talking about).

The rebuilt Siri will be far more flexible and able to hook up to a lot more things. For instance, the old Siri hears me just fine when I ask it “how many people are checked in on Foursquare at the New York City Ritz?” Foursquare actually has an API and an answer to that question, but Siri isn’t hooked up to it because its AI is an older, inflexible design that needs a ton of hand coding.

The new Siri will be able to learn about such APIs on the Web much more easily, and will write the AI as users search.
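As a toy illustration of what “hooking up to APIs” could mean, here’s a sketch where an assistant matches a spoken question against a registry of learned API patterns. The registry, the pattern, and the `checkins` API name are all hypothetical; the real Siri plumbing is obviously far more sophisticated:

```python
import re

# Hypothetical registry mapping question patterns to learned web APIs.
API_REGISTRY = {
    r"how many people are checked in .* at (?P<venue>.+?)\??$": "checkins",
}

def route(question: str):
    """Return (api_name, venue) for a recognized question, else (None, None)."""
    for pattern, api in API_REGISTRY.items():
        m = re.match(pattern, question.lower())
        if m:
            return api, m.group("venue")
    return None, None

api, venue = route("How many people are checked in on Foursquare at the New York City Ritz?")
# api is "checkins"; venue is "the new york city ritz"
```

The interesting part is the last step the text describes: instead of a hand-built registry, the new Siri would build these mappings itself as users search.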


A new mesh network

If you buy a new iPhone, it has a new “U1” chip inside for a new kind of network: UltraWideBand (UWB).

What does this new wireless chip bring us?

  1. It connects automatically, which is why your AirTags can be found by other people’s iPhones.
  2. It brings between seven and 40 megabits per second of bandwidth, more than Bluetooth.
  3. Location awareness. Each antenna broadcasting UWB encodes into the radio signal where it is in 3D space, which is why you can use your iPhone to find AirTags inside your couch.
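That third item, location awareness, comes down to classic trilateration: given range measurements to a few anchors at known positions, a device can solve for where it is. A minimal 2D sketch, assuming ideal noise-free ranges (real UWB ranging is noisier and works in 3D):

```python
def trilaterate(anchors, distances):
    """Solve for (x, y) from three anchor positions and measured ranges.
    Subtracting the circle equations pairwise gives two linear equations A·p = b."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    A = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2,
         r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x, y

# Device at (1, 2) with anchors in three corners of a room:
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0)]
dists = [(1**2 + 2**2) ** 0.5, ((5 - 1)**2 + 2**2) ** 0.5, (1**2 + (4 - 2)**2) ** 0.5]
x, y = trilaterate(anchors, dists)  # recovers (1.0, 2.0)
```

The more anchors (AirTags, HomePods, iPhones) in the room, the more redundant measurements there are to average away noise, which is the “fingerprint” effect described below.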

Really, UWB is a rethought Bluetooth. I already have half a dozen devices in my home with these chips, and many more are arriving soon as I get a few more AirTags for various things around the house.

The more of these devices you have around you, the better the location lock can work. What’s that? Well, let’s say you have a new kind of volumetric game, or a screen on the table in front of you. The UWB network builds a “fingerprint” of your home, so that new devices that arrive know EXACTLY where they are, and even where they are aiming. This is something no other company can do yet. Samsung has started putting UWB chips into its devices, but it doesn’t have an M1 chip sitting next to them, so Samsung is way behind. And, really, who will buy an entire house full of stuff from Samsung? Not nearly as many people as will buy Apple.

Next year, when the VR/AR headset comes out, I will be buying one for each member of my family. As we wear these devices in our living room, playing new kinds of games or watching a new kind of TV together, we will also be able to talk to each other (and, because of the high bandwidth, even send full 3D meshes back and forth) in the real world due to this new network.


The TV killer “HoloDeck” and “volumetric football” arrives

Steve Jobs said “I think we figured out a way to do it, and it’s going to be fantastic.” He was talking about the product that is coming in 2022. That’s how long Apple has been working on this.

I got a sneak look at stuff being built for this product. It will bring a stunning set of capabilities that will blow away any physical TV you have ever seen. I hear it brings a lot of the capabilities of Star Trek’s Holodeck. Some familiar with Apple’s plans call it a helmet. Others call it a visor/headphone. I’m just gonna call it the Apple Holodeck until some better name comes along. One important point: Apple’s Holodeck covers your eyes and your ears. If the device is off, you will not see or hear the real world. Turn it on and you’ll see a representation of your living room and hear everyone around you. The Apple AirPods Max over-ear headphones already show us how the audio part of this will work: I wore them for hours at Christmas dinner and could hear everyone as if I didn’t have headphones on. Switch from transparency to noise cancelling and it turns everyone off. Great if there are kids playing with their friends on Discord, like happens in my house every day.

In the back of every Apple store is a million-dollar 8K TV about 30 feet across. The Holodeck coming next year will show 2D virtualized screens that are way better than that million-dollar TV. Why? In front of your eyes will be two 8K Sony chips, I hear. That will let Apple virtualize TVs way, way bigger than the one in the back of the store. It will also let you have as many virtualized monitors/TVs as you want. So, if you want to build a Las Vegas sports book, or a room with dozens of TVs, you will be able to. Imagine being able to watch ALL the football games on Sunday at once.

But 2D virtualized screens are just table stakes. By the way, they will be WAY BETTER than what Facebook or anyone else can do because of the UWB network: they will be better locked to the real world, and the devices will know where they are without leaning on the camera or other battery-hungry technology. Other companies can’t match what Apple is about to do.

“And what is that Scoble?”

Apple is about to put a TV on every inch of the world. “Huh? Why would I want to do that?”

Well, you could put a video screen on your ceiling, floor, tables, or, any surface, including wrapping the video around things.

And then 3D stuff can “pop out of” the 2D virtualized video screens. So, imagine a Super Bowl halftime show where a performer can “jump out of” your TV and onto the floor in front of you.

If you haven’t seen a Microsoft HoloLens or a Magic Leap you probably have no clue how amazing this will be. On my HoloLens I play a game where aliens blow holes in my real walls and then crawl through the holes. It’s stunning, even on the shitty optics the HoloLens has. When Apple does the same, but on a set of chips that are going to blow you away, it will completely change what you expect from the entertainment and education companies.

One example: I saw a volumetric football service being prepared for the device next year. Around you, you will be able to watch 2D screens like never before; your home will feel like the stadium. On the ground or table in front of you, in front of those 2D screens, will be a new volumetric surface. I saw this and it’s stunning. Imagine being able to walk around the stadium, or even onto the field while the game is going on, so you can study what the quarterback is seeing from his perspective and even try to make the same throw he is.

By the end of next year I no longer will be watching much TV on small, flat pieces of glass. The TV killer will have arrived, and with it, a new kind of VR and AR. Some inside Apple call it Synthetic Reality. What it really is is a way to both view the 3D world and create content for others to enjoy in new 3D metaverses.

I hear the Holodeck will be announced sometime between now and the end of the year. I won’t be shocked if they tease it at WWDC to get developers excited by what is coming and so that everyone knows to watch the keynote later in the year.


Every inch of your life will have sound, including inside your new car

When I sold stereo equipment in a Silicon Valley store in the 1980s, systems had only two channels. Later that decade, surround sound arrived and put a couple more channels around you.

Now imagine a trillion-channel surround sound system. That’s what’s about to arrive. Sound will be on buttons, on things around your home, and far far more eventually.

Backing up: when you had only two channels to play with, musicians couldn’t really recreate the experience of seeing them play live. I’ve had bands in my home. What’s different about that is that you can literally walk through the band, hearing what it sounds like to be between the tuba and the drummer. That’s impossible with two channels, and even the bleeding edge of Spatial Audio today isn’t good enough.

Tomorrow, thanks to the LIDAR and cameras on the front of the Apple Holodeck, the computer will know everything about the space around you. It can then put a virtualized band on the floor of your living room, and you’ll be able to do what I was able to do: walk around the band. I’ve heard this demoed and it’s amazing, even for old music. I’ve been talking with a bunch of music companies; they are planning to re-release a lot of their masters in the new spatial audio format, and next year they will go all the way, locking that audio to the real world. Starting in the 1980s, the industry has masters where every instrument is recorded on a separate track. Older masters will be improved via AI too, and will be “modernized” for the new Holodeck.
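The “walk around the band” effect can be approximated with two numbers per instrument: a gain that falls off with distance, and a left/right pan derived from the angle between the source and the direction you’re facing. This is only a sketch of the geometry, not Apple’s actual renderer (which would use head tracking and HRTFs):

```python
import math

def render_source(listener_pos, listener_facing_deg, source_pos):
    """Return (gain, pan) for one sound source.
    pan: +1.0 = fully left, -1.0 = fully right."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    distance = max(math.hypot(dx, dy), 0.1)  # clamp so gain stays finite up close
    gain = 1.0 / distance                    # simple inverse-distance loudness
    angle = math.degrees(math.atan2(dy, dx)) - listener_facing_deg
    pan = math.sin(math.radians(angle))
    return gain, pan

# Face the drummer (along +y); the tuba one meter to your right is loud and hard-right:
gain, pan = render_source((0.0, 0.0), 90.0, (1.0, 0.0))
```

Walk a few steps and both numbers change, which is the whole trick: the band stays put while you move.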

Also, if you listen to surround sound movies on headphones today, the sound moves with you as you turn your head left or right; it isn’t locked to the TV screen the way it is on a system like my Sonos multi-channel surround setup. When the Holodeck arrives that will no longer be true, so watching movies or TV in headphones will be stunning compared to today.

Also, the world will soon be able to apply sound to everything. Car companies tell me they are readying the same in their cars coming next year: buttons, mirrors, and the steering wheel will all be able to make sound. So your buttons can talk to you. Or, if you’re in a Tesla, have fart noises applied. My kids will love that.


A new 3D CarPlay arrives

I’ve had at least one automaker say that all of their cars starting this year will have new 3D sensors watching the dash and probably the driver and passengers, to enable new 3D audio experiences in cars. Every button, or, even different parts of the road, can “talk to you.” These new experiences will be supercharged as Apple brings out its new autonomous car service that’s being built. Imagine playing new kinds of augmented reality games while your car drives. The car project, Titan, is rumored to be coming sometime between 2023 and 2027. That said, new kinds of spatial audio and 3D programmatic sound are coming to cars this year.


A new OS arrives for on-face wearables

RealityOS, Apple has called it so far. I haven’t seen it, so this is where I’ll be paying the most attention during WWDC. But I hear it is years ahead of what Facebook is building in its Oculus Quest standalone VR headsets. We’ll see just how capable it is, and how easy it is to use. I expect it will be amazing at letting you see both the real world and the new virtual layer on top of it, and that it’ll be so easy to use that someone who can’t read will be able to use it, which will bring new people into the computing world (about 800 million people on earth can’t read).


A new, portable gaming device coming

The tech press has been reporting on rumors of a new gaming device, like a Nintendo Switch. From what I hear the device is far different from a Switch because it’s designed to integrate into the new 3D world arriving next year in our living rooms. What could this do? Well, if I am wearing a Holodeck it will be a controller. If someone is playing with me they could use their device to interact with me in the 3D world and play their own games. Of all the devices, this is the one I know the least about, so it will be interesting to see what gets announced. I hear it will be announced sometime by the end of the year, but, as always with rumored dates, even ones Apple employees give you, you can never be sure until they get on stage and announce things. Dates slip.


Next-level noise cancelling is arriving now

In the new iMac, Apple is using the sensors to track your mouth and focus the attention of several microphones on it, which brings the device much better noise cancelling as you do Zoom calls, for instance.

When I worked at Microsoft 15 years ago, a researcher showed me the first array microphone that Microsoft Research had built.

What are array microphones? Well, the new Apple AirPods Max over-the-ear headphones have nine microphones. That’s what an array is: a group of microphones that a computer can “focus” on things.

The magic here is that if the computer knows where in 3D space a sound is coming from, it can focus on it, or even turn it off. Array microphones plus new AI-based focusing technologies will bring noise cancelling features that will be hard for others to match (you need an AI chip, like the one included in the M1 processor, to do this well).
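The focusing trick described above is classically called delay-and-sum beamforming: if the computer knows where the sound is, it knows each microphone’s arrival delay, so it can shift the signals back into alignment and average them; the voice adds up coherently while the noise averages down. A toy version with integer-sample delays:

```python
import math
import random

def delay_and_sum(mic_signals, delays):
    """Shift each mic's signal back by its known integer-sample delay, then average."""
    n = len(mic_signals[0])
    out = [0.0] * n
    for sig, d in zip(mic_signals, delays):
        for i in range(n - d):
            out[i] += sig[i + d]
    return [v / len(mic_signals) for v in out]

# Simulate one clean "voice" arriving at four mics with different delays plus noise.
random.seed(0)
n = 64
voice = [math.sin(2 * math.pi * i / 16) for i in range(n)]
delays = [0, 1, 2, 3]
mics = [[0.0] * d + [v + random.uniform(-0.5, 0.5) for v in voice[: n - d]]
        for d in delays]
beamformed = delay_and_sum(mics, delays)  # much closer to `voice` than any single mic
```

A real system uses fractional delays, many more mics, and AI to estimate where the sound is; the averaging principle is the same.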


A new 3D Apple and paradigm shift arrives

Put it all together and you can see Apple is about to unleash a paradigm-shifting strategy, one that will change all of our lives very deeply and bring us many exciting new things: new kinds of education and concerts that will replace half of your living room with a virtual classroom or a virtual concert hall. That’s why I call it a Holodeck. Wearing the Holodeck will let you visit my kitchen and play games with me. This has deep implications for the future of a number of companies. Spotify looks threatened. So does Google.

I hear Apple is dropping lots of hints about all of this at WWDC. It needs to ship new emulators and tools to developers to enable them to build new experiences for this new 3D world coming soon. Then, following WWDC we will see a number of announcements about new products that will lead into shipping these products in 2022.

Exciting times are coming for the technology industry. And, yes, I still think I’ll be buying a new device from Snap, aimed more at photography and augmented reality than the Holodeck will be, and I’ll be buying new glasses from Facebook too, since it is also spending more than $10 billion to develop its own.

If you know any of this is wrong, or if I’ve missed something you know about, I’ll be doing an audio Twitter Space today to talk about this (probably starting around 2 p.m.) at and you can email me at or you can send me a message on Facebook, Twitter, or LinkedIn (or Signal, Telegram).

As these announcements are made I’ll look back at this post and see how accurate I was. We’ll know a lot more by the end of the year for sure.

Tesla’s data advantage. Can Apple, or others, keep up?

This is a long post. Short version: Tesla is way ahead on data collection and is pulling further ahead every day.

Do you ever think about what the cameras on a Tesla are doing? Cathie Wood does. She runs ARK Invest and has made billions by investing in disruptive companies, particularly those that use artificial intelligence. Her favorite company, she reiterated again last week on CNBC, is Tesla.

I do too, and I go even further than she does in studying this industry (I have been studying self-driving cars for 15+ years and have a Twitter list of people and companies building autonomous cars). I even count how many go by my front door: one every few minutes. No one else has a neural network of this type, and no other self-driving car has been seen on my street. Did you know a Tesla crosses the Golden Gate Bridge every 80 seconds or faster? Yes, that was me counting cars out there.

I spend hours out front every week cataloging what goes past (I have the new over-the-ear headphones from Apple and usually use them outside while walking around talking to dozens of people all over the world who are building the future; they absolutely rock, by the way).

The photo at the top of this post is the street I live on, right near Netflix’s headquarters; I shot it on our daily walk. It is part of a multi-billion-dollar war over the future of transportation. Most people have no idea how far ahead Tesla is, including a very smart software developer I talked with today (I won’t name him because that wouldn’t be nice).

Elon just wrote this over on Twitter:

He’s right. When I talk with people, even techies in Silicon Valley, most have no clue just how advanced Tesla is and how fast it’s moving. Wall Street analysts are even worse. I’ve listened to EVERY analyst and NONE except Wood talks about the data the cars are collecting. None have figured out that Tesla is building its own maps, or, if they have, they haven’t explained what that means. (I met part of the Tesla programming team building these features and they admitted to me that they are building their own maps; more on that later for the nerds who want to get into why this matters.)

She says that the data leads Tesla to Robotaxis, which will be highly profitable. She’s right, but that’s only one of the businesses Tesla can build off of this new real-time data. Others include augmented reality worlds, GIS data to sell to businesses and cities, and new utilities that will run far ahead of Apple’s and Google’s abilities. More on that in a bit. These are all multi-billion-dollar businesses, which is why tens of billions of dollars are being invested in autonomous technologies: at GM, whose Cruise division is already worth about 1/3 of GM’s total market value, and at Apple, which has leaked that it’s entering the space in 2024 with an effort that will cost many billions too.

First, some basics.

Go and watch some of the Tesla FSD videos on YouTube.

You will see just how it works. A Tesla has eight cameras (most of them on the outside of the car). It also has a radar in the front bumper and several ultrasonic sensors around the car that can see closer things (like a dog running next to the car). These are all fused together into a frame by software, and then 19 different AI systems go to work figuring out where the drivable road surface is, where signs are, where pedestrians and bicyclists are, and much, much more.
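The fuse-then-detect flow described above can be sketched in a few lines. This is a minimal illustration, not Tesla's actual architecture: every class, function, and field name here is my own invention, and the "detectors" are stubs standing in for the 19 networks mentioned.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical fused frame: raw readings from all sensors merged into one
# time-stamped object that every downstream network consumes.
@dataclass
class FusedFrame:
    timestamp_ms: int
    camera_images: list          # eight camera feeds
    radar_returns: list          # forward-facing radar
    ultrasonic_ranges: list      # short-range distance readings, in meters

def detect_drivable_surface(frame):
    # Stub: a real network would segment road surface from the images.
    return {"task": "drivable_surface", "regions": len(frame.camera_images)}

def detect_pedestrians(frame):
    # Stub: a real network would return bounding boxes, not a count of 0.
    return {"task": "pedestrians", "count": 0}

DETECTORS: list[Callable] = [detect_drivable_surface, detect_pedestrians]

def run_pipeline(frame: FusedFrame) -> list[dict]:
    # Each specialized "AI system" runs over the same fused frame.
    return [detector(frame) for detector in DETECTORS]

frame = FusedFrame(timestamp_ms=0, camera_images=[object()] * 8,
                   radar_returns=[], ultrasonic_ranges=[0.8] * 12)
results = run_pipeline(frame)
print(results)
```

The point of the sketch is the shape of the system: one shared fused frame, many task-specific networks reading from it.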

My car shows that it can “see” about 100 yards around the car, which you can see in the FSD (FSD stands for “Full Self Driving”) videos on YouTube.

When I say “see” I mean it shows me on my screen where stop signs, lights, pedestrians, other vehicles, and even curbs and other features of the road are.

If you don’t track the bleeding edge of computer vision (I have a separate Twitter list of developers doing computer vision, which is how robots, autonomous cars, and augmented reality glasses will “see” the world and figure it out), you might not realize just how good it has gotten. The folks over at Chooch AI, a new startup out of Berkeley’s computer science lab, have shown me just how good computer vision is and how much cheaper it is getting, literally every month. Their system can be trained to do a variety of things with cameras, even checking whether you are washing your hands properly (important if you are a restaurant worker or a surgeon).

Their system already recognizes 200,000 objects. On an iPhone.

My Tesla doesn’t show that it recognizes that many things, but what it does is amazing. For instance, if I drive by a traffic cone at 90 m.p.h. it shows the cone. If there is a string of cones, my car automatically changes lanes to get away from the construction area and make it safer for everyone.

As I drive down the street it shows parked cars and even garbage cans. But it isn’t showing me the “real” can. It’s showing me what the AI has in its system. This is very important. A lot of people don’t understand just how much data just one garbage can generates: what kind of can it is, how big it is, how it is positioned in 3D space on the road bed. And it does this even on garbage days, when there are hundreds of cans out on the street I live on.
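To make the "one garbage can generates a lot of data" point concrete, here is a hypothetical record for a single detected object. The field names are my guesses at the kind of metadata such a system would keep, not Tesla's actual schema.

```python
from dataclasses import dataclass

# Illustrative per-object detection record. Every field name here is an
# assumption about what a perception system might store.
@dataclass
class DetectedObject:
    label: str            # e.g. "garbage_can"
    subtype: str          # what kind of can it is
    height_m: float       # estimated size
    position_xyz: tuple   # 3D position relative to the car, in meters
    on_roadbed: bool      # whether it intrudes on the driving surface

can = DetectedObject(label="garbage_can", subtype="wheeled_bin",
                     height_m=1.1, position_xyz=(2.4, -1.8, 0.0),
                     on_roadbed=True)
print(can.label, can.position_xyz)
```

Multiply a record like this by hundreds of cans on garbage day, every day, across a fleet of cars, and the scale of the data becomes obvious.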

Why would this data be valuable? Well, a garbage company might want to buy an autonomous garbage truck. It could use these systems both to drive the truck and to have a robot figure out where each can is and pick it up (already our garbage trucks have only one person controlling such a robot). It will soon go way further than that.

Why “HD” Maps are So Important

Tesla’s current autopilot/self driving systems have some major flaws (many of these are fixed in the FSD beta, but most owners don’t have that yet):

  1. They can’t see debris in the road.
  2. They can’t see potholes that have just formed.
  3. They can’t join together to make energy usage more efficient.
  4. They can’t see around corners very well.

Now, there are two approaches. One is to put a LOT more cameras and sensors on the car and a LOT more silicon in each car to properly identify things. I hear that will be Apple’s approach, which is why it’s currently looking at LIDAR sensors. While touring the Tesla factory, of all places, I met the LG camera team (they make the cameras in your iPhone), and they said their cameras will soon be a LOT higher resolution than the ones in my three-year-old Tesla. Apple believes it will be able to put a lot more neural network capability into each car since it now designs its own chips.

Disclaimer, I own both Tesla and Apple stock, my number one and two positions.

So, could Apple “beat” Tesla? That is the $64 billion question. I don’t believe so. The photo above shows why.

In Silicon Valley there are already so many Teslas that scenes like this one, on HWY 17 between Los Gatos and Santa Cruz, are quite common. Tesla, I hear, will soon start transmitting data from cars in front of you to your car (and to everyone else too).

Why is this important? Well, one day I was driving in the fast lane of Freeway 85 using my car’s Autopilot/FSD features (my car automatically changes lanes, stops, and basically drives itself already, particularly well on freeways, with the above limitations). Cars in front of me started swerving and braking. It turned out a bucket had fallen out of a truck and was rolling around in lane #1 (the freeway had three lanes where I was).

I grabbed the steering wheel and took control, also swerving around the bucket. My Tesla hadn’t seen the bucket, although it was already assisting me in driving, doing very advanced braking. Audi taught me just how good anti-lock and traction control systems are by teaching me to drive on ice. They turned those systems off and I instantly spun the car. They turned them back on, and the systems independently braked each wheel to keep my car from losing traction. Elon Musk demonstrated something similar to me with electric motors and traction, too (which is why Teslas are REMARKABLE on ice and snow).

Afterward I talked with the programming team. My car, they said, automatically captures TONS of data during such an event (we call that an intervention, because a human had to intervene in autonomous driving and take over). The programmers have a simulator where they can load all that data and actually “walk around” what happened. Chooch shows that training is now so good that an engineer could just circle the bucket and “teach” the AI systems what it is.

That’s how they taught it to see stop signs and garbage cans, for instance.

So, now a future version of a Tesla will be able to see the bucket and let everyone else know that there’s an object in the road. “But Waze already does that,” you might say. No it doesn’t. Waze requires a human to tap the screen and say there’s an object on the road. But it doesn’t know whether it’s a bucket, a box, a bedspring, or a beam. And it doesn’t show what lane that thing is in. It certainly doesn’t predict, or track, how said object is moving.

I hear that by the end of the year Tesla will turn on such features. Now, what will that look like on the road? All Teslas will start switching lanes to lane #3. You won’t even know why, even if you are in one, until you pass by the bucket in lane #1.
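The difference between a Waze-style pin and the fleet-shared hazard described above comes down to what the report carries. Here is an illustrative sketch: the record fields and the lane-picking policy are entirely my own assumptions, not anything Tesla has published.

```python
from dataclasses import dataclass

# Hypothetical fleet-shared hazard report, richer than a tapped-in
# "object on road" pin: it carries object type, lane, and tracked motion.
@dataclass
class HazardReport:
    object_label: str        # "bucket", "box", "bedspring", ...
    lane: int                # which lane the object occupies
    position_gps: tuple      # (latitude, longitude)
    velocity_mps: tuple      # tracked motion, so cars can predict drift
    reported_at_ms: int

def avoid_lane(current_lane: int, hazard: HazardReport, n_lanes: int) -> int:
    """Pick a lane away from the hazard (trivial illustrative policy)."""
    if current_lane != hazard.lane:
        return current_lane          # already clear of the hazard
    # Move to the far lane, or one lane inward if already at the edge.
    return n_lanes if hazard.lane < n_lanes else hazard.lane - 1

bucket = HazardReport("bucket", lane=1, position_gps=(37.3, -122.0),
                      velocity_mps=(0.2, 0.0), reported_at_ms=0)
print(avoid_lane(1, bucket, n_lanes=3))  # 3 — cars in lane #1 shift over
```

A human-tapped report can never carry the lane number or the object's motion; a perception system generates both for free.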

Can Apple match this? Not until it gets a decent number of cars on the road. Since Apple is aiming at 2024, I think it will be way behind and will find it difficult to catch up.

But it might get worse for Apple, and it certainly will get worse for car companies that aren’t building these capabilities. Why? Maps and Robotaxis.

The next trillion-dollar company

Let’s say Tesla, or, really, anyone, came out with a map that was updated in real time on your phone. Would you continue using Google or Apple Maps that aren’t? Of course not. Here’s why.

One day I was driving my kids on a long trip to pick up some fruit in the Central Valley (I know a strawberry farmer who grows far better strawberries than our local grocery stores sell). We witnessed a truck burning. Our cameras could have seen that, captured it, shown it to everyone on the map, and kept it updated as firetrucks arrived and the mess was cleaned up. On the way back we were caught in traffic at that spot, which was still being cleaned up. The current maps show traffic, but they don’t show WHY there is traffic.

In the future you’ll see that truck burning on the freeway and maybe you’ll decide to take a different route, or delay your trip, saving you much time. Can Google match this? Not really. Google has no cameras passing by the fire. No neural network deciding whether that’s important to upload for everyone else. And even if a human snapped a photo, it is already out of date and doesn’t show the current status. A Tesla rolls by every few minutes.

Look at the front of my home on Google or Apple. The photos there are 20 and 23 months old! On Tesla’s map, if my home was burning down, you’d see that fire within seconds of it starting and you’d be able to watch in near real time as roads get closed and emergency gear gets rolled by (a fire station is on our street, so I get to watch that often too).

So, soon, Tesla will have far better 3D maps, that are updated in real time. Google and Apple can’t match it. Why? No data. Tesla has all the data.

Now, Tesla COULD license this data to Apple, so that Apple could stay relevant. Since Apple has $250 billion in cash, that’s quite a possibility. Which is why I own both Apple and Tesla. Apple is building such a 3D map, from scratch, and is doing a ton of autonomous car work. Remember, Apple could buy GM out of petty cash, so I can never count it out (and Apple has an amazing new VR/AR headset coming in 2022, more on that in a different post) and its AR devices will use this new 3D map it is developing.

Tesla, soon, because of the real-time map it is developing, will be able to solve all four of the problems I named above and add major new capabilities. I hear it already is testing caravanning, for instance. This is the ability for an autonomous car in front to control the brakes and acceleration of everyone behind it, which will let Tesla build a string of cars going on long trips. Doing that will save about 20% of power for everyone in the chain. It’s why NASCAR drivers learn to “draft” behind other cars: doing so lets them go faster on less fuel.
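A back-of-the-envelope check shows how a caravan savings number in that ballpark could arise. Every figure below is an illustrative assumption (typical EV consumption, the share of energy lost to aerodynamic drag, and how much drafting cuts drag per position), not a measured Tesla value.

```python
# Rough sanity check on platooning savings. All constants are assumptions.
BASE_KWH_PER_100KM = 15.0   # assumed typical EV highway consumption
AERO_SHARE = 0.5            # assumed share of energy spent on drag at speed
# Assumed drag reduction by platoon position: lead car, second, third+.
DRAG_REDUCTION = {0: 0.0, 1: 0.35, 2: 0.45}

def platoon_consumption(n_cars: int) -> float:
    """Average kWh/100km across an n-car platoon under the assumptions above."""
    total = 0.0
    for position in range(n_cars):
        cut = DRAG_REDUCTION[min(position, 2)]
        total += BASE_KWH_PER_100KM * (1 - AERO_SHARE * cut)
    return total / n_cars

solo = platoon_consumption(1)                  # 15.0, no drafting benefit
five = platoon_consumption(5)                  # 12.45 with these numbers
print(round(100 * (1 - five / solo)))          # ~17% average fleet savings
```

With these assumed numbers a five-car chain averages roughly 17% less energy per car, in the neighborhood of the 20% figure quoted above; the exact value depends entirely on the drag assumptions.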

All of this work will lead to robotaxis.

Think of Uber. That company was invented right in front of me during a Paris snowstorm. The impulse there was to make transportation easier. When I visited a slum in South Africa I heard just how big a deal this was to people. One woman told me Uber changed her life since before then taxis wouldn’t visit the slum, so she had a tough time getting around.

In the research for our latest book, “The Infinite Retina,” I learned that Uber is closer to how we will own cars in the future than other models. Why? When autonomous cars arrive, having a car sitting in your garage doing nothing will be very stupid. Something only the rich will do.

Now the economics. An Uber costs about $60 per hour. Go ahead, order an Uber and tell the driver “stay here for an hour and charge me.” Keep in mind that Uber is still losing money at that rate, too.

When autonomous cars come? That cost will go down to less than $10 an hour. This is why Uber invested billions in autonomous vehicle research. For Uber, though, the problems are even worse. It doesn’t make cars. It doesn’t own cars. It can’t force its drivers to buy a new one to get new features. It can’t force its drivers to buy a specific brand.

Let’s talk about the role consistency plays in building a brand. For instance, let’s talk about Starbucks Coffee. Is it great coffee? No. I used to live in Seattle, and everyone there knows Starbucks’ coffee sucks compared to high-end coffee shops. So why is Starbucks so loved? (Disclaimer: I own stock in Starbucks too.) Because it is ubiquitous AND consistent. Uber is ubiquitous (I took one in Moscow, Russia) but it isn’t consistent, for two reasons:

  • The driver in the car. Sometimes they smoke. I even had one who smelled of alcohol.
  • The car itself. Sometimes it’s a brand new Mercedes. Sometimes it’s a beat up Toyota.

Tesla’s robotaxis will fix both problems. Tesla’s economics are such that it could rent you a Model 3 for about $10 an hour. Even a top-of-the-line Roadster could rent for $30 an hour. A $200,000 car cheaper than an Uber? Yes! And at these prices Tesla will be HUGELY profitable compared to Uber. The wholesale cost of a Model 3 actually works out to only about $3 an hour. Uber and Lyft can’t compete.

Remember back to that first ride Elon Musk gave me? What was his sales pitch for Tesla? Fun? Yes! But he really emphasized how electric motors can be made cheaper than gas ones. He didn’t talk about autonomous vehicles back then, or saving the earth. He saw that he could make a car cheaper than a Toyota and, therefore, disrupt the entire industry. Now that more than a million of them are on the road, we can see he was right. The lifetime cost of owning a Tesla is lower than owning a Toyota.

And it’s about to get far worse for Toyota.

Why? Well, if you buy a $45,000 Toyota it’ll cost you about $1,000 a month, if you include all the costs.

That’s a lot of hours of driving if a Tesla is $10 an hour. Most people don’t drive that much every month. Even me: I currently put about 30 hours a month into mine (in three years I put 53,000 miles on my Tesla, and I take a ton of long trips; most of my friends put far less on their cars than I do). So an autonomous Tesla would cost about $300 a month for me. Far less than owning a car and having it sit in my garage, like both our Tesla and Toyotas are right now.
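The ownership-versus-robotaxi comparison above is simple arithmetic, so here it is spelled out. All dollar figures are the article's rough estimates, not real prices.

```python
# The article's own estimates, as plain arithmetic.
OWNERSHIP_PER_MONTH = 1000   # ~$45,000 Toyota, all-in monthly cost of ownership
ROBOTAXI_PER_HOUR = 10       # projected autonomous Tesla hourly rate
UBER_PER_HOUR = 60           # today's human-driven hourly rate

hours_per_month = 30         # the author's own monthly driving time

robotaxi_monthly = ROBOTAXI_PER_HOUR * hours_per_month
breakeven_hours = OWNERSHIP_PER_MONTH / ROBOTAXI_PER_HOUR

print(robotaxi_monthly)      # 300 — matches the ~$300/month figure in the text
print(breakeven_hours)       # 100.0 — hours/month at which owning breaks even
```

At these numbers you would have to drive more than 100 hours a month, over three hours every day, before owning beats the robotaxi, and the robotaxi is still a sixth of today's Uber rate.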

One last thing. The auto industry asked me to do some research into customer acquisition costs. I went around the country asking “are you ready to get into a car without a steering wheel?”

“Hell no,” is the typical answer I got. One guy in Kansas told me “I’m a narcissistic control freak and there’s no way a computer is going to drive me around.”

Google’s head of R&D told me they have the data to prove that after three rides in one, even that guy changes his mind. So I asked a second question: “What if the car drove to you and then you had the choice of driving it or not?”

Almost everyone, including that guy in Kansas, said “yeah, I don’t have a problem with what it does when I’m not in the car.” So, Tesla will have very low customer acquisition cost, if any at all (everyone knows a Tesla is fun to drive).

Kraft Foods execs once told me they spend $34 to acquire a young customer for its cheese (in advertising and other techniques). So I imagine that Waymo (Google’s robotaxi) will have to spend a lot more than that, I figure more than $100 for a while, to get people to try its system, which has no steering wheel and is totally autonomous (it just started working without a driver in Phoenix, Arizona, and San Francisco).

Plus, Tesla has a huge advantage in brand too over Waymo.

Translation: Cathie Wood is right. Robotaxis will make a crapload of money for Tesla. Now, here’s the rub and why Tesla’s valuation is so high (if you thought it was just a car company, it’s extremely overvalued): a robotaxi system doesn’t need many cars on the road. Uber has something around a million drivers, worldwide. Tesla could build that many cars for only a few billion dollars. Plus, its owners have already funded the building of more than a million, and with the Cybertruck on the way, I expect it to sell many millions more.

So, Tesla has the brand, the distribution, the consistency, the low customer-acquisition cost, and other advantages (like a Supercharger network that let us drive ours across America) to make a ton of profit PER CAR. That is what I’m betting on, and it’s what Cathie Wood is betting on too.

Thanks to Brian Roemmele who has been talking to me about this for more than a year. He sees ahead better than anyone else I track right now.