The Contextual, Sensual CES 2013

It’s been a week now since the Consumer Electronics Show closed. I wanted that time to read all the reports and shake off any overhype I picked up from all those big screens I saw.

Really, the story was sensors. Whether video sensors on glasses, heart rate sensors on watches, or 3D sensors you can interact with, this CES was more about sensors than anything else.

For a taste of just how big a deal sensors were this year, check out this video of PrimeSense’s private suite.

Don’t know PrimeSense? It licensed its technology to Microsoft for the Kinect sensor. You know, the one that can see you dancing, or gesturing, or moving around. It even does pretty good face detection: it can tell when I’m playing instead of my sons.

But this year the technology took a Moore’s-law-style turn. It got a LOT smaller: it’s now the size of a stick of chewing gum instead of something longer than most of my books. It’s lower cost: it will run less than $100. And it’s much higher resolution: it’s now so accurate it can see how hard you are pressing against a desk.

Listen to PrimeSense founder Aviad Maizels talk about his vision for 3D sensing.

Speaking of 3D sensors, I did see the Leap Motion. I like what they are doing too, and we’ll do a video with them in the future. But their sensor is optimized for over-the-keyboard use, not room use, so the PrimeSense has me dreaming about a contextual future a lot more.

At CES I had dinner with execs from GM and Ford, and they are thinking about how to use these sensors in cars: both to personalize the car (with a sensor like this it could tell you are sitting in the driver’s seat) and to do things like sound wake-up alarms if you are falling asleep while driving. Also, hand gestures will be more efficient than voice systems in many ways, particularly for moving around user interfaces. Listen in to John Ellis, head of the Ford Developer Program, talk about the contextual future of cars:

The other thing I saw was wearable computers. Listen in to these two visionaries who are building really interesting wearables. Recon Instruments builds the heads-up displays that Oakley is including in its AirWave ski goggles, and Pairasight has built glasses with two 1080p cameras. The Texas Instruments chipset Pairasight is using lets you stream about 1.5 hours of 1080p video on a single battery charge (and the battery is tiny, so this is a breakthrough). Pairasight’s glasses are in the prototype stage; Recon’s are shipping now.

That all led me to talk with Don Norman, whom I ran into at CES. Don’t know who he is? At Apple he was a fellow with the title User Experience Architect (the first time “User Experience” was used in a job title there), and he later became Vice President of Apple’s Advanced Technology Group. But that hardly explains Don; go read his Wikipedia entry.

So, what does it mean?

Well, consumer electronics are about to become anticipatory and personal.

Think about Google Now, which shows you all sorts of ways to live your life better (like the fact that I’d better leave for my meeting now because traffic is bad on the way into San Francisco). Our world will know you at a deep level. Don’t believe me? Look again at the PrimeSense video. In there is a demo by Shopperception, which lets retail stores see what you are buying in real time. Freaky, huh? But you know we’ll let stores do this. Why? We’ll get paid to. I see everyone in Safeway using their Safeway card, which already allows pretty deep tracking of buying behavior. Imagine a display near the cereals saying “Hi Robert Scoble, nice choice of Cheerios; if you want a second box it’s half off.”

The sensual, contextual age of consumer electronics is here, ready or not.

The contextual and exponential future of Facebook

Facebook book

Rocky Barbanica and I visited Facebook’s headquarters today and interviewed a bunch of people for the book “Age of Context” that I’m writing with Shel Israel.

What’s the age of context? Five radically expanding technologies/data types.

1. Sensors.
2. Wearable computing.
3. Big Databases.
4. Social network behavior.
5. Location.

What will that bring for consumers? Highly personalized experiences and products. For companies? An extraordinary level of contextual business intelligence: a clear vision of what your business will be and who your customer is.

I interview Facebook's Mike Shaver

I met with people like Mike Shaver, director of engineering, at Facebook. Listen in:

He talks about context, and how different contexts (say, when you’re driving, which is different from when you’re reading Facebook on your couch, which is different from when you’re skiing down a slope) could be used to bring different items to you.

He told me before we started recording that Facebook is, indeed, trying to pick the best items for you to see. It’s a difficult job because there are so many different kinds of users. My dad, for instance, might read Facebook only once a week and never click like on his items. Me? I read Facebook every few minutes and like thousands of things a week.

Facebook has to pick, out of millions of potential messages, only about 30 for each of us to see, each time we refresh the page (or, better yet, pull down on the mobile app).
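At its core, that selection step is a top-k ranking problem: score a huge pool of candidates, keep the best 30. Here is a minimal sketch in Python, assuming a purely hypothetical per-story relevance score (the names and scoring here are my own illustration, not Facebook’s actual ranking system):

```python
import heapq

def top_stories(candidate_stories, k=30):
    """Pick the k highest-scoring stories from a large candidate pool.

    Each story is a (score, story_id) tuple; heapq.nlargest keeps only
    the top k instead of sorting the entire pool.
    """
    return heapq.nlargest(k, candidate_stories)

# Toy example: 100,000 candidate stories with made-up scores.
candidates = [((i * 7919) % 100000, f"story-{i}") for i in range(100000)]
feed = top_stories(candidates, k=30)
print(len(feed))   # 30
print(feed[0][0])  # the highest score in the pool
```

The point of `heapq.nlargest` is that picking 30 winners out of millions doesn’t require sorting millions of items; the real difficulty, as Shaver says, is computing a good score per user in the first place.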

It’s clear Facebook is also in the midst of a huge shift: one from web pages that have no contextual data to mobile and wearable computers where there is a huge amount of contextual data. My desktop computer doesn’t let me use it in different contexts like driving, skiing, running, eating, or shopping at the local mall. My mobile phone does. Facebook is in the middle of being rebuilt for mobile users, and soon, wearable computer users and maybe automobile heads-up display users. Oakley, for instance, just started selling ski goggles that have heads-up displays in them (we did a separate interview with that team).

An office at Facebook

One thing I noticed, after conversations with about seven of Facebook’s execs, is that some seem to be ahead of the rest of the company in their thinking. Sam Lessin, who is director of product, talked to me about the exponential growth in identity information and the kinds of personalized, contextual experiences that will enable in the future. Imagine walking into a bar you’ve never been in before and hearing “Hey, Robert Scoble, welcome. Do you want your usual Oban whisky?” Or imagine skiing at Squaw Valley, where they know you are probably hungry, since every day you check into a lunch place by 1:30 p.m., and it’s now 1:45 and you haven’t eaten yet: “Hey, Mr. Scoble, are you hungry? Our sushi restaurant has a seat available after your next run.” Then imagine that I can invite a friend to join me, all via our wearable computers, and learn that that friend doesn’t like sushi: “Hey, you invited Mr. Smith to join you, but we know he doesn’t like sushi. Would you like to switch to our steak restaurant instead?” That is all very possible, and Lessin explains how it might work.
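A heuristic like that lunch suggestion could be surprisingly simple. Here is a toy sketch; every name and rule in it is my own invention for illustration, not anything Lessin described:

```python
from datetime import time

def should_suggest_lunch(usual_checkin_times, now, checked_in_today):
    """Suggest lunch if the current time is past the latest time the
    user has historically checked into a lunch spot, and they have not
    checked in anywhere for lunch yet today."""
    if checked_in_today or not usual_checkin_times:
        return False
    return now > max(usual_checkin_times)

# Past lunch check-ins, e.g. pulled from a location history.
history = [time(13, 10), time(13, 25), time(13, 30)]
print(should_suggest_lunch(history, time(13, 45), checked_in_today=False))  # True
```

The interesting part isn’t the rule itself; it’s that check-in history plus a clock is enough contextual data to make the suggestion feel personal.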

We also talked about what it was like to build for a billion users compared to building at a startup with a few thousand users.

Hack at Facebook

Finally, I met Mike Schroepfer, vice president of engineering. His teams, when they check in code, affect more than a billion people now. Think about the power to screw up that that gives an engineer. He told me that keeping up with exponentially growing contextual data about all of us is very difficult. They rebuilt their engineering teams to check in code twice a day and to ensure that Facebook doesn’t slow down as it gains more engineers. He showed me the engineering team, which mostly works in one big room so that system conditions can be discussed quickly.

This was an extraordinary way to get inside one of the world’s great companies and hear how they think. Hope you enjoyed it.

By the way, while I was there, they rolled out a new feature: Nearby (here’s TechCrunch’s writeup of the new feature). They gave me access while I was meeting there and I checked into one of the restaurants (free food, baby).

We’ll be back at Facebook on Wednesday to meet with the team that wrote that feature. It’s barely the start of where Facebook’s information discovery features are going. Hint: the future is contextual, which will let them build new kinds of search/discovery features.

Some things I learned while there:

1. Everything you do on Facebook will affect what comes into your view in the future. If you like crappy things you don’t care about, you’ll see more crappy brands you don’t care about in the future, and it might even affect your experiences when you walk into bars, churches, schools, shopping malls, etc. Using Highlight, for instance, I can see what kinds of things you like, and I’ll treat you a lot differently based on what you’ve liked.

2. Facebook is teachable. If you hide items, you’ll see fewer of those kinds of items in the future. Like more items and you’ll see more of those in the future.

3. Facebook is looking to help you distribute content to whomever you want. Facebook gets a lot better if you put each of your friends into either your “close friends” or “acquaintances” list. Put family members on your family list and you’ll be able to send photos just to your family members very easily. Spending some time tuning your friend lists dramatically increases the quality of your feeds and also lets you see items from your friends and family so you don’t miss them.

4. Facebook’s new gift feature will be able to build new kinds of stores in the future. If I buy a gift, like I did for Sam Lessin, who got engaged last night, Facebook can learn what kinds of things I like to buy for people. It also lets Sam switch his gift without letting me know, so Facebook learns more about the kinds of things Sam likes to receive.
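Points 1 and 2 above amount to a feedback loop: likes and hides adjust weights that shape future ranking. Here is a toy model of that loop, with all names invented by me (this is not Facebook’s actual code):

```python
from collections import defaultdict

class TeachableFeed:
    """Toy model of a feed that learns from likes and hides:
    liking a topic raises its weight, hiding lowers it."""

    def __init__(self):
        self.weights = defaultdict(float)

    def like(self, topic):
        self.weights[topic] += 1.0

    def hide(self, topic):
        self.weights[topic] -= 1.0

    def rank(self, stories):
        # stories: list of (story_id, topic); higher-weight topics first
        return sorted(stories, key=lambda s: self.weights[s[1]], reverse=True)

feed = TeachableFeed()
feed.like("tech")
feed.hide("celebrity-gossip")
ranked = feed.rank([("s1", "celebrity-gossip"), ("s2", "tech"), ("s3", "sports")])
print([sid for sid, _ in ranked])  # ['s2', 's3', 's1']
```

One like and one hide are enough to reorder the feed, which is exactly the “teachable” behavior described above: hidden topics sink, liked topics rise.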

Anyway, I hope you enjoy this look inside Facebook and also appreciate that you get to see the raw material for our book, which should be out in Q3 2013. Here are some other photos from the campus.

Facebook's headquarters

Fun at Facebook, not in Kansas anymore

Hack at Facebook

Ice cream at Facebook

Epic cafe at Facebook

Facebook clothes at company store

Facebook teddy bears

Facebook pens

Mark Zuckerberg on wall at Facebook headquarters

eBay talks about a contextual future for its mobile efforts; update on our book “Age of Context”

eBay sells billions of dollars of things on mobile. Cars. Boats. Jewelry. Clothes. Gadgets. And more. Here Steve Yankovich, VP of mobile and platform at eBay, tells me how eBay is competing with Amazon and what the trends on mobile are.

eBay’s new eBay Now app lets you have things delivered from stores in San Francisco, too. Will eBay expand this offering to other cities? I bet it will, and Steve gives his insights there.

Finally, Shel Israel and Robert Scoble are working on a book about contextual software, and Steve talks about how eBay will use contextual data to help purchasers in the future. Think of what eBay would do with Google’s wearable computers, coming next year, called “Project Glass.”

Try out eBay on your mobile.

+++++++++++++++

Speaking of context, you might know that I’m writing a book titled “Age of Context” with Shel Israel, an author at Forbes. We’ve been doing lots of interviews lately; some of them are below. We’ve visited Oakley, Qualcomm, JBL, and Autodesk. At LeWeb I interviewed the CEO of Gnip (they have one of the only firehose licenses from Twitter) and the CMO of Salesforce. On Monday we’re visiting Facebook, along with talking to a bunch of startups. Tons more work to come.

We have made some progress on getting the book funded but we need some more help. We have one corporate sponsor, but they don’t want to be the only one funding the book. So, we need someone who can help us sell companies on a sponsorship (we want to do this book right, which means tons of travel, lots of interviews, and then putting a great product together with editors, designers, etc).

One of the best books I’ve ever seen is The Human Face of Big Data. It had several corporate sponsors. Why corporate sponsors? Because selling books simply doesn’t pay for producing them anymore, at least not if you are only going to sell a few tens of thousands of copies. Hey, not everyone can be Tim Ferriss. 🙂

If you are interested, contact me at scobleizer@gmail.com.

That said, we have a pretty good idea of where the book is going now, thanks to these first big interviews. We are aiming to finish it by the end of summer 2013 and to have it available to purchase by Christmas of next year.

Here are some of the interviews we’ve done so far, with many more to come. We’d love your help, too! Let us know if you are seeing anything contextual happening. What’s that? It’s what you get when you mix:

1. Sensors.
2. Wearable computers.
3. Big data and post-SQL databases.
4. Social networks.
5. Location data.

all together to make highly personalized services or products.
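As a concrete illustration of what “mixing” those five ingredients could look like, here is a toy Python sketch; every field name and rule in it is invented for illustration, not drawn from any real product:

```python
def personalize(sensor, wearable, history, social_likes, location):
    """Toy mix of the five ingredients above into one set of suggestions.
    Each rule combines a different pair of contextual signals."""
    suggestions = []
    # Sensors + wearable computing
    if sensor.get("heart_rate", 0) > 120 and wearable.get("activity") == "skiing":
        suggestions.append("take a break")
    # Big database of past behavior + location
    if location in history.get("frequent_lunch_spots", []):
        suggestions.append("your usual order?")
    # Social network behavior
    if "sushi" in social_likes:
        suggestions.append("sushi nearby")
    return suggestions

result = personalize(
    sensor={"heart_rate": 130},
    wearable={"activity": "skiing"},
    history={"frequent_lunch_spots": ["Squaw Valley Lodge"]},
    social_likes={"sushi", "tech"},
    location="Squaw Valley Lodge",
)
print(result)  # ['take a break', 'your usual order?', 'sushi nearby']
```

The interesting property is that no single signal produces these suggestions; it’s the combination of the five data types that makes the output feel personal.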

Oakley’s AirWave ski goggles with heads-up display and the contextual future:

The Basis sensor for health monitoring:

Qualcomm’s augmented reality team:

Qualcomm’s healthcare team on context and mobile:

Qualcomm’s President of Internet Services on Qualcomm’s vision of mobile’s future:

Foodspotting and Delectable CEOs talking about the mobile first world:

Vintank’s CEO talks to me about a contextual future (they provide 1.1 million tweets a day to the wine industry):

Foursquare’s head of growth and analytics on contextual future:

MC Hammer (famous musician and entrepreneur) gives his tips for entrepreneurs as we head into a contextual future:

AOL’s CEO talks to me about AOL’s future in the age of context:

Nest’s CEO (and the guy who headed the iPhone effort at Apple) talks to me about the contextual future:

Geeking out with CEO of Apigee about contextual future:

Discussion with Maluuba’s CEO (contextual app for Android):

Discussion of future of customer service in contextual world with Harry Max (co-founder of wine.com):

Discussion of contextual customer service with the CEO of SocialAppsHQ:

Discussion of contextual social world with CMO of Salesforce:

Discussion of social streams in contextual world with CEO of Gnip at LeWeb:

Autodesk CTO on future in contextual world: