Mobile 3.0 arrives: How Qualcomm just showed us the future of the cell phone (and why iPhone sucks for this new contextual age)

[Image: Google Now screen shot]

The world just changed yesterday. You probably didn’t notice. But I guarantee strategists at Apple, Facebook, Amazon, Microsoft, and Google did.

What happened? Qualcomm shipped a new contextual awareness platform for cell phones.

Yesterday the Mobile 3.0 world arrived. The first mobile era was the standard old cell phone: you talked into it. The second mobile era was brought to us by the iPhone: you poked at a screen. The third era will bring us a mobile that saves us from clicking on the screen at all.

We’ve seen lots of precursors. Heck, Google itself, a couple of weeks ago, shipped something called “Google Now” that tells you stuff based on your context. “Hey, Scoble, you better leave for your next appointment because it takes 53 minutes to get there,” my new Nexus 7 tablet tells me. You can see the actual screen shot above.

But in the future your mobile device, whether it’s something you hold in your hand, like a smart phone, or wear on your face, like Google Glasses, will know a hell of a lot about you.

How?

Well, Qualcomm just shipped the developer SDK, called Gimbal.

This SDK talks to every sensor in your phone. The compass. The GPS. The accelerometer. The temperature sensor. The altimeter. Heck, we’ve known about sensors in cell phones for a while now. Here’s a New York Times report from May of last year.
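
To make that concrete, here’s a minimal sketch using plain Android platform APIs (not Gimbal’s own API, which I’m not reproducing here) that enumerates every sensor on a device and subscribes to one of them:

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

import java.util.List;

// Minimal sketch: list every sensor the hardware exposes, then subscribe to
// the accelerometer. This is the raw layer a fusion SDK like Gimbal sits on.
public class SensorTourActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);

        // Compass, accelerometer, ambient temperature, pressure, and so on.
        List<Sensor> all = sensorManager.getSensorList(Sensor.TYPE_ALL);
        for (Sensor s : all) {
            Log.i("SensorTour", s.getName() + " (type " + s.getType() + ")");
        }
    }

    @Override
    protected void onResume() {
        super.onResume();
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        if (accel != null) {
            sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Always unregister: polling sensors in software is what burns battery.
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values[0..2] hold acceleration on x, y, z in m/s^2.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```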

But now, thanks to this SDK, your smart phone will start to make sense of that data. Developers will have a single data pool on your cell phone to talk to. (Qualcomm was very smart about privacy: none of this data leaves your own cell phone unless you give it permission to.)
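
Here’s a toy sketch of that privacy model. Every name here is hypothetical, not Gimbal’s; the point is just that there is one local pool, and reads destined for the network fail unless the user has explicitly opted in:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the privacy model described above; these are not
// Gimbal's real class or method names. One on-device pool of context signals,
// and nothing is readable for upload unless the user has explicitly opted in.
public class ContextPool {
    private final Map<String, Object> signals = new HashMap<String, Object>();
    private boolean userGrantedSharing = false; // sharing is off by default

    public void put(String key, Object value) {
        signals.put(key, value); // stays on the device
    }

    public Object readLocally(String key) {
        return signals.get(key); // local reasoning is always allowed
    }

    public void grantSharing() {
        userGrantedSharing = true; // only ever called from an explicit user action
    }

    public Object readForUpload(String key) {
        if (!userGrantedSharing) {
            throw new SecurityException("User has not opted in to sharing");
        }
        return signals.get(key);
    }
}
```

The design point is that inference can still run on the phone while the raw signals never leave it.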

Today I was talking with Roland Ligtenberg, product developer at Qualcomm Labs, and while we talked I realized just what Qualcomm is up to.

See, if you do all this collection and analysis in software, there is a battery cost. Remember Highlight, my favorite app of SXSW (and, really, of the year)? Did you ignore it? Well, investors aren’t. Ron Conway told me that, aside from Pinterest, Highlight is his favorite new company. Mine too, because it showed me something no one had shown me before: a new kind of context, the people who are near me. Even so, it’s a lame app compared to what is coming, thanks to this Qualcomm SDK.

Qualcomm wouldn’t comment officially, but Roland told me that if you did all this in hardware there would be a lot less battery cost. So look for this SDK to come to your mobile phone (or other wearable computing devices, like Google Glasses) soon.

Want to see what other use cases are coming? Check out this answer on Quora (actually 28 separate answers from techies) about what the Google Glasses world will bring. (Really they are talking about contextual and wearable computing mashed together.)

To add to those answers: these new systems are going to know whether you are walking, running, or skiing. Whether you are shopping, working, or entertaining yourself (they’ll know whether you are in church, in a strip club, at school, at work, or driving). Thanks to the wifi and bluetooth radios, they can even know you are riding in your wife’s car rather than driving it. (That part is only available on Android, because Apple doesn’t let developers talk to the radios.)
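
How would the “riding in your wife’s car” trick work? Here’s a hedged Android sketch using a standard Bluetooth broadcast; the car’s hands-free address is a made-up placeholder you’d capture once when pairing:

```java
import android.bluetooth.BluetoothDevice;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Sketch of the "in the car" inference. Register this receiver for
// BluetoothDevice.ACTION_ACL_CONNECTED (in the manifest or via
// registerReceiver) with the BLUETOOTH permission. The address below is a
// made-up placeholder for the car's hands-free unit.
public class CarDetector extends BroadcastReceiver {
    private static final String WIFES_CAR_ADDRESS = "00:11:22:33:44:55"; // hypothetical

    @Override
    public void onReceive(Context context, Intent intent) {
        if (BluetoothDevice.ACTION_ACL_CONNECTED.equals(intent.getAction())) {
            BluetoothDevice device =
                    (BluetoothDevice) intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
            if (device != null && WIFES_CAR_ADDRESS.equals(device.getAddress())) {
                // We're in the car. Whether we're *driving* it is a second
                // inference, e.g. from the accelerometer or from which other
                // phones are present.
                onEnteredCar(context);
            }
        }
    }

    private void onEnteredCar(Context context) {
        // Hand the signal to the rest of the app.
    }
}
```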

Which brings me to why Apple sucks.

Apple does NOT give developers access to the Bluetooth and wifi radios. That is really going to hinder developers in this new contextual world.

Think about why your phone or Google Glasses might want to know you are in the kitchen versus sitting on your couch in the living room. The information that should automatically show up on your phone will be radically different. In the kitchen I’m in a food context. I want recipes, or healthy-living guides, or I want my device to track just how many Oreo cookies I’m eating (“hey, Scoble, you fat dude, this isn’t helping!”). We’re already doing this kind of quantified-self stuff with Fitbit, the Nike FuelBand, and other devices. My wife is already tracking everything she eats and does on her cell phone.
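
The kitchen-versus-living-room trick is essentially wifi fingerprinting: record the signal strengths each room sees once, then match live scans against those recordings. A rough Android sketch, with hypothetical room names and router addresses (a real system would train these fingerprints for you):

```java
import android.content.Context;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Room-level positioning from wifi signal strength. Record the RSSI each room
// sees once ("training"), then match live scans against those fingerprints.
// Room names and router addresses here are hypothetical.
public class RoomLocator {
    // room -> (access point BSSID -> expected signal strength in dBm)
    private final Map<String, Map<String, Integer>> fingerprints =
            new HashMap<String, Map<String, Integer>>();

    public RoomLocator() {
        Map<String, Integer> kitchen = new HashMap<String, Integer>();
        kitchen.put("aa:bb:cc:dd:ee:ff", -40); // strong: the router is nearby
        fingerprints.put("kitchen", kitchen);

        Map<String, Integer> livingRoom = new HashMap<String, Integer>();
        livingRoom.put("aa:bb:cc:dd:ee:ff", -65); // weaker through two walls
        fingerprints.put("living room", livingRoom);
    }

    public String guessRoom(Context context) {
        WifiManager wifi = (WifiManager)
                context.getApplicationContext().getSystemService(Context.WIFI_SERVICE);
        List<ScanResult> scan = wifi.getScanResults(); // needs wifi/location permissions

        String bestRoom = null;
        int bestError = Integer.MAX_VALUE;
        for (Map.Entry<String, Map<String, Integer>> room : fingerprints.entrySet()) {
            int error = 0;
            int matched = 0;
            for (ScanResult result : scan) {
                Integer expected = room.getValue().get(result.BSSID);
                if (expected != null) {
                    error += Math.abs(expected - result.level);
                    matched++;
                }
            }
            if (matched > 0 && error < bestError) {
                bestError = error;
                bestRoom = room.getKey();
            }
        }
        return bestRoom; // null if nothing matched any fingerprint
    }
}
```

That wifi scan is exactly the call an iPhone app can’t make today, which is the whole complaint here.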

Now, in the future our cell phones will know us at a very deep level. Already I’ve told Facebook more than 5,000 things I like. Check out my list. It’s public. On it you’ll see which startups I like. But also that I like Round Table Pizza. Think about that one for a moment.

In the future my cell phone will know I ordered a pizza. It will know when I get in my car. It will know who is in the car with me. And it will give me contextual data that makes my life better. For instance, on my to-do list I might have put “pick up a hammer at the hardware store.” It will know that Round Table Pizza is near the hardware store. It will know I have an extra 15 minutes. It can use Waze to route me to the hardware store first, tell me to pick up my hammer, and then head to Round Table to pick up that pizza. All while measuring how many steps I took (Nike Fuel points!) and telling me who has crossed my path. Oh, Joseph Smarr, who works at Google, is also at the Round Table? Cool! (He lives in Half Moon Bay too, so this could happen at any time.)
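
Strip that scenario down and it’s a simple rule: insert the errand when the detour fits your spare time. A toy sketch with illustrative numbers; a real app would ask Waze or a maps service for actual drive times:

```java
// Toy sketch of the routing rule above: slot in an errand when the detour fits
// the free time your calendar shows. Numbers are illustrative; a real app
// would get detour times from Waze or a maps service.
public class ErrandPlanner {

    static final class Stop {
        final String name;
        final int detourMinutes; // extra drive time if we add this stop

        Stop(String name, int detourMinutes) {
            this.name = name;
            this.detourMinutes = detourMinutes;
        }
    }

    /** True if the errand fits before the main destination. */
    boolean shouldInsert(Stop errand, int spareMinutes) {
        return errand.detourMinutes <= spareMinutes;
    }

    public static void main(String[] args) {
        ErrandPlanner planner = new ErrandPlanner();
        Stop hardwareStore = new Stop("hardware store (pick up hammer)", 12);
        int spareMinutes = 15; // the pizza won't be ready for 15 minutes

        if (planner.shouldInsert(hardwareStore, spareMinutes)) {
            System.out.println("Routing via " + hardwareStore.name
                    + ", then on to Round Table Pizza.");
        }
    }
}
```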

But when I get back, can my phone understand that I’m now in the dining room, eating? Or in the living room, ready to watch a sports show (it already knows what sports I like; think about the next Olympics, where it queues up the track and field finals for me to watch automatically)? Only if I don’t have an iPhone, because Apple hasn’t given developers access to the wifi and bluetooth radios, so apps can’t map out your house accurately.

Which gets me to what Facebook and Amazon could do to totally disrupt the smart phone market (both are rumored to be working on hardware). See, you shouldn’t work on hardware if you can only match what Apple has already done. You should work on it if you can totally blow away what Apple has done.

I bet that Amazon and Facebook are building a new kind of contextual device. One that already knows you. Facebook already knows what I read, watch, listen to, and much more thanks to its Open Graph API system. Amazon already knows what I read, watch, and buy, thanks to its commerce system.

Add these two companies to Qualcomm’s new contextual platform and you have a new world.

By the way, Qualcomm is a $95 billion market-cap company, it spends $3 billion a year on R&D, and its chipsets are probably inside the phone you are holding right now. So I take what it is doing very seriously.

So seriously that next week Forbes author Shel Israel and I will announce a new project all around contextual computing.

A new age just arrived. Mark yesterday in your calendar, and see you on Tuesday.

By the way, for those at Rackspace: this will eventually change everything about our business too. We’re well positioned, thanks to our move to supporting a totally open cloud, which will pay big dividends next year as developers need to build new infrastructure to deal with this contextual age. The cloud is about to turn contextual in a very big way. That’s why we need to keep up with what Amazon, Google, and the other players are doing here, and why we should start building support systems for this Qualcomm SDK now. It is that big a deal.

Watch this video for a taste of what’s coming in the new contextual age.

36 thoughts on “Mobile 3.0 arrives: How Qualcomm just showed us the future of the cell phone (and why iPhone sucks for this new contextual age)”

  1. Qualcomm’s labs and think tanks are creating amazing technologies and ideas. I am always impressed when I meet up with these guys. Yes, Gimbal is a game changer, and once again it shows us how superior an open approach is.

    P.S. The next big thing in mobile hardware should be audio! Qualcomm has been working on new speaker setups for handsets for some three years. When I met up with them in San Diego about a year ago, they had a prototype with eight speakers that played surround sound so convincing it kept me looking for hidden speakers in the meeting room. Stunning!


  2. Here’s what you’re missing. Having it be a set of APIs for various developers to write to sounds cool and all, but in reality it will mean a disjointed experience on whatever devices this lands on. Contrast that with whatever Apple will end up doing (something that will be integrated and make sense) and you’ll see actual change. I think I’m on safe ground predicting that 12 months from now, no one will remember anything about this. Instead, we’ll be comparing Google Now and the next revision of Siri.


    1. That assumes it’s not what Amazon or Samsung will integrate for their tech.
      Google Now is great, but it will land on Android 4.1 only, and that will hit only a minor set of devices. So there is, at least on the Android side, a large spectrum of devices that would love to see this in use.


  3. Interesting stuff, though I kind of agree with Steve Dondley that it’s very premature to be acting too certain about this. That, and I think people like the control of touch interfaces. I don’t see a lot of advantage in giving that up for most applications. After all, touch is the most richly sensory, accurate, and reliable way we have to interact with things, so why is giving that up seen as an advantage? (Just witness how blind people rely on touch; what they can do is amazing when they have to rely upon it much more than sighted people do.)

    Also, David Hilditch: actually, Apple has proven that their closed approach is better in most respects, which is why more companies are imitating them (like they imitate everything else that Apple does…).

    And Nelson Saenz: I wouldn’t agree that Apple is worlds behind, yet I agree that they don’t always lead in every case. It’s a great company, but it’s not perfect. I like that Google and other companies also lead with innovation. Competition is good, especially for a company as dominant as Apple. It’s also free R&D for Apple: they can observe what others do, and pick and choose what they want to implement over time. Sometimes I wonder if it’s even a deliberate strategy of theirs…


  4. So all intelligent devices can be connected and aware of each other in a relational database sense. But will it be object oriented such that each object’s attributes can be published and known? And could an automatic mode be set such that objects are free to interact with each other without human intervention? This portends a critical mass for post-human consciousness.


  5. PEOPLE, the whole piece is a tongue-in-cheek joke about the absurdity of the growing dependence on mobile devices.


  6. However cool it sounds to have my phone guide me through life, playing the shows I like, having me meet the people I like (if I’m in the mood for that), and ordering the ingredients for my favorite dish, it doesn’t feel completely right that it makes my decisions for me. Because that’s what it will eventually do. It will tell me what to do, based on what it knows about me and my environment. Although the resulting actions will most of the time be better than what I would have decided, it feels a bit like giving up my autonomy. The only thing I have to do to be happy (besides following instructions) is express what I like. And not even that, because my phone will no doubt be able to ‘read’ me.

    I will probably want such a phone; I’m just not sure if I want to want it.


  7. Good analysis… until you hit your (usual) Apple rant. Not giving access to Bluetooth and wifi is why Apple sucks? Come on! That’s software. They can change that with one minor update to iOS. And unlike Google’s Android users, iPhone users actually upgrade their phone software.
    After all, I am happy that access to more phone features hasn’t been granted to apps yet. Not everyone wants to share his private life, phone book, private pictures, notes, location, emails, calendar, and text messages with some Bermuda-incorporated, hippie- (or terrorist-) run company outside of the US, like you do. Just take a look at our next president (Mitt, that is): he doesn’t even let us know his earnings and shareholdings. Maybe he is indeed smarter than the rest of us! So let’s first make sure I am aware of what is uploaded and shared before going all in. I seriously hope that Android will also change its rights management.


  8. Imagine your device routing you through the nearest CVS to get cheap Viagra, or telling you to get McDonald’s (because they paid for a point of sale) instead of the salad, since you don’t have time before your meeting.
    The problems are never with the technologies. The problems are with the false promises, or with the system that sells to the marketer who is willing to spend the most for a below-par experience.


  9. And this “integration” of APIs is what is “embedded” inside iOS and has been available for years now. It’s easy for devs (I’m one of them) to use them, and they aren’t as limited as you think. Qualcomm is playing a marketing game. That’s all. And meanwhile real implementations are working on iOS.


  10. @James Robertson
    I think you’d be right, except for the fact that all of this is being driven by MONEY. Up until now, companies like Google, Twitter and Facebook have been amassing gigantic silos of data of every type on their users by offering expensive-to-develop, costly-to-provide, FREE services for the world. These services have intertwined themselves to become integral parts of our lives, have spawned countless cottage support businesses and, dare I say it, have become essential to the lives of many consumers. The technology is useful, relevant, flexible and desirable for people of many colors, races and creeds.

    But now that we have the mechanisms to extract and gather the data, and users are inputting the data at an ever-increasing pace, what are we going to DO with the data? The “Google” method (using data to improve search) only goes so far and is mainly aimed at improving services already in practice. In its current form, search is flawed by too few datapoints. What you are looking for is only a part of the answer. Microsoft jokingly highlighted one of the shortcomings of search (skewed context), and without enough datapoints, not only is it not helpful, but it can be harmful, embarrassing and even dangerous. We can gather more data (by just asking for it, it seems), but if there’s one thing we (should have) learned from Web 1.0, it’s that cool technology for cool technology’s sake IS NOT a sustainable business model.

    There is, however, one thing that no matter what service or product you develop you will ultimately need: advertising. Advertising (even mobile and web advertising) has had a difficult time remaining relevant in multiple and changing contexts. What is relevant in a grocery store is not relevant in a movie theater. What you want to see out in a bar with your friends doesn’t help you when you are trying to pick out software at work. We are already sharing some data, but it isn’t enough to remain relevant at ALL TIMES. As Robert has stated, it isn’t smooth enough. Asking me questions is only going to raise my resistance, because I am AWARE that I’m sharing information.

    The sensors on our mobile devices (be it a Microsoft tablet, Google phone or Mac laptop) are constantly polling and collecting information, and we don’t give it a second thought. The only time we see it is when the “location sharing” confirmation dialog pops up when we install something, which we IMMEDIATELY click (it has become the modern-day EULA). After that, nobody thinks about it. Having a little box that follows you around and sends data out is like having a little tattletale “spy” that follows you everywhere. And we KNOW how valuable that is! Right now we have (to some degree) the option to withhold data or skew data the way we want. We don’t HAVE to tell Facebook everything (and often lie to it) and can carefully craft our tweets to maximize effectiveness. The thing is, you can’t hold a phone differently so the GPS reads somewhere else without malicious, deliberate intent, making it more difficult NOT to share than it is to share. When NOT sharing becomes much more of a hassle than sharing, accurate, relevant data will be easily obtainable. Add to that multiple sensors (and multiple points of failure), and you have a MUCH better chance at delivering advertising that will be consumed and is more effective.

    So, yes, it might not happen in 12 months, but it is coming. There are too many opportunities for advertisers, too many startups that need to start showing some kind of profit, and too much push-back from the big mainstream media clamoring for results from this “new web medium” for it not to.


  11. Awareness has always been a good idea, but there’s one thing they’re missing here, and that’s the harsh license. I expect a market full of 3rd-party awareness competitors, with an open-source solution eventually becoming the most popular. Google and Apple will eventually have to provide similar capabilities as a core OS service. IMHO, MS and BB are already out of the game.


  12. Core Location has had region monitoring since iOS 5.0. It uses wifi and Bluetooth info to tell if someone has left or entered certain areas. You don’t need access to the radios, just the interpretation, which is similar to what Qualcomm did. Apple is ahead of the curve; geofencing apps have been out for a while. It’s just that no one has done a decent personal assistant.


  13. “None of this data leaves your own cell phone unless you give it permission to”… until an app demands you do so before it will run. I already have a weather app that insists it needs access to my messages. Ramp it up, spyware…


  14. As cool as this sounds, what price are we paying for this loss of privacy and can we have an fdisk command line option? Part of me longs for the old rotary phone and the leisure of snail mail.


  15. The fly in the ointment is Google & Facebook.

    My take from this article is that this tech relies on info it can collect from my Facebook profile and possibly from data in Google apps. Two concepts totally alien to these companies are privacy and security. Many people are perfectly comfortable living in a glass house and having their info sold to advertisers. Clearly this seems to be the case with Mr. Scoble. Me, not so much. I do not have a FB account, I do not use Google at all, nor do I envision changing my modus operandi in the future.

    I am not a fan of Apple’s walled garden and the push towards the “cloud” either. Until such time as a device can provide these services strictly based on private data INSIDE the device, not shared by any site, I will continue doing things old school, thanks.


  16. Without wifi and Bluetooth radios, this can’t be done and the iPhone is useless?

    What kind of mosquito brain do you have? This argument is just a crapload of bullshit!

    I don’t see any point in going beyond Location, GPS, Camera, Compass, and “User”; that is all that’s needed to do something “more” than this crap has to offer.


  17. One word.
    Creepy.
    This is a golden opportunity for businesses to bombard any smartphone with adverts.
    It’s bad enough already wandering around any big city with Bluetooth turned on.
    Amazing technology will be put to incredibly annoying use.
    We, unfortunately, live in interesting times.


  18. This is amazing. After my presentation at ARE2011 on contextual awareness, sensors, and localized, personalized, relevant data, I was brainstorming with the Qualcomm SDK guys on this topic. Wow, they did move away from the traditional search approach. Congrats!


  19. Web 3.0 is a great paradigm, Robert.

    A distinction is important in my view: the social graph is explicit, while the Google Now knowledge graph is implicit. One’s social graph is a small subset of one’s knowledge graph; it’s driven by active likes on the Facebook platform. The knowledge graph is deeper, wider, broader, driven by a user’s browsing and search habits, and more encompassing.

    Google has much better ‘context’ on a given user, derived from ‘search’ and its ancillary ecosystem products.

    Building an API to harvest a centralized database in hardware is interesting. Apple would obviously be excited and onboard if the implementation results in significant battery-life savings. Google’s access to bluetooth and the network is one of the reasons Android devices have such miserable battery life. But a centralized DB would still require API calls, so unless we’re looking at some kind of cached, static, low-energy solution, I’m not sure of the savings there…

    Thanks for the post.
    k

    Like Chien-Yu says above, the maturity and depth of the platforms vary.


  20. “There you are, and so is Gimbal.”

    Creepy. Who wants to be stalked by people only interested in taking their money?


  21. Scoble, nice! Remember, data is the robot.
    For techs looking to explain to executives and management why this is important, how it works, and the estimated monetized benefit, hand managers Customer Worthy (on Amazon, or get free pieces at the website). The book links this discussion with current business functions, capabilities and… customers.

    The CxC Matrix in the book starts several layers above your Qualcomm SDK reference, but prescribes a messaging framework which delivers content or rules to each interaction (on behalf of the customer, machine, network, etc.). Look here: http://www.customerworthy.com/cxc-matrix

    Take a look at Customer Worthy as a means to articulate how to leverage these new capabilities, and let me know what you think. (Oh, and there’s a data schema for the messaging component too.)

    Regards,
    Michael R Hoffman, Author Customer Worthy
    Director Paragon Solutions, NJ


  22. Just what the world needs: another API from a chip vendor.

    Let’s assume Apple and Google aren’t run by idiots ready to open… heck, passionately throw off the kimono for Qualcomm and wrap its code. In that case, for the developer the addition of a hardware API only adds another layer of complexity, and results in more debugging time, a longer time to market, more chances of breaking the app, and ruining the user’s day. The user experience derived from a hardware API such as Qualcomm’s will be subpar, even compared to the Android OS, never mind iOS, at least for a time. Now, what would compel a developer to do this to the company and its users?

    The level of integration of which you dream is coming, but not from a chip vendor. Yes, Qualcomm controls the hardware somewhat. But Apple controls the platform utterly.


  23. Google Glass could well be the ‘iPhone’ of Mobile 3.0. Plus, you can understand it if both the rumoured Facebook and Amazon Smartphones follow this Contextual Route.

    Ultimately, what is really needed to make this Mobile 3.0 Landscape take off is the full integration of Global Businesses, which are all still stuck in Web 1.0.
    Global Businesses still follow the Google Creed of only using their Search and Advertising Model to attract Global Consumers.
    The Desktop Platform rules, and Social Media experiments with Facebook Pages and Twitter Accounts for Businesses are half-hearted efforts to appear ‘hip and trendy’.

    Global Businesses are quite happy to be an ‘App’ on popular Smartphone and Tablet Platforms, but they still haven’t decided to be a ‘Real-time’ business there.
    What is needed is a Smart Platform that can disrupt Global Businesses from their cosy Desktop Web Environment and turn their online Consumer Operations 360.
    Right now I am working on the DNA of that Smart Platform.


  24. While the potential to accomplish this is ever increasing with more powerful, more connected phones loaded with sensors, it’s never going to be the idealized version depicted in the video. A real person’s life is fragmented across many devices, some of which won’t allow the capturing of data. For example, you order your pizza using an old-style land line phone. Neither your cell phone nor Facebook is aware of that expressed preference.

    Not to mention that Qualcomm is claiming that all this will be happening inside the phone. Does that mean it won’t be pulling data from Facebook and other online sources of your preferences? Seems contradictory.

    Sure, this contextual stuff will happen, but it’s akin to speech recognition. We went through hundreds of cycles of thinking it was just around the corner, yet there never was a distinct inflection point where it got orders of magnitude better, like Scoble seems to be predicting for this context technology. It just slowly, progressively got better.

    The statement “if you did all this in hardware there would be a lot less battery cost” is guaranteed to be vague and inaccurate. You *can’t* do all this in hardware, unless you build an insane amount of hardware. You can offload pieces of the problem to hardware, like having a super-low-power, slow microcontroller monitor the GPS to see when the phone arrives in one of the “geofenced” contexts and then wake up the big CPU, but you’re still using a ton of software.

    This blog post seems half intended as a way to goad Apple into providing an API. The reality is that if this takes off, Apple will provide an API to its app developers for accessing the radios. If it is functionality Apple wants, or functionality that market pressure makes Apple want, then it’ll be supported.

    The danger in choosing a more closed platform is not that you won’t see support for functionality that’s heading toward mainstream, but that you’ll never have the ability to do the “long tail” things that never rise to Apple’s attention.

    You like Apple products for their appliance-like behavior, so you should be prepared to accept the limits that come with that model.


  25. Not entirely sure I want my phone encouraging me to buy more stuff I don’t need (we could all do with buying less stuff).

    However, combine this with an internet of things and it becomes useful.

    Consider the running-out-of-gas scenario. Currently, I have a phone with internet access, a GPS which knows where I am, and a car that’s running out of gas. Finding the nearest gas station with the cheapest price is in no way automated, because these smart devices don’t have APIs. Now, if my car could let my phone know it was low on gas, my phone could check for the cheapest gas and tell my GPS where to route me. That would be cool. Even cooler if it knew I also needed certain groceries, because it could also speak to my fridge and therefore pick the gas station which had those items in stock. And even cooler if it knew my schedule and told me stuff like, “If you don’t get gas tonight you’re going to need to start your commute 10 minutes early tomorrow, so I’ll set your alarm for you so you can leave on time.”


  26. Scoble, mark this down as one more time when you are completely, utterly wrong. I don’t think you get technology; you just jump up and down like a monkey without even getting the basics. Time for you and Arrington to leave the industry to the experts.


  27. And she’s getting and looking at / mentally processing offers while she’s driving home? I’d rather not see that…

