Saga: the contextual app that shows us, today, what Google Glasses could do

Saga just launched.

What is it?

A new kind of mobile companion. It studies what you do. How you do it. When you do it. Where you do it. Who you do it with. From all of that context it builds an intelligent companion.

Yes, it’s iPhone-only today, but it will come to Android soon.

Here founder Andy Hickl shows me the app and explains what it does and how it’ll protect my privacy.

After you run it for a while, this app tells you all sorts of stuff about yourself and about the day ahead.

It competes with a bunch of things, including Siri, PlaceMe, and others. I’ll give a fuller report in about a week.

Speaking of apps that gather your contextual information, like Saga does, we’re writing a book on this new genre of apps and services called “The Age of Context.” Shel Israel, over on Forbes, posted our table of contents. We’d love to know what you think about that. Are we on the right track? If not, what do we need to put in the book or take out?

The press is covering this. Here’s VentureBeat’s report. Here’s Fast Company’s. And here’s GigaOM’s.


The coming automatic, freaky, contextual world and why we’re writing a book about it

Robert Scoble and Google Co-Founder Sergey Brin at Last Night's Dinner in the Dark in San Francisco

First, the short version of today’s news: Shel Israel and I are collaborating on a book titled The Age of Context: How It Will Change Your Life and Work.

The long version:

A new world is coming. It’s scary. Freaky. Over the freaky line, if you will. But it is coming. Investors like Ron Conway and Marc Andreessen are investing in it. Companies from Google to startups you’ve never heard of, like Wovyn or Highlight, are building it. More than a couple of new ones are already on the way; you’ll hear about them over the next six months.

First, the trends. We’re seeing something new happen because of:

1. Proliferation of always-connected sensors.
2. New kinds of cloud-based databases.
3. New kinds of contextual SDKs.
4. A maturing of social data that nearly everyone is participating in.
5. Wearable computers and sensors like the Nike FuelBand, Fitbit, and soon the Google Glasses.

More on these trends later in this post.

This new, automatic world is already arriving. Highlight tells you when people who are using Highlight are nearby. Automatically. Google Now tells you to leave early for your next appointment because traffic is bad. Automatically. PlaceMe checks me into every place I enter. Automatically (including places I might not want others to know about, like churches and strip clubs).

We’ve also seen several other examples that are coming over the next few weeks. A TV guide that shows you stuff to watch. Automatically. Based on who you are. A contextual system that watches Gmail and Google Calendar and tells you stuff that it learns. A photo app that automatically shares photos with your friends when you photograph each other together. And then there’s the Google Glasses (AKA Project Glass) that will tell you stuff about your world before you knew you needed to know it. There is a new toy coming this Christmas that will entertain your kids and change depending on the context they are in (it will know it’s a rainy day, for instance, and change its behavior accordingly). Retailers, and even Rackspace, are developing new kinds of algorithmic customer support that will answer your questions differently depending on your context (we are working on ways to figure out that you aren’t happy before you even call and yell at us; Rackspace has hired one of the world’s experts here, Harry Max, but we aren’t the only ones thinking about this). We’ve already talked to automobile companies that are thinking about this in a big way (and even startups like Waze are trying to show you stuff about the road before you get there).

Add to that new kinds of software developer kits coming from major companies like Qualcomm (AKA Gimbal), which will gather this new kind of contextual data, send it off to cloud servers, and let developers build new kinds of apps that hook up, in real time, to all sorts of databases about us and the businesses we buy from or work for, and bring us back interesting smart alerts and more.
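
To make that pipeline concrete, here’s a minimal sketch of the shape such an app could take: sensor readings get condensed into a context snapshot, and a rule layer turns the snapshot into proactive alerts. All the names and rules here are my own invention for illustration, not Gimbal’s actual API:

```python
from dataclasses import dataclass

@dataclass
class ContextSnapshot:
    """One reading of the user's context, assembled from phone sensors and services."""
    speed_mph: float             # derived from GPS
    place_category: str          # e.g. "home", "office" (from a geofence lookup)
    minutes_to_next_event: int   # from the user's calendar

def smart_alerts(ctx: ContextSnapshot) -> list:
    """Hypothetical rule layer: turn raw context into proactive alerts."""
    alerts = []
    if ctx.place_category == "office" and ctx.minutes_to_next_event < 30:
        alerts.append("Leave soon: your next appointment is in under 30 minutes.")
    if ctx.speed_mph > 55:
        alerts.append("Driving detected: muting notifications.")
    return alerts

snapshot = ContextSnapshot(speed_mph=3.0, place_category="office", minutes_to_next_event=20)
print(smart_alerts(snapshot))
```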

Our announcement this morning: The Age of Context: How It Will Change Your Life and Work

Anyway, it’s very clear to us that there’s a major new trend underway, so today I’m announcing that I’m writing a book together with my pal, the Forbes author Shel Israel (seven years ago we wrote “Naked Conversations,” the book that kicked off the social age; it’s still used in universities and as a guide for corporate communicators). His take on the book is up on the Forbes blog, or will be soon.

We will take on the fears of this new world and explain why users will end up handing over their most private information. You will store everything you do in life in these systems and, to most people, that is extremely scary. Yes, these systems can even tell when you are having sex and, worst of all, will know what brands you like and where your favorite gas station is. That’s way over the freaky line for most people, at least today, but this future is coming, and coming fast, thanks to a new range of sensors and contextual SDKs (software developer kits) that will sit on our smart phones and the other devices we’ll wear.

We will also interview dozens, if not hundreds, of businesses about how they are preparing for this age of context. Already we’ve started this process, talking to car manufacturers, wineries, and many startups.

Shel and I are both seeing the same thing from different points of view. For the past few months he’s been working for Forbes, heading around the world to interview business executives. He has access to companies that I don’t, especially since he’s traveled the world after writing books on the social age, on Twitter, and on how to give presentations (he used to help startups get ready for big demos).

Thanks, too, to our friends Buzz Bruggeman and Andy Ruff who introduced us years ago and told us to write our first book.

The book will be written over the next nine to 12 months; we’re hoping to publish it sometime in the first half of 2013, right around the time the first Google Glasses come out. Why then? Because that’s when the Contextual Age will be very clear to everyone, and businesses will need to figure out what to do about these shifts. Just today I talked with the founder of Informix (he now runs the Connection Cloud, which is going to connect all of the cloud systems inside enterprises together and give enterprise developers the tools to build real-time contextual systems that will do for businesses what the Google Glasses will do for consumers).

How will the book be funded?

Right now we don’t quite know how the book will be funded. The book industry is in extreme turmoil and we can’t bankroll it all ourselves. Just a single trip to see, say, what Ford is doing in Detroit costs thousands of dollars. Double that if we both go. We have some ideas:

1. Get a corporate sponsor to bankroll the book and the other media that will spring out of it. That’s happened with other books, like the Day in the Life series of photo books (IKEA and others sponsored those). If you’d like to participate, please get ahold of us at scobleizer@gmail.com.

2. Get a traditional book publisher to give us an advance to fund producing the book (that’s how we did Naked Conversations). Unfortunately, though, the book industry is less willing to do that at a significant level.

3. Fund it with a crowd-funding site like Kickstarter (there are several to choose from; Kickstarter is just the most popular).

4. Some hybrid approach, or some approach where we sell access to chapters. There’s lots of innovation to come in the book industry, thanks to lots of people reading books on iPads, Kindles, and other mobile devices.

5. Fund it by doing speaking gigs or developing conferences around the idea. Other authors have done this, charging $20,000 per speaking gig, which they use to fund book development or to pay off the credit cards that get overused while investing time in a book.

There are other ideas, too, and we’re interested in hearing them (please write me at scobleizer@gmail.com). It’ll cost about $100,000 to do this book right over the next six to nine months.

So, what’s Rackspace’s role in this?

Rackspace is very excited by this new world. We are already seeing our customers building contextual apps, systems, and infrastructure, and we want to help further innovation in this space by providing a set of open source and open cloud technologies that will enable developers to innovate faster. Open source is a big trend driving this book, and investors are behind it in a big way too; see Andreessen Horowitz’s $100 million investment in GitHub, for instance.

Rackspace is funding my research behind this book (I’m a full-time employee of Rackspace), and, indeed, I’m already doing interviews, like this one with the CEO of Couchbase, the database that Zynga runs on.

Back to the trends

Let’s talk about the trends introduced earlier.

Proliferation of always-connected sensors.

Your cell phone alone has these sensors: video, audio, GPS, gyroscope, accelerometer, compass, gravity, Wi-Fi, Bluetooth, and possibly temperature and barometer. All of these sensors can be used to figure out what context you are in. Are you walking, driving, skiing, sitting? These sensors can already tell (companies like Alohar are already studying them and sending that data up to the cloud).
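
To illustrate how an app can turn those raw readings into a context guess, here’s a toy classifier that combines GPS speed with accelerometer jitter. The thresholds are invented for illustration; real systems like Alohar’s use far more sophisticated models:

```python
import statistics

def guess_activity(gps_speed_mph: float, accel_samples: list) -> str:
    """Toy context classifier: GPS speed plus accelerometer jitter.

    accel_samples holds recent accelerometer magnitudes (in g); their variance
    is a crude proxy for how much the phone is bouncing around.
    """
    jitter = statistics.pvariance(accel_samples)
    if gps_speed_mph > 25:
        return "driving"
    if gps_speed_mph > 8:
        return "skiing or cycling"  # too fast to walk, too slow for a car
    if jitter > 0.02:
        return "walking"
    return "sitting"

print(guess_activity(2.0, [1.0, 1.3, 0.8, 1.2, 0.9]))  # -> walking
```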

New kinds of cloud-based databases.

There are new kinds of cloud-based databases. Take a look at Firebase, for instance; my interview with its founders is at http://www.building43.com/videos/2012/05/07/firebase-syncing-data-between-clients/. It’s a real-time system that enables new kinds of applications to be written. Add to that Couchbase, which already runs Zynga and is about to see a major revision that will allow real-time searching. Or even other kinds of social databases, like Pearltrees, which lets you organize data into contextual trees (interview with them coming soon).
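
What makes the real-time ones interesting is that clients subscribe to data and have changes pushed to them, instead of polling a server. Here’s a minimal sketch of that pattern; the RealtimeDB class is a stand-in of my own, not Firebase’s or Couchbase’s actual client library:

```python
from collections import defaultdict

class RealtimeDB:
    """Stand-in for a real-time cloud database: writes are pushed to subscribers."""
    def __init__(self):
        self._data = {}
        self._listeners = defaultdict(list)

    def on_change(self, key: str, callback):
        """Register a callback that fires every time `key` is written."""
        self._listeners[key].append(callback)

    def set(self, key: str, value):
        self._data[key] = value
        for cb in self._listeners[key]:
            cb(value)  # push the change to every subscriber, no polling

db = RealtimeDB()
db.on_change("scoble/location", lambda v: print("UI updates instantly:", v))
db.set("scoble/location", "Half Moon Bay")
```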

New kinds of contextual SDKs.

Qualcomm’s Gimbal (https://www.gimbal.com/) is only the first. It lets you build new kinds of geofences, interest sensing, and other features that will enable developers to bring this new world to us.
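
At its simplest, a geofence is just a point-in-circle test against your GPS coordinates. Here’s a minimal sketch; the haversine distance formula is standard, but the coffee-shop fence is a made-up example (and Gimbal’s real API surely does much more):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m) -> bool:
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# Hypothetical 100-meter fence around a coffee shop
print(inside_geofence(37.4635, -122.4286, 37.4630, -122.4290, 100))  # -> True
```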

A maturing of social data that nearly everyone is participating in.

Facebook, Twitter, LinkedIn, Quora, and Google+ are all maturing very quickly. Facebook, in particular, has built APIs that enable new kinds of apps. Highlight shows the way here: with it you can see people near you who are already using the app, and its display shows the Facebook likes and friends you have in common with each person.
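
Under the hood, that “in common” display is just a set intersection over profile data pulled from the social APIs. A sketch with made-up data:

```python
me = {"likes": {"Round Table Pizza", "Rackspace", "Highlight"},
      "friends": {"Shel", "Buzz", "Andy"}}
nearby_person = {"likes": {"Highlight", "Rackspace", "Pinterest"},
                 "friends": {"Shel", "Joseph"}}

shared_likes = me["likes"] & nearby_person["likes"]        # set intersection
shared_friends = me["friends"] & nearby_person["friends"]
print(f"You share {len(shared_likes)} likes and {len(shared_friends)} friends:")
print(shared_likes, shared_friends)
```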

Wearable computers and sensors like the Nike FuelBand, Fitbit, and soon the Google Glasses.

When you go to the doctor in the future, he or she will be able to see your vital statistics, like weight and exercise activity, thanks to sensors many of us are already starting to carry around or use. I have a Fitbit scale that doesn’t just weigh me but shows me my BMI (Body Mass Index). Just so I know how obese I am. Soon other sensors and contextual systems will arrive, too. I used one from Empatica while on stage at the Next Web Conference. It’s a galvanic skin response sensor that measures my emotions in real time. Imagine how companies could use this to improve customer support! (I’d love the airlines I use to see my graph as I interact with their employees, for instance.)
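
For the curious, the BMI that scale displays is a one-line formula: weight in kilograms divided by the square of height in meters. A quick sketch with illustrative numbers (not mine!):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative numbers only
print(round(bmi(weight_kg=95.0, height_m=1.85), 1))  # -> 27.8
```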

Add these trends together with the major announcements made by Google (which already shipped Google Now and has previewed Project Glass at its developer conferences) and the reactions I’ve gotten to posts like this one written last week, where hundreds of comments have come in, and you see there’s a need for a book to help everyone understand what’s going on and how to take advantage of the new contextual age.

The role of my blog is going to change

Starting today I’m going to focus my blog here totally on this new project and what I’m seeing in the world over the next nine months. There are plenty of other places to watch me publish the other things in my life (I’d recommend subscribing to me on Facebook, which is the best place to follow me, since you’ll see my Quora answers, all my posts on Google+, my SoundCloud podcasts, my photos posted in various places, including on Facebook, all the videos I post on various YouTube channels, and, of course, my blog posts here).

Next steps

We’re already working on the book, and Shel will publish early versions of chapters on Forbes, where we can gather feedback about them. When we wrote Naked Conversations, putting our work under the harsh eye of the public dramatically improved the book. People around the world gave us critiques, added new ideas and companies, and even corrected our grammar. That process was totally unfamiliar to the book industry back then (we forced our publisher to accept it, rather unwillingly I might add), and it’s interesting to note how few authors are using that technique seven years later. It’s invaluable, and you will definitely be part of every step of this process (which is why we’re announcing that we’re working on this now).

Anyway, read Shel’s post over on Forbes, which adds more details about this project, and let us know what you think.

PHOTO CREDIT: Photo of me and Google co-founder Sergey Brin, taken by Thomas Hawk. Sergey is wearing the Project Glass prototypes that Google will release to developers sometime in 2013 (I’m the 107th purchaser).

Mobile 3.0 arrives: How Qualcomm just showed us the future of the cell phone (and why the iPhone sucks for this new contextual age)

Google Now screen shot

The world just changed yesterday. You probably didn’t notice. But I guarantee strategists at Apple, Facebook, Amazon, Microsoft, and Google did.

What happened? Qualcomm shipped a new contextual awareness platform for cell phones.

Yesterday the Mobile 3.0 world arrived. The first mobile era was the standard old cell phone: you talked into it. The second mobile era was brought to us by the iPhone: you poked at a screen. The third era will bring us a mobile that saves us from clicking on the screen.

We’ve seen lots of precursors. Heck, Google itself, a couple of weeks ago, shipped something called “Google Now” that tells you stuff based on your context. “Hey, Scoble, you better leave for your next appointment because it takes 53 minutes to get there,” my new Nexus 7 tablet tells me. You can see the actual screen shot above.

But in the future your mobile device, whether it be something you hold in your hand like a smart phone, or wear on your face, like Google Glasses, will know a hell of a lot about you.

How?

Well, Qualcomm just shipped a developer SDK called Gimbal.

This SDK talks to every sensor in your phone. The compass. The GPS. The accelerometer. The temperature sensor. The altimeter. Heck, we’ve known about sensors in cell phones for a while now. Here’s a New York Times report from May of last year.

But now, thanks to this SDK, your smart phone will start to make sense of the data. Developers will have a single data pool on your cell phone to talk with (Qualcomm was very smart about privacy: none of this data leaves your own cell phone unless you give it permission to).
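
Here’s the shape of that privacy model in a few lines: one on-device pool that every sensor feeds, with an explicit permission gate in front of anything leaving the phone. This is my sketch of the idea as Qualcomm describes it, not actual Gimbal code:

```python
class SensorPool:
    """On-device store for sensor readings; nothing leaves without permission."""
    def __init__(self):
        self._readings = {}            # sensor name -> latest value
        self._cloud_permission = False

    def record(self, sensor: str, value):
        self._readings[sensor] = value  # stays on the device

    def grant_cloud_permission(self):
        self._cloud_permission = True   # the user explicitly opts in

    def upload(self):
        if not self._cloud_permission:
            raise PermissionError("User has not opted in to sharing sensor data.")
        return dict(self._readings)     # only now may data leave the phone

pool = SensorPool()
pool.record("gps", (37.46, -122.43))
pool.grant_cloud_permission()
print(pool.upload())
```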

Today I was talking with Roland Ligtenberg, a product developer at Qualcomm Labs. While talking with him, I realized just what Qualcomm was up to.

See, if you do all this collection and analysis in software, there is a battery cost. Remember Highlight? My favorite app of SXSW (and really of the year). Did you ignore it? Well, investors aren’t ignoring it. Ron Conway told me that, aside from Pinterest, Highlight is his favorite new company. Mine too, because it showed me something no one else had shown me before (a new kind of context: the people who are near me). It actually is a lame app compared to what is coming, thanks to this Qualcomm SDK.

Qualcomm wouldn’t comment, but Roland told me that if you did all this in hardware, there would be a lot less battery cost. So look for this SDK to come to your mobile phone (or other wearable computing devices, like Google Glasses) soon.

Want to see what other use cases are coming? Check out this answer on Quora (actually 28 separate answers from techies) about what the Google Glasses world will bring (really they are talking about contextual and wearable computing, mashed together).

To add to those answers: these new systems are going to know whether you are walking, running, or skiing. Whether you are shopping, working, or entertaining yourself (it knows whether you are in church, in a strip club, at school, at work, or driving). Thanks to the Wi-Fi and Bluetooth radios, it can even know you are riding in your wife’s car, not driving it. (This is only possible on Android, because Apple doesn’t let developers talk to the radios.)
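
How would a phone know whose car it’s riding in? One simple trick, possible where apps can talk to the radios: every Bluetooth radio broadcasts a unique hardware address, so the phone can match what it sees against vehicles it has seen before. A sketch with fabricated addresses and a pretend scan result:

```python
# Fabricated example addresses; a real app would learn these over time.
KNOWN_CARS = {
    "A4:5E:60:11:22:33": "my car",
    "F0:99:B6:44:55:66": "my wife's car",
}

def identify_car(visible_bluetooth_addresses: list) -> str:
    """Match nearby Bluetooth hardware addresses against known cars."""
    for addr in visible_bluetooth_addresses:
        if addr in KNOWN_CARS:
            return KNOWN_CARS[addr]
    return "unknown vehicle (or not in a car)"

# Pretend this list came from a Bluetooth scan the OS allowed us to run:
print(identify_car(["F0:99:B6:44:55:66", "11:22:33:44:55:66"]))  # -> my wife's car
```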

Which brings me to why Apple sucks.

Apple does NOT give developers access to the Bluetooth and Wi-Fi radios. This is going to really hinder developers in this new contextual world.

Think about why your phone or Google Glasses might want to know you are in the kitchen versus sitting on your couch in the living room. The information that should automatically show up on your phone will be radically different. In the kitchen I’m in a food context. I want recipes, or healthy-living guides, or I want my device to track just how many Oreo cookies I’m eating (“hey, Scoble, you fat dude, this isn’t helping!”). Already we’re doing this kind of quantified-self stuff with the Fitbit, Nike FuelBand, and other devices. My wife is already tracking everything she eats and does on her cell phone.
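
Kitchen versus living room is exactly the kind of problem Wi-Fi signal strengths can solve: record what the radios see in each room once, then match the current reading against those fingerprints. A toy version, with invented numbers:

```python
# Signal strength (dBm) seen from two home access points, recorded per room.
ROOM_FINGERPRINTS = {
    "kitchen":     {"ap_upstairs": -60, "ap_garage": -75},
    "living room": {"ap_upstairs": -48, "ap_garage": -82},
}

def guess_room(current: dict) -> str:
    """Nearest-neighbor match of the current Wi-Fi reading to room fingerprints."""
    def distance(fingerprint):
        return sum((current.get(ap, -100) - dbm) ** 2 for ap, dbm in fingerprint.items())
    return min(ROOM_FINGERPRINTS, key=lambda room: distance(ROOM_FINGERPRINTS[room]))

print(guess_room({"ap_upstairs": -58, "ap_garage": -77}))  # -> kitchen
```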

Now, in the future our cell phones will know us at a very deep level. Already I’ve told Facebook more than 5,000 things I like. Check out my list. It’s public. On it you’ll see which startups I like. But also that I like Round Table Pizza. Think about that one for a moment.

In the future my cell phone will know I ordered a pizza. It will know when I get in my car. It will know who is in the car with me. And it will give me contextual data that makes my life better. For instance, on my to-do list I might have put “pick up a hammer at the hardware store.” It will know that Round Table Pizza is near the hardware store. It will know I have an extra 15 minutes. It can use Waze to route me to the hardware store first, tell me to pick up my hammer, and then send me on to Round Table to pick up that pizza. All while measuring how many steps I took (Nike Fuel points!) and telling me who has crossed my path. Oh, Joseph Smarr, who works at Google, is also at the Round Table? Cool! (He lives in Half Moon Bay too, so this could happen at any time.)
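
The scheduling logic in that scenario is simpler than it sounds once the context is known: does the detour plus the errand still fit in my slack time? A sketch with made-up numbers:

```python
def should_add_errand(slack_minutes: int, detour_minutes: int,
                      errand_minutes: int, buffer_minutes: int = 5) -> bool:
    """Insert an errand only if detour + errand still leaves a safety buffer."""
    return detour_minutes + errand_minutes + buffer_minutes <= slack_minutes

# 15 spare minutes; the hardware store is a 4-minute detour, the hammer takes 5.
if should_add_errand(slack_minutes=15, detour_minutes=4, errand_minutes=5):
    print("Route via the hardware store first, then Round Table Pizza.")
```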

But when I get back, can my phone understand that I’m now in the dining room, eating? Or in the living room, ready to watch a sports show? (It already knows what sports I like. Think about the next Olympics, where it tells me it has queued up the track and field finals for me to watch automatically.) Only if you don’t have an iPhone: because Apple hasn’t given developers access to the Wi-Fi and Bluetooth radios, apps can’t let you map out your house accurately.

Which gets me to what Facebook and Amazon could do to totally disrupt the smart phone market (both are rumored to be working on hardware). See, you shouldn’t work on hardware if you can only match what Apple has already done. You should work on it if you can totally blow away what Apple has done.

I bet that Amazon and Facebook are building a new kind of contextual device. One that already knows you. Facebook already knows what I read, watch, listen to, and much more thanks to its Open Graph API system. Amazon already knows what I read, watch, and buy, thanks to its commerce system.

Add these two companies to Qualcomm’s new contextual platform and you have a new world.

By the way, Qualcomm is a $95 billion market-cap company, it spends $3 billion a year on R&D, and its chipsets are probably inside the phone you are currently holding. So I take what it is doing very seriously.

So seriously that next week Forbes author Shel Israel and I will announce a new project all around contextual computing. See ya on Tuesday.

A new age just arrived. Mark yesterday in your calendar and see you on Tuesday.

By the way, for those at Rackspace: this will eventually change everything about our business, too. We’re well positioned, thanks to our move to supporting a totally open cloud, which will pay off in a big way next year as developers need to build new infrastructure to deal with this contextual age. The cloud is about to turn contextual in a very big way, and that’s why we need to keep up with what Amazon, Google, and the other players are doing here, and why we should start building support systems for this Qualcomm SDK now. It is that big a deal.

Watch this video to see a taste of what’s coming in the new contextual age.