I finally get “semantic” Web

Yesterday I got a look at Radar Networks’ stealth stuff. It won’t be on the market until later this year, but for the first time I finally understood what the semantic Web is all about and what benefits it could bring us.

I don’t think I could explain it in ASCII text. I think that’s the problem. I read Tim Berners-Lee’s paper. I didn’t get it. I read tons of other stuff about it. I didn’t get it. It took someone building a new system and demonstrating it to me for me to get it.

Basically, Web pages will no longer be just pages, or posts. They’ll all be split up into little objects, stored in a database (a massive, scalable one at that), and then your words can be displayed in different ways. Imagine a really awesome search engine that could bring back much, much more granular stuff than Google can today. Or, heck, imagine you could view my blog by posts with the most inbound links.
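
If you want a crude picture of what “little objects in a database” might mean, here is a toy sketch of my own in Python, emphatically not what Radar is actually building: it stores posts and links as tiny statements, then asks a granular question like “which posts have the most inbound links?”

# Toy illustration (not Radar's system): posts and links as little statements
# in a database instead of opaque pages.
statements = [
    ("post:1", "title", "I finally get semantic Web"),
    ("post:2", "title", "The industry has gotten boring"),
    ("post:7", "links-to", "post:1"),
    ("post:9", "links-to", "post:1"),
    ("post:3", "links-to", "post:2"),
]

# "View my blog by posts with most inbound links"
inbound = {}
for subject, predicate, obj in statements:
    if predicate == "links-to":
        inbound[obj] = inbound.get(obj, 0) + 1

for post, count in sorted(inbound.items(), key=lambda kv: kv[1], reverse=True):
    print(post, count)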

And I’m not even doing it justice (and I’ve been asked not to reveal what Radar really is doing until later this year).

It’s funny, yesterday I was thinking to myself “the industry has gotten boring.” Then came along Radar Networks, which definitely is doing something not boring.

79 thoughts on “I finally get “semantic” Web”

  1. Wow, the way you describe it gives me this mental picture. It’s as if you can see all the different conversations that have involved that particular post, all at once.

    Perhaps this is a better description: I’m not into comic books, but it sounds like a layout with all the “word bubbles” that display all the different conversations in the order they took place.

    Am I right? Or am I totally off?

  2. I’m quite skeptical of the Semantic Web because it appears to hinge on the availability of large quantities of high-quality metadata.

    Well, people couldn’t even get HTML right. They can’t get RSS right (Mark Pilgrim had to make his feed parser “ultraliberal” doing things that make baby Jesus cry for a reason). How are they going to get much more complicated data formats right?

    And if all this data is part of some kind of propositional logic-based forward-chaining inference framework, isn’t the entire chain only as strong as the weakest link, and all it takes to muck everything up is one guy who didn’t produce proper data?

    It seems doomed to fail.

    Correct me if I’m wrong because I’d love for this to work.

  3. I agree with some guy.

    While the concept sounds really good, I just think that most (90%+) of people won’t be able to grasp it.

    If the process is broken up into steps, I think it’s more feasible. For example, if everyone started off by adding REL tags (relationship) to their content to identify themselves, a large social grid would be built.

    The problem with the current hCard movement is that it does nothing for verification. I can mark something as [choose name] and throw in a few relationship URLs, and that’s it; there’s no verification. If someone searches [insert name again], they come across my spoofed content. Instead, I think we need to begin by associating communication with ID tags (i.e., e-mail to ID).
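
    To make the REL-tag idea concrete, here is a rough sketch in Python (a toy crawler fragment, not any real service) that collects rel="me" identity links from a page:

    from html.parser import HTMLParser

    class RelMeParser(HTMLParser):
        """Collects href values of <a> tags whose rel attribute contains 'me'."""
        def __init__(self):
            super().__init__()
            self.identities = []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            rel = (attrs.get("rel") or "").split()
            if "me" in rel and "href" in attrs:
                self.identities.append(attrs["href"])

    parser = RelMeParser()
    parser.feed('<a rel="me" href="https://example.com/robert">my other site</a>')
    print(parser.identities)   # ['https://example.com/robert']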

  4. Congratulations, Scoble! There is a grokking point, where it all makes sense. Like with RSS, it’s difficult to explain why it’s cool until you see the reality for yourself.

    What really excites me is GRDDL, which basically allows one to provide a bridge between straight HTML and the Semantic Web.

  5. As the focus of Web resources slowly shifts from the file-and-document-repository paradigm to that of granular, globally unique resources, I imagine the knowledge market will sort out the weaker links.

    It will become (is) a critical business requirement to make sure one’s metadata is up to par, and (more) software will be built to help with the process.

    Moreover, I foresee Web applications in the not-too-distant future evolving much as 3D modeling applications did in the last few years – from being crude sculpting tools to putting the operator in the director’s seat. We won’t be schlepping bits of markup text anymore; we will be arranging information as we see fit.

  6. This sounds great, if a little confusing just now. However, I am sure that the concept of Google search sounded a little confusing when it was first thought of.

  7. Without some automatic way of turning existing pages into the “semantic web,” human laziness and business tight-fistedness are likely to delay the deployment of “semantic web” technologies.

    Not that there isn’t a lot of promise to “semantic web” technologies. These technologies are likely to join a whole list of technologies that help manage information, but I remain sceptical of magic bullet (killer app) status. I distinctly remember when the same was said of XML.

    Much of the magic of the semantic web can be achieved by other means. There is certainly something to be said for making it easier to find common information (hell, I would love it if all blogs were marked up in a common format to make them easy to process), but there is a lot to be said for statistical analysis as well.

  8. 1. If you want to export your website’s services, there is XML-RPC, the remote procedure call Bill Gates referred to when he talked about the Web becoming an API (a quick sketch at the end of this comment).

    2. This won’t be accepted by the general public as a standard.

    3. Client side scripting is used sparingly by large sites because it’s more or less annoying. The standard browser isn’t built to navigate AJAX, and if a new browser comes out that will let you navigate DIVs and ajax stuff it will be a total pain in the butt to use.

    4. This is grasping at straws.

    5. This is more hype du jour from Silicon Valley.

    Why more interfaces to the same old data?
    It’s not easier, it’s just annoying.

    They should find some better way to get venture cap.
    Like say, oh, working on something meaningful.
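
    To illustrate point 1: calling a site’s services as if they were local procedures looks roughly like this in Python (the endpoint, blog ID and credentials here are made up):

    import xmlrpc.client

    # Hypothetical endpoint; many blog engines of this era expose something similar.
    proxy = xmlrpc.client.ServerProxy("https://example.com/xmlrpc")

    # Call a remote method as if it were local: the "Web becoming an API" idea.
    recent = proxy.metaWeblog.getRecentPosts("blog-id", "user", "password", 5)
    for post in recent:
        print(post["title"])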

  9. What we’re building here at Radar Networks is a highly practical application of the semantic web for regular end-users (not just techies) on the Web. We’re still keeping it under wraps until we’re ready to launch, but it’s definitely not hype. It brings together much of the promise of the semantic web in a focused, real-world tool that we think will be very exciting and useful to people. So far we’re in internal alpha, so we are showing it selectively but keeping it in stealth until we are actually able to launch it. There’s a lot more coming, and I really look forward to releasing it later this year. We’re also looking for people to be part of our private beta tests this summer, so please sign up for our email list (on our site — http://www.radarnetworks.com) and we’ll let you know when we’re ready to let more testers in.

  10. Actually, I find this less hype-ish than most services. I’m using vast amounts of metadata with some projects aimed at lawyers, even if I’m not using them yet… Even for us non-hackers, there’s a lot of benefit in a database/XML-powered net.

  11. The semantic web is the ultimate cop-out, the very antithesis of everything computing is working towards: an admission that we’re too dumb to write smart software, so we have to spend inordinate time marking up our text so that our stupid software can understand it.

    I’m so disappointed that TBL supports this nonsense; back in the day his markup was simple, and even non-techies could write it. He now seems determined to take markup out of the reach of the common man.

    Even Google is in on the act. I would have thought Google were all about smart software, but the losing rel=”nofollow” stuff showed even they aren’t immune to stupid hacks.

  12. Robert:

    Thanks for the notice. I am glad you got a chance to see what we’ve been doing. The “semantic web will fail” meme has been around a long time. As you saw, Radar is not trying to semanticize the entire web. It’s a semantic app. It just so happens that, like any good app, it tends to encourage ‘good behavior’ from the rest of the world that wants to interact with it, but the app stands on its own. That’s the main point the Semweb naysayers don’t get. Semantic web technologies work and are great for building useful apps – the world doesn’t need to speak the same language for us to be able to work together. But speaking the same language definitely reduces translation friction.

    I look forward to your being able to be a bit more revelatory in the next few months.

  13. This is how I look at the semantic web: it’s all Legos.

    With the current web your individual blocks are things like posts, comments, bookmarks, etc. You have tools like RSS, XML and JSON to put the blocks together (e.g., Yahoo Pipes).

    These blocks are big and they only come in a few basic colours. They are the Legos of the 1970s.

    With the semantic web it is like modern Legos. The blocks are smaller, and come in many more different sizes and shapes. It makes it easier to build more detailed and complex structures.

  14. cdavies: Good luck with that. I’m looking forward to the robot where I can poo in its mouth, it goes away and processes it, and then it hands me back little nuggets of gold. Drop me an email when you’ve built it, okay? Until that point, I think I’ll stick with the Semantic Web idea.

    And, believe me, there are lots of fun engineering challenges one can have building the SemWeb – making massively scalable triple stores (something that Nova’s company is apparently working on), efficient scutters, finding the optimum distribution of granularity vs. centrality and much more.
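
    For what it’s worth, the core idea of a triple store is small enough to sketch in a few lines of Python (a toy, in-memory version, nothing like the scalable ones mentioned above):

    class TripleStore:
        """Toy in-memory triple store: no indexes, no persistence, no scale."""
        def __init__(self):
            self.triples = set()

        def add(self, subject, predicate, obj):
            self.triples.add((subject, predicate, obj))

        def match(self, s=None, p=None, o=None):
            """Return triples matching the pattern; None acts as a wildcard."""
            return [t for t in self.triples
                    if (s is None or t[0] == s)
                    and (p is None or t[1] == p)
                    and (o is None or t[2] == o)]

    store = TripleStore()
    store.add("post:42", "dc:title", "I finally get semantic Web")
    store.add("post:42", "links-to", "post:7")
    print(store.match(s="post:42"))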

    If Google can achieve this much with a set of source documents that are not particularly well marked up semantically, then imagine how cool the next evolution of Google could be…

  15. Some interesting comments.

    A lot of information is already broken up into little pieces. The goal would be to easily get data in/out from services like eBay, Google, or even MySpace. Once relationships are defined (either algorithmically or with a little user input), it should be possible to conduct full social searches to return stuff like Mom’s schedule, Dad’s doctor’s appointment, John Doe’s resume, for-sale listings from my 6th degree, or what my family thought of a certain restaurant.

    The best part? All of that information resides on various services; the key is putting it all together. Schedules might reside on Google Calendar, Zoho, or some PDA application. Listings might reside on eBay or Craigslist. Resumes might reside on MySpace, Facebook, or Monster.

    Most services hate aggregation of data since it takes eyeballs away from their service… On the other hand, those who are generating the content would NEED those services in order to continue to generate content.

    It sounds great, but who knows how it will end up.

  16. Pingback: Nodalities
  17. Another problem with the Semantic Web is that it asks content creators to specify facts in a structured way, and then operates on those facts. There’s a problem with this: trust. If the content creators create the facts, how can they be verified? Some centralised system? Who’s going to agree to that?

    This is one of the reasons that e.g. Google doesn’t put much stock in the semantic web.

  18. Barry Kelly: the same way they are in real life. If someone puts their phone number on their website, and you phone it and get someone else, what do you do? If you buy a city guide that tells you that there is a certain museum on one street and it’s on another, what do you do? If you go to a shopping site and there isn’t a phone number, what do you do?

    Well, whatever your answer is – it’s basically the same with the SemWeb. There’s a whole group of cool and interesting technology – SSL, GPG, OpenID, FOAF – that can be used in combination with good old-fashioned human judgment and human rule-making. If someone advertises something on their website and it doesn’t work, they may be breaking – say – a law or contract. If they advertise something in a Semantic document (say, a series of RDF triples) and it isn’t true, then you can use the same sort of legal instruments.

    The W3C/TimBL vision of the Semantic Web is not holy. Trust is a question higher up on the layer cake. Adding small amounts of semantics at this stage does a lot of good – which is why things like microformats, eRDF and GRDDL are important.

  19. Robert,

    I second the comments from Danny and would like to add the following:

    1. Data is Data, and it is the foundation of everything on the Web (as it is of everything in our daily existence)
    2. Data has always wanted to be free
    3. The Semantic Data Web is about Open Access to Data on the Web
    4. There are no secrets in the Web Realm (any variant from 1.0 – X.X.X)
    5. Decomposing Web Pages into granular items of referenceable data is what the Semantic Web has always been about
    6. Hyperlinks no longer point solely to (X)HTML documents; they can also point to granular bits of Data exposed by Data Sources (that include traditional Web Pages)
    7. When a hyperlink points to a Document as a blurb, it is a URL. When it points to a Document or any other Data Source with granular Data Access in mind, it is a URI (see the sketch after this list)
    8. Everything you, I, or anyone else has posted on the Web will ultimately be accessible via a single URI (because URIs can point to other URIs, meaning Data can point to other pieces of Data)
    9. Computers have worked this way since the beginning of time; we are simply extending the concept of the Computer. To quote Sun: The Network is the Computer. The Web is a Network that connects People to Data and Data to People
    10. The Web is a subsystem of the Network Operating System called the Internet 🙂
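
    A rough sketch of point 7 in practice, in Python: asking a (made-up) data URI for RDF rather than an HTML page.

    from urllib.request import Request, urlopen
    from urllib.error import URLError

    # Hypothetical URI naming a granular piece of data, not a whole page.
    # Asking for RDF instead of HTML is one way a data URI differs from a page URL.
    req = Request("http://example.org/data/post/42",
                  headers={"Accept": "application/rdf+xml"})
    try:
        with urlopen(req) as resp:
            print(resp.headers.get("Content-Type"))
    except URLError as err:
        print("placeholder URI, nothing to fetch:", err)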

  20. My introduction to GRDDL occurred during a TBL keynote at the May 2005 Bio-IT World event in Boston.

    This turned out to be a serendipitous introduction, as I had been wrestling with various metadata challenges.

    GRDDL is useful because it allows relationships to be extracted from XML-based representations, and high-quality metadata to be developed along the way. These relationships, cast in RDF, are semantically richer and more expressive than their XML counterparts.
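
    As a rough illustration (my own toy Python code, assuming the lxml library rather than any official GRDDL tooling), the extraction step amounts to applying an XSLT transform to an XHTML fragment to produce RDF:

    from lxml import etree

    # Toy GRDDL-style extraction: an XSLT transform pulls one RDF statement out
    # of a scrap of XHTML. Real GRDDL lets the page point to its own transform;
    # here both are inlined for brevity.
    xhtml = etree.XML(b"""
    <html xmlns="http://www.w3.org/1999/xhtml">
      <body><span class="author">Robert Scoble</span></body>
    </html>""")

    xslt = etree.XSLT(etree.XML(b"""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:xh="http://www.w3.org/1999/xhtml"
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
      <xsl:template match="/">
        <rdf:RDF>
          <rdf:Description rdf:about="http://example.com/post">
            <dc:creator><xsl:value-of select="//xh:span[@class='author']"/></dc:creator>
          </rdf:Description>
        </rdf:RDF>
      </xsl:template>
    </xsl:stylesheet>"""))

    print(etree.tostring(xslt(xhtml), pretty_print=True).decode())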

    In a further phase of semantic refinement, the RDF-based representation can be transformed into one based on the Web Ontology Language (OWL). From this vantage point of an informal ontology, we arrive at just what the post describes: “… Web pages will no longer be just pages, or posts. They’ll all be split up into little objects, stored in a database (a massive, scalable one at that) and then your words can be displayed in different ways.”

    Because this remains a highly geeky domain, it’s wonderful to see the dialog that this blog post has generated.

  21. Some very, very good points of view here, Robert. Not trying to do the impossible and change anyone’s mind, but I am pretty sure the Internet is not going to remain in its current configuration forever.

    If we think about this with a kind of logic that applies a historical innovation index, HTML and other languages might be compared to any other innovation or language.

    Take Linear A compared to Linear B. No one has yet gained a full understanding of Linear A, yet it served the most advanced civilization of the Bronze Age for 2000 or more years.

    Linear B was translated when the Rosetta stone was deciphered, and it bears some similarity to Linear A. For those that don’t know, Linear B was the early language of the Greeks (those guys our culture and other stuff came from).

    In a similar way, HTML, XML and the others have served us well, but is it time to look deeper? Hell, mathematics cannot even give anyone an exact measurement of a circle or any derivative of pi. That does not mean it is useless, but it does suggest that there is something (quark-tile theory of inter-dimensional mathematics, etc.) that could reveal greater discoveries.

    So, given my introduction to AI, fuzzy logic and semantic search offered by Riza Berkan of hakia search, I am leaning toward semantics. Just another opinion of course, but the law of probability and history assures us of this. We no longer use punch cards to tabulate bank data on computers the size of whole floors (1975), Windows 95 is no longer our preferred OS, we watch LCD high-def TVs rather than black and white (1965), and Google has been around (in computer years) as long as the transistor radio. 🙂

  22. Streambase, Numenta, Cycorp and Freebase all seem to be doing interesting next-gen projects. When the underlying emerging technologies of these companies get mashed up, the fun really begins. Think machines learning from video feeds.

  23. Robert, there’s no need for the Semantic Web to get everything you described in this post. I’ll note that there’s probably more than you can discuss here, but based on what you have listed you can do that today with a technology called Readware (readware.com). Not boasting about it, just saying that what you’ve described is already possible.

  24. The Web is filled with wonderful information, but finding that information isn’t always easy. And when you find something good, it often comes with baggage. If you want to know how to make an iPod costume for your dog you might enjoy my website, but you might also be annoyed by my sense of humor or maybe you hate liberals. You should be able to learn how to dress up your dog and turn him into an Apple marketing machine without also having to read my godless anti-Technorati tirades, shouldn’t you? The answer is a qualified maybe.

    Read the rest at:
    http://www.elainevigneault.com/2007/04/08/semantic-web-and-the-future-of-the-internet.html

  25. Robert,

    Glad you’re feeling better. BTW, I don’t think that Radar will be alone in opening up the Web 3.0 space. Recently we’ve seen a MUVE game that is entirely composed with semantic agents in a multi-user 3D immersive environment based on Croquet.

    Also, there are a number of plays that are pushing well beyond the threshold of keyword searching. Some now fully combine semantics and linguistics.

    Fourth quarter 2007 should be interesting.

    Mills Davis.

  26. Well, there is some interesting technology out there to make this happen. (See otnsemanticweb.oracle.com for an example.) The key is not having to rely on human operators for metadata tagging.

  27. Pingback: Web 2.0 is Dead

Comments are closed.