Most of the time the feeds at Planet RDF reference isolated items of general interest. Other times, though, the thoughts featured strike sparks against each other, leading to a chain reaction whereby everyone jumps in and Things Happen.
Over the last few days, people have been referencing two stories, both of which I find very interesting. The first is Kendall Clark’s SPARQL: Web 2.0 Meet the Semantic Web; the second is Ian Davis’s Internet Alchemy post, Crises.
Kendall argues that what’s missing in Web 2.0 is a common query language, and it just so happens that SPARQL is one, backed by a common data model (RDF) and a common syntax (RDF/XML). He suggests that the Web 2.0 folks provide an RDF wrapper for their data, so that both groups can benefit from the same query language, which would make things a whole lot simpler:
So what, really, can SPARQL do for Web 2.0? Imagine having one query language, and one client, which lets you arbitrarily slice the data of Flickr, delicious, Google, and yr three other favorite Web 2.0 sites, all FOAF files, all of the RSS 1.0 feeds (and, eventually, I suspect, all Atom 1.0 feeds), plus MusicBrainz, etc.
And this leads us to Ian Davis and a cognitive crisis he underwent at DC2005 (DC as in Dublin Core), relating to a piss-ant, picayune problem with dc:creator:
Danbri referred us to work he had done after the last DC meeting in 2004 on a SPARQL query to convert between the two forms. Discussion then moved onto special case processing for particular properties, along the lines of “if you see a dc:creator property with a literal value then you should insert a blank node and hang the literal off of that”. Note that I’m paraphrasing, no-one actually said this but it was the intent.
That’s when my crisis struck. I was sitting at the world’s foremost metadata conference in a room full of people who cared deeply about the quality of metadata and we were discussing scraping data from descriptions! Scraping metadata from Dublin Core! I had to go check the dictionary entry for oxymoron just in case that sentence was there! If professional cataloguers are having these kinds of problems with RDF then we are f…
Ian then recommended paring down RDF into an implementation subset, which focuses primarily on RDF, as it is used to define relationships. This means jettisoning some of the more cumbersome elements of the model — those that tend to send traditional XMLers screaming from the room:
What if we jilted the ugly sisters of rdf:Bag, rdf:Seq and rdf:Alt and took reification out back and shot it? How many tears would be shed?
What if we junked classes, domains and ranges? Would anyone notice? The key concept in RDF is the relationship, the property.
The end result would be an RDF-Lite: a proper subset of RDF that would be upwardly compatible with the model as a whole, though the converse would not be true. If this subset were formalized, then libraries could be created just for it that would be significantly less complex, and correspondingly leaner, than libraries needed for full-featured RDF.
This, then, leads back to Kendall’s interest in seeing whether Web 2.0 couldn’t be wrapped, morphed, or bridged onto RDF, letting us assume one specific data model and, more importantly, one specific query language for all metadata easily and openly available on the web–not just the RDF bits. If a simple subset of RDF could be derived, it would be trivial to map any use of metadata into RDF. More importantly, since the capabilities of the technology are never the issue, those generating the disparate bits of XML or other metadata might actually be willing to go this extra step.
True, an RDF-Lite would not have the same inferential power as the fully aspected RDF model, but frankly, most of our general web-based uses of RDF aren’t using this power anyway. And if we can make RDF tastier to the general web developer, we’re that much closer to an RDFalized web. To Kendall, an RDFalized Web 2.0 could be a powerful thing:
How powerful? Well, imagine being able to ask Flickr whether there is a picture that matches some arbitrary set of constraints (say: size, title, date, and tag); if so, then asking delicious whether it has any URLs with the same tag and some other tag yr interested in; finally, turning the results of those two distributed queries (against totally uncoordinated datasets) into an RSS 1.0 feed. And let’s say you could do that with two if-statements in Python and three SPARQL queries.
Pretty damn cool.
Well, not necessarily. What Kendall describes is something already relatively easy to access through web services. And, as we’re finding, how tags are used with Flickr differs rather dramatically from how tags are used within delicious, and so on. I do agree that being able to do something like all of this with a couple of statements and SPARQL queries would be nifty; but the technology is still going to be limited by the need for a common understanding of the data being manipulated. Even with something as simple as tags, we have different understandings of what the term means across different applications.
I don’t necessarily agree across the board with Ian, either. For instance, you can take my blank nodes (bnodes to use popular terminology) only if you pry them from my cold dead APIs, but his general points are good. My own recent work has been focusing more on using RDF for its ability to map the relationships, and less on its participation in grander semantic schemes (though the data is available for any person/bot interested in such).
More, I’ve been exploring the capabilities of RDF as a lightweight, portable, self-contained database–one database per unit, with the unit being a weblog page. I’ve been steadily pulling bits of metadata out of MySQL and embedding them into an RDF document, which then drives some of this site’s functionality.
There is a line to walk between taking advantage of MySQL’s caching and managing my own caching with RDF, but I’m finding that a hybrid solution is not only quite workable: it is a very effective solution for data that is meant to be open, unrestricted, and consumed by many agents.
The best aspect of all is that, because of two specific strengths of RDF–the ease of capturing a relationship, and the use of a URI to map relationships correctly–it’s trivial for me to just ‘throw’ more metadata into the pot without having to modify existing tables in my database, or re-arrange a hierarchy and risk namespace collisions in a straight XML document. I’m also not constrained to primitive key-value pairs, a limitation that makes it difficult to make multiple statements about the same subject and property.
It is all becoming very, very fun, and I am busy ripping the guts out of my current weblog tool implementation in order to incorporate the hybrid data store.
All of this effort, though, presupposes one thing: that I have a small subset of classes to manage the RDF bits. To meet this, I experimented with RAP (a PHP RDF library) until I had a trimmed, core set of functionality that, by happenstance, meets Ian’s criteria for RDF-Lite. RAP doesn’t have a SPARQL implementation yet, but I know one is on the way, and when it’s released, I’ll use it to replace the existing RDQL implementation.