Categories
RDF Semantics Web

Semantic web lite: same great taste, less reified

Most of the time the feeds at Planet RDF reference isolated items of general interest. Other times, though, the thoughts featured strike sparks against each other, leading to a chain reaction whereby everyone jumps in and Things Happen.

Starting a few days ago, people have been referencing two stories, both of which I find very interesting. The first is Kendall Clark’s SPARQL: Web 2.0 Meet the Semantic Web; the second is Ian Davis’ Internet Alchemy Crises.

Kendall brings up that what’s missing in Web 2.0 is a common query language, and it just so happens that SPARQL is a common query language, backed by a common data model (RDF) and syntax (RDF/XML). He suggests that the Web 2.0 folks provide an RDF wrapper for their data, so that both groups can benefit from the same query language, which will make things a whole lot simpler:

So what, really, can SPARQL do for Web 2.0? Imagine having one query language, and one client, which lets you arbitrarily slice the data of Flickr, delicious, Google, and yr three other favorite Web 2.0 sites, all FOAF files, all of the RSS 1.0 feeds (and, eventually, I suspect, all Atom 1.0 feeds), plus MusicBrainz, etc.
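To make that a bit more concrete, here’s a minimal sketch of what “one query language, one client” looks like in practice, assuming Python and the rdflib library (my own code is PHP and RAP, so treat this purely as illustration; the FOAF URL is a placeholder):

```python
# Minimal sketch: one SPARQL query over an RDF source, assuming Python
# and the rdflib library. The FOAF URL below is a placeholder.
from rdflib import Graph

g = Graph()
g.parse("http://example.com/people/foaf.rdf", format="xml")  # hypothetical FOAF file

# Ask one question, in one query language, regardless of who published the data.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?weblog
WHERE {
    ?person foaf:name ?name .
    OPTIONAL { ?person foaf:weblog ?weblog . }
}
"""

for name, weblog in g.query(query):
    print(name, weblog)
```

Point the same query at a different RDF source and nothing else has to change; that’s the whole pitch.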

And this leads us to Ian Davis and a cognitive crisis he underwent at DC2005 (DC as in Dublin Core), as it relates to a pissy-ant, pick-a-une problem with dc:creator:

Danbri referred us to work he had done after the last DC meeting in 2004 on a SPARQL query to convert between the two forms. Discussion then moved onto special case processing for particular properties, along the lines of “if you see a dc:creator property with a literal value then you should insert a blank node and hang the literal off of that”. Note that I’m paraphrasing, no-one actually said this but it was the intent.

That’s when my crisis struck. I was sitting at the world’s foremost metadata conference in a room full of people who cared deeply about the quality of metadata and we were discussing scraping data from descriptions! Scraping metadata from Dublin Core! I had to go check the dictionary entry for oxymoron just in case that sentence was there! If professional cataloguers are having these kinds of problems with RDF then we are f…
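As an aside, the special-case fix being described is mechanical enough to write down. A rough sketch of the “insert a blank node and hang the literal off of it” conversion, assuming Python and rdflib rather than anything actually used at the meeting, with foaf:name as my own guess for where to hang the literal:

```python
# Sketch of the "insert a blank node and hang the literal off of it" fix,
# assuming Python and rdflib; foaf:name is my choice, not anything the
# DC group settled on. The file names are placeholders.
from rdflib import Graph, BNode, Literal, Namespace

DC = Namespace("http://purl.org/dc/elements/1.1/")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse("metadata.rdf", format="xml")  # placeholder file

for subject, _, value in list(g.triples((None, DC.creator, None))):
    if isinstance(value, Literal):
        agent = BNode()
        g.remove((subject, DC.creator, value))   # drop the bare literal
        g.add((subject, DC.creator, agent))      # point at a blank node instead
        g.add((agent, FOAF.name, value))         # hang the literal off of it

g.serialize(destination="metadata-normalized.rdf", format="xml")
```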

Ian then recommended paring RDF down to an implementation subset that focuses primarily on RDF as it is used to define relationships. This means jettisoning some of the more cumbersome elements of the model — those that tend to send traditional XMLers screaming from the room:

What if we jilted the ugly sisters of rdf:Bag, rdf:Seq and rdf:Alt and took reification out back and shot it? How many tears would be shed?

What if we junked classes, domains and ranges? Would anyone notice? The key concept in RDF is the relationship, the property.

The end result would be an RDF-Lite: a proper subset of RDF that can be upwardly compatible with the model as a whole, though the converse would not be true. If this subset were formalized, then libraries could be created just for it that would be significantly less complex, and correspondingly leaner, than the libraries needed for full-featured RDF.

This, then, leads back to Kendall’s interest in seeing if Web 2.0 couldn’t be wrapped, morphed, or bridged onto RDF, allowing us to assume one specific data model and, more importantly, one specific query language for use with all metadata easily and openly available on the web–not just the RDF bits. If a simple subset of RDF could be derived, it could be trivial to map any use of metadata into RDF. More importantly, since the capability of the technology is never the issue, those generating the disparate bits of XML or other metadata might actually be willing to go this extra step.

True, an RDF-Lite would not have the same inferential power as the fully aspected RDF model, but frankly, most of our general web-based uses of RDF aren’t using this power anyway. And if we can make RDF tastier to the general web developer, we’re that much closer to an RDFalized web. To Kendall, an RDFalized Web 2.0 could be a powerful thing:

How powerful? Well, imagine being able to ask Flickr whether there is a picture that matches some arbitrary set of constraints (say: size, title, date, and tag); if so, then asking delicious whether it has any URLs with the same tag and some other tag yr interested in; finally, turning the results of those two distributed queries (against totally uncoordinated datasets) into an RSS 1.0 feed. And let’s say you could do that with two if-statements in Python and three SPARQL queries.

Pretty damn cool.

Well, not necessarily. What Kendall describes is something already relatively easy to access through Web services. And, as we’re finding, how tags are used with Flickr differs rather dramatically from how tags are used within delicious, and so on. I do agree that being able to do something like all of this with a couple of statements and SPARQL queries would be nifty; but the technology is still going to be limited by the need for a common understanding of the data being manipulated. Even with something as simple as tags, we have different understandings of what the term means across different applications.

I don’t necessarily agree across the board with Ian, either. For instance, you can take my blank nodes (bnodes to use popular terminology) only if you pry them from my cold dead APIs, but his general points are good. My own recent work has been focusing more on using RDF for its ability to map the relationships, and less on its participation in grander semantic schemes (though the data is available for any person/bot interested in such).

More, I’ve been exploring the capabilities of using RDF as a lightweight, portable, self-contained database–one to a unit, with the unit being a weblog page. I’ve been steadily pulling bits of metadata out of MySQL and embedding them into an RDF document, which then drives some of this site’s functionality.

There is a line between taking advantage of MySQL’s caching and managing my own with RDF, but I’m finding that a hybrid solution is not only quite workable: it is a very effective solution for data that is meant to be open, unrestricted, and consumed by many agents.

Best of all, because of two specific characteristics of RDF–ease of capturing a relationship, and the use of a URI to map the relationships correctly–it’s trivial for me to just ‘throw’ more metadata into the pot, without having to worry about modifying existing tables in my database, or re-arranging a hierarchy and running into possible namespace collisions in a straight XML document. I’m also not constrained to primitive keyword-value pairs, a limitation that makes it difficult for me to make multiple statements about the same noun-object pairs.
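To give a flavor of what “throwing more metadata into the pot” looks like, here’s a hedged sketch; my actual code uses PHP and RAP, so the Python and rdflib below, the made-up vocabulary, and the post URL are all stand-ins:

```python
# Sketch of a per-page, self-contained RDF store: each weblog page gets
# its own document, and new statements are just appended. Python/rdflib
# here for illustration only; the vocabulary and URLs are made up.
from rdflib import Graph, Literal, Namespace, URIRef

DC = Namespace("http://purl.org/dc/elements/1.1/")
EX = Namespace("http://example.com/terms/")   # hypothetical vocabulary

page = URIRef("http://example.com/weblog/semantic-web-lite")  # placeholder post URL

g = Graph()

# Existing statements about the page...
g.add((page, DC.title, Literal("Semantic web lite")))
g.add((page, DC.creator, Literal("Shelley")))

# ...and later, new metadata just gets thrown into the pot: no table
# alterations, no hierarchy to rearrange, no namespace collisions.
g.add((page, EX.photoCount, Literal(3)))
g.add((page, EX.mood, Literal("optimistic")))

g.serialize(destination="semantic-web-lite.rdf", format="xml")
```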

It is all becoming very, very fun, and I am busy ripping the guts out of my current weblog tool implementation in order to incorporate the hybrid data store.

All of this effort, though, presupposes one thing: that I have a small subset of classes to manage the RDF bits. To meet this, I experimented with RAP (a PHP RDF library) until I had a trimmed, core set of functionality that, by happenstance, would meet Ian’s criteria for RDF-Lite. There isn’t a SPARQL implementation for RAP yet, but I know one is on the way, and when it’s released, I will use it to replace my use of the existing RDQL implementation.

Categories
RDF

What’s in a name

Danny Ayers has the start of a great RDF 101 titled RDF, Bottom Up, with promises of more to come.

I’ve been working on something similarly named, but quite different in tone, for some time now. Just so’s you know, when I ever finally get around to putting this online, I didn’t steal the name from Danny. But then, by the time I finally get around to publishing it, it will probably be the next decade and you’ll have forgotten.

Categories
Semantics

Photos, flickr, and back doors

By accident, I discovered that I was in violation of Flickr’s Terms of Use today. According to the TOS, Flickr is not an image hosting service. I’m not sure how it differs from an ‘image hosting’ service, other than that I need to include a link back to the photo’s Flickr page for every photo embedded in a page here. Which explains why Flickr photo pages are starting to dominate search engine results, especially if people use meaningful photo titles.

The link back isn’t a problem within my weblog posts, but the pages generated by my photo metadata application, as well as Tinfoil, my photo album, have not been including a link back to the Flickr page. Unfortunately, this means that I’m going to have to change the code of the data collection element of my photo application. Which also means I’m going to have to re-run this for every page where I’ve included photos.

My fault for not checking the TOS more carefully. However, while I’m in the process of making this change, another one I’m making is to define a set for all photos that have been embedded in a weblog post, and add a tag linking the photo with that post. I also created a program that uses the Flickr API to download a local copy of each image. It will then update my post entries to point to the locally named copy, rather than the one on Flickr. It’s my ‘backdoor’, just in case I decide I want to host my photos locally. No matter how much you like a centralized service–and I like Flickr–you should always have a backdoor.
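The backdoor itself is little more than a loop over the Flickr API. A rough sketch, using Python and the requests library against the documented flickr.photos.getSizes REST method (my version is PHP, and the API key and photo id below are placeholders):

```python
# Rough sketch of the 'backdoor': for each Flickr photo id used in a post,
# look up a full-size source URL via the REST API and save a local copy.
# Python/requests for illustration; the API key and photo id are placeholders.
import requests

API_KEY = "your-flickr-api-key"
REST = "https://api.flickr.com/services/rest/"

def download_photo(photo_id, target):
    params = {
        "method": "flickr.photos.getSizes",
        "api_key": API_KEY,
        "photo_id": photo_id,
        "format": "json",
        "nojsoncallback": 1,
    }
    sizes = requests.get(REST, params=params).json()["sizes"]["size"]
    source = sizes[-1]["source"]           # last (typically largest) size listed
    with open(target, "wb") as out:
        out.write(requests.get(source).content)

download_photo("12345678", "local-copy.jpg")
```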

I’m adding the metadata directly into the image itself, including the new longitude and latitude values, using the geotagging format. This way, this information follows the image no matter where I store it. I also add title, creator, description, keywords, and so on. When I do, Flickr pulls this data out and uses it to create the Flickr title, description, and tags.

(The geotagging format consists of three keywords or tags: “geotagged”, “geo:lon=value”, “geo:lat=value”. This is becoming a standard format, and photos tagged with it are automatically pulled into other, external applications, such as my own use with GoogleMaps.)
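In code, the format is as simple as it sounds. A small sketch of generating and reading the three tags (Python, purely for illustration; the coordinates are just an example):

```python
# The geotagging 'format' is just three machine-readable tags.
# A small sketch of producing and reading them; the coordinates below
# are an arbitrary example, not from any of my photos.
def geo_tags(lat, lon):
    return ["geotagged", "geo:lon=%s" % lon, "geo:lat=%s" % lat]

def parse_geo_tags(tags):
    coords = {}
    for tag in tags:
        if tag.startswith("geo:lat="):
            coords["lat"] = float(tag.split("=", 1)[1])
        elif tag.startswith("geo:lon="):
            coords["lon"] = float(tag.split("=", 1)[1])
    return coords if "lat" in coords and "lon" in coords else None

tags = geo_tags(38.627, -90.199)
print(tags)                  # ['geotagged', 'geo:lon=-90.199', 'geo:lat=38.627']
print(parse_geo_tags(tags))  # {'lon': -90.199, 'lat': 38.627}
```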

My photo application can pull this data out of the original image (whether stored locally, or found using the Flickr web services), utilizing a handy image metadata library for PHP.

I could keep the metadata stored in the image and just output this when the image metadata page is accessed. However, I store all of the data for each image as RDF, associated with the URL of the page itself, added to any other metadata I have for the page (much of this generated automatically using the same functionality used to drive the syndication feed in use for the site). The data is then available for my use, and accessible by any tool that can consume RDF/XML, such as Piggy-Bank.

I used to store this data in the database, but I’m now looking at trying something new. I figure if I have to re-do all the data, I might as well experiment.

Categories
Semantics Web

The business of algorithms

Recovered from the Wayback Machine.

Algorithms are big business. Recently I’ve seen several job listings where the company wants someone who is “…good with algorithms”. Microsoft is competing with Google is competing with Yahoo to hire the best algorithm wranglers (which evidently, according to the article, does not mean women). IBM is releasing its unstructured data architecture (UIMA), including its concept-based search algorithms, into open source by year end. Even within weblogging the debate, and the race, is on to find the best algorithms to mine us, otherwise known as the higher income people without lives.

Suddenly, the hip and cool kids on the block can “do” algorithms.

With all this interest, though, comes a lot of confusion and misunderstanding, starting with, but not limited to, the very concept of an algorithm–a concept which is now taking on such mystical properties that those who can “do” algorithms are being vested with an almost god-like prescience. It is time, and past time, to put the brakes on the hyperbole surrounding algorithms.

Starting with the basics: what is an algorithm.

What is an algorithm

An algorithm is nothing more than the description of the steps necessary in order to reach a goal. The goal may be something as simple as baking a cake, or as complex as mapping the gene sequence of humans; however, the concept of algorithm doesn’t change with each goal–only the steps.

You have three apples, and someone asks you for one; how do you know how many apples you have left? Seriously, this is not a joke–simple addition, multiplication, division, and subtraction are algorithms, and each is represented by specific equations that introduce specialized operators. To better see this, sometimes you need to remember what it was like to learn math, and then program this knowledge into a computer.

For instance, add two numbers: 14 and 17. No, not by memorization — by working out the steps. First you line up the rightmost digits, or the ones column, and perform addition on the numbers in this column: 4 and 7. The act of addition is taking one number, breaking it down into units, and then adding these units to another number: 4 + 1 is 5; 5 + 1 is 6; 6 + 1 is 7, and so on. You know this; you remember your first exposure to a calculating device–your fingers. So add 4 and 7, just like you did when you were younger.

Start with 4, turn down your left thumb, that’s 5. Turn down your left index finger, that’s 6. Turn down your left middle finger, that’s 7. Turn down your left ring finger, that’s 8. Turn down your left pinky, and that’s 9. Then switch to the other hand, and continue. Turn down your right thumb, that’s 10. Finally, turn down your right index finger and that makes 11. Stop at this point, because you’ve turned down 7 digits. Of course, we could have started with 7 and added 4, but chances are when you were younger you started with the number on the left and added the number on the right (though this may change based on culture and language).

So now you have 11. That was an amazing accomplishment. Do it enough times, and you remember the result and you don’t have to turn down fingers when you’re asked to change some figures during, say, a board meeting.

But now you have a problem: you have a value in the ones column, but you also have a value in the tens column. So what you do is ‘carry’ that number over to the tens column, and add it to the other numbers that were already there–leading to addition on three numbers: 1 and 1 and 1. Luckily, the numbers are small or we’d probably have to start removing our shoes.

Combine all these steps into a sequence of actions, and you have a very complicated, multi-step algorithm. Extend these same basic steps, and you can also do subtraction, multiplication, and division. In fact, once you’ve managed the algorithm for addition, you have the basic skills necessary to work with any algorithm. It is only a few short steps from addition to something like Newton’s Method. The only barrier to taking these few short steps is interest and intimidation: interest, because not everyone really wants to learn how to do Newton’s Method; and intimidation because after a while, it’s a lot easier to define new equations and new operators to represent higher-level algorithms, and our first exposure to these sends many of us running for the door.
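Written out as code, the whole procedure is just those finger-counting steps made explicit. A sketch, in Python, of column-by-column addition with carrying:

```python
# The grade-school addition algorithm, written out step by step:
# add the ones column, carry into the tens column, and so on.
def add_digits(a, b, carry_in):
    """Add two single digits the 'finger counting' way: count up from a, one at a time."""
    total = a
    for _ in range(b + carry_in):
        total += 1
    return total % 10, total // 10          # (digit to write down, digit to carry)

def add(x, y):
    xs, ys = str(x)[::-1], str(y)[::-1]     # line the numbers up from the rightmost column
    result, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        a = int(xs[i]) if i < len(xs) else 0
        b = int(ys[i]) if i < len(ys) else 0
        digit, carry = add_digits(a, b, carry)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return int("".join(reversed(result)))

print(add(14, 17))   # 31
```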

Aside from the intimidating equations, that’s all an algorithm really is: a formalization of the steps necessary in order to reach a goal. So when I’m asked in an interview if I can “do” algorithms, in my mind I hear: can you add 17 and 14 without having to take off your shoes? I can then reply without hesitation that yes, I can.

I am now ready to work at Google.

Well, not quite.

Pattern Matching, Hypothesis, and Proofs

Any of us can follow an algorithm if we’re interested and patient enough. But it takes something more to be able to derive the algorithm in the first place.

In his book, “Vision”, David Marr writes about the steps he and his fellow researchers took to discover a computational theory of vision. The first was to create a representational framework, a hierarchical framework of vision, from the simplest edge detection to more complex visual processes. They then searched for existing algorithms that matched the observed and representational behavior:

These ideas suggest that in order to detect intensity changes efficiently, one should search for a filter that has two salient characteristics. First and foremost, it should be a differential operator, taking either a first or second spatial derivative of the image. Second, it should be capable of being tuned to act at any desired scale, so that large filters can be used to detect blurry edges, and small ones to detect sharply focused fine detail in the image.

Marr and his co-researcher Hildreth eventually came up with the Laplacian of Gaussian, also known as the Marr filter, or Marr-Hildreth filter.

Marr and Hildreth were able to derive their filter, their algorithm, because of training in math and neuroscience, as well as new research in the fields of artificial intelligence and vision. The training provided the tools they needed: a catalog of existing algorithms, as well as the necessary protocols; the research then provided the necessary new data.

If you use Photoshop’s Unsharp Mask, you can judge for yourself the success of Marr’s efforts. The point is that Marr and Hildreth established a goal, observed behavior, and then went shopping to find which algorithms came closest to matching the observed behavior — in this case a combination of algorithms: applying a Gaussian filter to blur the image and remove structure, and the Laplacian to detect the ‘edges’, or differences of intensity, that remain.
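The combination is easy to play with today. A small sketch using Python with numpy and scipy; the synthetic image stands in for a real photograph, and this demonstrates the filter only, not Marr and Hildreth’s derivation:

```python
# Sketch of the Marr-Hildreth idea: blur with a Gaussian to remove fine
# structure, then take the Laplacian to find intensity changes (edges).
# The synthetic image below is a stand-in for a real photograph.
import numpy as np
from scipy import ndimage

# Synthetic test image: a bright square on a dark background.
image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0

blurred = ndimage.gaussian_filter(image, sigma=2.0)   # Gaussian: blur away fine structure
edges = ndimage.laplace(blurred)                      # Laplacian: respond to intensity changes

# Or both steps as the single Laplacian-of-Gaussian operator:
log = ndimage.gaussian_laplace(image, sigma=2.0)

# The edge lies where the response changes sign (the zero crossings).
print(np.count_nonzero(np.diff(np.sign(log), axis=0)))
```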

So now if I had the opportunity to work with the late Dr. Marr, and he asked me if I can ‘do’ algorithms, in my mind I hear: “Ohmigod, what am I doing here. Do you think anyone will notice if I slip out?”

Okay, but we’re not trying to invent artificial intelligence here

Now that we’ve gone from adding two apples to programming human sight, we’ll focus on algorithms located somewhere in between.

Though I wouldn’t have the background to work directly with Dr. Marr, this isn’t to say I can’t work with algorithms. Anytime I write code I make use of, or even create, algorithms. When I work with data, either in RDF format or as relational data, I am using algorithms. As Dr. Marr was an ‘expert’ in computer vision and neuroscience, I’m, equally, an expert in my field of interest.

Most of us work with algorithms, though we may not be aware of the fact. When we follow a recipe for Beef Wellington, the instructions for putting together a model airplane, or the pattern for a complex baby blanket, we’re following algorithms. And if we create a new recipe, knitting pattern, or computational theory of human behavior, we’re creating new algorithms–usually derived from existing ones, if possible. (It’s easier to work with existing, proven algorithms than to have to go through formal proofs with new ones.)

In other words: there is no ‘algorithm’ gene that some people have, and others don’t. One doesn’t need a PhD to work with algorithms; the ability to work with algorithms is pretty much universal.

Cool. So where was I? Oh yes, the business of algorithms.

Blogorithms

I had to do it before someone else did it. It was only a matter of time.

Now that weblogging has established its credibility (i.e. can be used to make money) and there are millions of us (“over 14 million served daily”), the interest in creating algorithms to make use of all the rich, seductive unstructured data we generate is very strong. Understandably so.

However, unlike previous research projects such as Dr. Marr’s, current weblogging effort seems to focus on the algorithms rather than the goal. Because of this, we’re measuring every last bit about ourselves, but not coming up with anything useful. By focusing on the tools rather than the end point we’re mixing search with popularity, marketing with discovery, and then we’re throwing in a little structured data–just to make things interesting.

For instance, looking at mixing search with popularity:

The Technorati 100, Blogdex, and Daypop Top 40 are all representatives of the same general type of algorithm: notification of update, extract the links, increment the count for any matched link, with the top n link holders placed on the list, ordered by number of links. Though the data is treated differently–Technorati persists the number of links per page, while Daypop and Blogdex are only interested in reflecting ‘fresh’ links–the concept, and hence the algorithm, is the same: when a link to a specific page is encountered, add a link to the source to a list, and increment a counter. Then adjust the list accordingly.
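Stripped to its core, that algorithm fits in a few lines. A sketch in Python, with a hypothetical stand-in for the update/ping data:

```python
# The popularity-list algorithm, stripped to its core: for every link seen
# in an updated post, bump a counter, then keep the top N. The update list
# here is a hypothetical stand-in for ping/notification data.
from collections import Counter

def top_links(updated_posts, n=40):
    counts = Counter()
    for post in updated_posts:
        for link in post["links"]:          # links already extracted from the post
            counts[link] += 1
    return counts.most_common(n)

updates = [
    {"source": "http://example.com/a", "links": ["http://example.org/story"]},
    {"source": "http://example.com/b", "links": ["http://example.org/story",
                                                 "http://example.net/other"]},
]
print(top_links(updates, n=2))
# [('http://example.org/story', 2), ('http://example.net/other', 1)]
```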

None of this activity has anything to do with search. Each tool may also grab data for a search component, but the algorithms for popularity are not specific to search. Where the two get confused is when popularity is used as a factor in search.

Adjusting search results based on popularity combines two different algorithms, but there is no rhyme or reason for doing so. The fact that one page is more ‘popular’ than another does not make it a better authority. It’s not the same as something like Google’s PageRank, because PageRank is not a measure of popularity.

Ian Rogers wrote a very nice writeup on the original PageRank algorithm, breaking down the formula into the various steps. Summarizing: PageRank is based on incoming links, but it is the value of each incoming link that helps push up the PageRank, and this value depends on how many outgoing links the linking page has.
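For reference, the original formula is PR(A) = (1 - d) + d * (PR(T1)/C(T1) + … + PR(Tn)/C(Tn)), where the Ti are the pages linking to A, C(Ti) is the number of outgoing links on Ti, and d is the damping factor. A minimal iterative sketch in Python, with a made-up toy link graph:

```python
# Minimal iterative PageRank, following the original formula
# PR(A) = (1 - d) + d * sum(PR(T)/C(T)) over pages T linking to A.
# The toy link graph below is made up for illustration.
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    pr = {page: 1.0 for page in pages}
    for _ in range(iterations):
        new_pr = {}
        for page in pages:
            incoming = (pr[other] / len(links[other])
                        for other in pages if page in links[other])
            new_pr[page] = (1 - d) + d * sum(incoming)
        pr = new_pr
    return pr

toy_graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
print(pagerank(toy_graph))
```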

This sounds like popularity but it isn’t. The whole purpose of the PageRank algorithm is to approximate a random surfer, and the probability that they would end up at the page after randomly clicking through so many pages. According to Sergey Brin and Lawrence Page’s original paper:

PageRank can be thought of as a model of user behavior. We assume there is a “random surfer” who is given a web page at random and keeps clicking on links, never hitting “back” but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank. And, the d damping factor is the probability at each page the “random surfer” will get bored and request another random page. One important variation is to only add the damping factor d to a single page, or a group of pages. This allows for personalization and can make it nearly impossible to deliberately mislead the system in order to get a higher ranking.

The PageRank in use today is not the same one just described; the one in use today is said to feature over 100 different variables. It’s not surprising that the computation has changed, but we have to suppose that the reasoning remains: its purpose is not to calculate popularity, but to capture the behavior of the random surfer. The only problem is, webloggers are not random surfers.

Weblogs combine the threaded chat behavior of a bulletin board system with the separate domains of more traditional web pages. As such, the genre creates a threaded web linking behavior that must play havoc with the traditional search engine paradigms. We don’t link just out of interest or as reference; we also link based on personal attachments, likes/dislikes, as part of self- or other-promotion, and a host of other reasons, none of which have anything to do with the traditional web linking of long ago.

danah boyd discussed this in a recent post, after reviewing 400 weblogs for linking patterns. She wrote:

Linking Patterns:

The Top 100 tend to link to mainstream media, companies or websites (like Wikipedia, IMDB) more than to other blogs (Boing Boing is an exception).

Blogs on blogging services rarely link to blogs in the posts (even when they are talking about other friends who are in their blogroll or friends’ list). It looks like there’s a gender split in tool use; Mena said that LJ is like 75% female, while Typepad and Moveable Type have far fewer women.

Bloggers often talk about other people without linking to their blog (as though the audience would know the blog based on the person). For example, a blogger might talk about Halley Suitt’s presence or comments at Blogher but never link to her. This is much rarer in the Top 100 who tend to link to people when they reference them.

Content type is correlated with link structure (personal blogs contain few links, politics blogs contain lots of links). There’s a gender split in content type.

When bloggers link to another blog, it is more likely to be same gender.

As danah mentioned, 400 weblogs is too few to extrapolate any global behavior, but we’ve seen one or more of these ourselves–in particular not linking to someone but giving the person’s name; or not even directly mentioning a name (a behavior that is becoming more common).

Whether there are gender differences in linking has been the subject of much debate. danah found in her examination of 400 weblogs that bloggers linked to their own gender more often than not. If this is true, and women account for about 50% of weblogs, then we should expect more weblogs by women in the popularity lists. That we don’t see this shows that we need to continue our observations before we can begin to derive algorithms related to weblogging and popularity, and weblogging and search.

As for marketing and discovery, and the thin vein of structured data (microformats, syndication feeds, FOAF, et al) that runs through all of this unstructured mess–this will make a good follow-on topic someday.

Onward

Mary Hodder had originally listed out several different metrics for consideration when it comes to developing an algorithm and asked if the approach she proposed was the right one:

So this is my first post think about making an open source algorithm. And I’m wondering, is this a useful approach? I think it could be worthwhile, done right, and I put it out there to the blogging community to determine what is best here. As I said, after seeing what people who want to work with smaller topic communities are doing, it may be in blogger’s interest to think about how this might be done so that is it more in keeping with the desires and views of the blogosphere.

The approach–reviewing all the different metrics, looking at representative data, searching for repetitive behaviors–is good, and, equally, all for nought unless the purpose of the effort is clearly understood. This is worth repeating: an algorithm is nothing more than a formalization of the steps necessary to meet a goal. And replacing the Technorati 100 because it ‘sucks’ is not a particularly good goal. So my suggestion to Mary would be to establish the end points for this effort first.

And now rumors abound that Technorati is being sold, possibly to a major search company. Well, that’s one way to get rid of the Technorati 100–convert it to the Yahoo 100. Semi-serious joking aside, if we know one thing by now about all of this, it’s that the unstructured data that weblogging sits on is this year’s hottest commodity–second only in value to the algorithms used to mine it.

Categories
Semantics Web

Snapshot in semantic time

Danny Ayers: I’m guessing Shelley’s comment was where the ‘lower-case semantic web’ thing originated.

You betcha. And I’m going after royalties, baby. I figure a dime for every mention, and my taxes will be paid and groceries bought. For a month.

Yup, I just hope that Technorati has lots of money. Lots and lots of money. Corante, too. And Marc Canter! He oughta be worth a car payment, at least. HeYAH. In the dough now, children. In the dough.

I also wish more folk would take the time to pull together the threads in a meaningful way like Peter Van Dijck did with the early semantic web discussions. Creating a tool to make it simple to create this type of page would be an excellent programming project. A project that could be used to… generate even more money!

money money money…MONEY. money money money…MONEY!

Nobody can’t say I ain’t got my priorities straight. No siree Bob.

update

The links to my writings in the paper haven’t survived the many weblog tool moves I’ve made. I’ve since found the posts, published below:

A Semantic Conversation

Deconstructing the Syllogistic Shirky