Recovered from the Wayback Machine.

Following on the theme of how we can have a lovely time in London, thanks to RDF and the seman…ooops! Semantic Web, rdfdata.org has pointed to another set of RDF data related to travel: OpenGuides.

According to the site, OpenGuides is a network of free, community-maintained “wiki” city guides to which anyone can contribute. More importantly, the organization promises to ensure openness of the data by providing RDF/XML for each travel node.

Now this is both great and a challenge. You see, this is a wiki. A ‘node’, in wiki parlance, is a page, so what you get is RDF/XML for each page’s pertinent information. For London, what you’ll get is:

<?xml version="1.0"?>
<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:dcterms="http://purl.org/dc/terms/"
  xmlns:foaf="http://xmlns.com/foaf/0.1/"
  xmlns:wiki="http://purl.org/rss/1.0/modules/wiki/"
  xmlns:chefmoz="http://chefmoz.org/rdf/elements/1.0/"
  xmlns:wn="http://xmlns.com/wordnet/1.6/"
  xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
  xmlns:os="http://downlode.org/rdf/os/0.1/"
  xmlns="http://www.w3.org/2000/10/swap/pim/contact#"
>

<rdf:Description rdf:about="">
  <dc:title>The Open Guide to London: Home</dc:title>
  <dc:date>2004-10-18T21:51:10</dc:date>
  <dcterms:modified>2004-10-18T21:51:10</dcterms:modified>
  <dc:contributor>Earle</dc:contributor>
  <dc:source rdf:resource="http://london.openguides.org/index.cgi?id=Home" />
  <wiki:version>70</wiki:version>
  <foaf:topic rdf:resource="#obj" />
</rdf:Description>

<geo:SpatialThing rdf:ID="obj" dc:title="Home">

  <!-- categories -->
  <dc:subject>Wiki Info</dc:subject>

  <!-- address and geospatial data -->
  <city>London</city>
  <country>United Kingdom</country>

  <!-- contact information -->

</geo:SpatialThing>

</rdf:RDF>


Of course, there is more to London than this. However, you have to access each wiki page, and then access that page’s RDF/XML file, to get each pertinent bit of information.
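To make that concrete, here is a minimal sketch in Python, using rdflib, of pulling down one node’s RDF/XML and listing its titles and coordinates. The ;format=rdf query parameter is an assumption extrapolated from the dc:source URL above, not a confirmed part of the OpenGuides interface; adjust it to however the wiki actually exposes its RDF.

# Minimal sketch: fetch one node's RDF/XML, list its titles and latitudes.
# The ";format=rdf" parameter is an assumption, not a confirmed API.
from rdflib import Graph, Namespace

DC = Namespace("http://purl.org/dc/elements/1.1/")
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

graph = Graph()
graph.parse("http://london.openguides.org/index.cgi?id=Home;format=rdf",
            format="xml")

for subject, title in graph.subject_objects(DC.title):
    print(subject, title)
for subject, lat in graph.subject_objects(GEO.lat):
    print(subject, "latitude:", lat)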

To be effective, one would have to build a bot trained to a wiki architecture (where links may or may not go to something that exists), one that can consume any and all related RDF/XML files. It could then be turned loose on a specific wiki, to return to its owner engorged with lots of juicy, fully fleshed data.

Num.

In other words, you would need a wiki-aware Smushbot.
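For flavour, here is a hedged sketch of what such a bot might look like, assuming the same index.cgi URL pattern and ;format=rdf parameter as above; every name in it is hypothetical.

# Hypothetical wiki-aware bot: walk links out from a seed node,
# tolerate links to pages that do not exist yet, and merge every
# node's RDF/XML into one graph. The URL scheme is an assumption.
import re
import urllib.request
from rdflib import Graph

BASE = "http://london.openguides.org/index.cgi"

def crawl(seed, max_nodes=50):
    merged = Graph()
    queue, seen = [seed], set()
    while queue and len(seen) < max_nodes:
        node = queue.pop(0)
        if node in seen:
            continue
        seen.add(node)
        try:
            # Parsing into the same Graph merges this node's triples in.
            merged.parse(BASE + "?id=" + node + ";format=rdf", format="xml")
            # Scrape the HTML for further node links; a dangling wiki
            # link simply fails harmlessly on a later iteration.
            page = urllib.request.urlopen(BASE + "?id=" + node)
            text = page.read().decode("utf-8", "replace")
            queue.extend(re.findall(r"id=([A-Za-z_0-9]+)", text))
        except Exception:
            continue  # broken link or unparseable RDF: move on
    return merged

london = crawl("Home")
print(len(london), "triples gathered")

The one thing rdflib will not do for you, and the part that makes it a Smushbot rather than a mere aggregator, is merging the different identifiers that pages use for the same resource, keyed on inverse functional properties such as foaf:mbox.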
