Categories
Semantics

Walking in Simon’s Shoes

The editor for my book, Practical RDF, was Simon St. Laurent, well known and respected in XML circles. Some might think it strange that a person who isn’t necessarily fond of RDF, and especially RDF/XML, would edit a book devoted to both, but this is the way of the book publishing world.

Simon was the best person I’ve worked with on a book, and I’ve worked with some good people. More important, though, Simon wasn’t an RDF fanatic, pushing me into making less of the challenges associated with RDF, or more of its strengths. Neither of us wanted a rah-rah book, and Practical RDF is anything but.

I’ve thought back on many of the discussions about RDF/XML that happened here and there this last year. Simon’s usually been on the side less than enthusiastic toward RDF/XML, along with a few other people who I respect, and a few who I don’t. The blanket response from me and others has usually been something in the nature of, “RDF/XML is generated and consumed by automated processes, so people don’t have to look at the Big Ugly.” This is usually accompanied by a great deal of frustration on our part, because if people would just move beyond the ‘ugliness’ of RDF/XML, we could move on to creating good stuff.

(I say ‘good stuff’ rather than Semantic Web because the reactions to this term are best addressed elsewhere.)

However, the situation isn’t really that simple, or that easily dismissed. If pro-RDF and RDF/XML folks like myself are ever going to see this specification gain some traction, we need to walk a mile in our opponents’ shoes and acknowledge and address their concerns specifically. Since I know Simon the best, I’ve borrowed his shoes to take a closer look at RDF/XML from his perspective.

Simon has, as far as I know, three areas of pushback against RDF: he doesn’t care for the current namespace implementation; he’s not overly fond of the confusion about URIs; and he doesn’t like the syntax for RDF/XML, believing other approaches, such as N3, are more appropriate. I’ll leave URIs for another essay I’m working on, and leave namespaces for other people to defend. I wanted to focus on concerns associated directly with RDF/XML, at least from what I think is Simon’s perspective (because, after all, I’m only borrowing his shoes, not his mind).

The biggest concern I see with RDF/XML from an XML perspective is its flexibility. One can use two different XML syntaxes and still arrive at the same RDF model, and this must just play havoc with the souls of XML folks.

As an example of this flexibility, most implementations of RDF/XML today are based on RSS 1.0, the RDF/XML version of the popular syndication format. You can see an example of this with the RSS 1.0 file for this weblog.

Now, the XML for RSS 1.0 isn’t all that different from the XML for that other popular RSS format, RSS 2.0 from Userland — seen here. Both are valid XML, both have elements called channel and item, and title, and description and so on, and both assume there is one channel, but many items contained in that channel. From an RSS perspective, it’s hard to see why anyone would have so much disagreement with using RDF/XML, because it really doesn’t add much to the overhead for the syndication feed. In fact, I wrote in the past about using the same XML processing for RSS 1.0 as you would for RSS 2.0.

However, compatibility between the RDF/XML and XML versions of RSS is much thinner than my previous essay might lead one to believe. In fact, looking at RSS as a demonstration of the “XMLness” of RDF/XML causes you to miss the bigger picture, which is that RSS is basically a very simple, hierarchical syndication format that’s quite natural for XML; its very nature tends to drive out the inherent XML behavior within RDF/XML, creating a great deal of compatibility between the two formats. Compatibility that can be busted in the blink of an eye.

To demonstrate, I’ve simplified the index.rdf file down to one element, and defined an explicit namespace qualifier for the RSS items rather than use the default namespace. Doing this, the XML for item would look as follows:

<rss:item rdf:about="http://rdf.burningbird.net/archives/001856.htm">
<rss:title>PostCon</rss:title>
<rss:description></rss:description>
<rss:link>http://rdf.burningbird.net/archives/001856.htm</rss:link>
<dc:subject>From the Book</dc:subject>
<dc:creator>shelleyp</dc:creator>
<dc:date>2003-09-25T16:28:55-05:00</dc:date>
</rss:item>

Annotating all of the elements with the rss namespace qualifier does add to the challenge for RSS parsers that use simple pattern matching, because ‘title’ must now be accessed as ‘rss:title’; but the change still validates as valid RSS using the popular RSS Validator, as you can see with an example.
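Why does the prefix break pattern-matching parsers? A minimal sketch in Python (purely for illustration; no real feed reader is quoted here) shows the problem: a naive parser matching on the bare element name finds the title in the default-namespace form but not in the prefixed one, even though a namespace-aware XML parser treats the two as the same element.

```python
import re

# A naive feed reader that pattern-matches on the bare element name,
# rather than using a namespace-aware XML parser
title_pattern = re.compile(r"<title>(.*?)</title>")

default_ns = "<title>PostCon</title>"        # default-namespace form
prefixed = "<rss:title>PostCon</rss:title>"  # explicit rss: prefix

match_default = title_pattern.search(default_ns)   # finds "PostCon"
match_prefixed = title_pattern.search(prefixed)    # finds nothing
```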

Next, we’re going to simplify the RDF/XML for the item element by using a valid RDF/XML shortcut technique that allows us to collapse simple, non-repeating predicate elements, such as title and link, into attributes of the resource they’re describing. This change is reflected in the following excerpt:

<rss:item rdf:about="http://rdf.burningbird.net/archives/001856.htm"
rss:title="PostCon"
rss:link="http://rdf.burningbird.net/archives/001856.htm"
dc:subject="From the Book"
dc:creator="shelleyp"
dc:date="2003-09-25T16:28:55-05:00" />

Regardless of which format is used, the longer and more widely used approach or the shortcut, the resulting N-Triples are the same, and so is the RDF model. However, from an XML perspective, we’re looking at a major disconnect between the two versions of the syntax. In fact, if I were to modify my index.rdf feed to use the more abbreviated format, it wouldn’t validate with the same RSS Validator I used earlier. It would still be proper RSS 1.0, proper RDF/XML, and valid XML, but it sings a discordant note with existing understanding of RSS, whether RSS 1.0 or RSS 2.0.
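The disconnect is easy to see from a plain XML parser’s point of view. The following sketch, using only Python’s standard library and a trimmed-down version of the item, shows that the long form carries its predicates as child elements while the shortcut carries the very same predicates as attributes: identical RDF statements, two completely different XML shapes.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RSS = "http://purl.org/rss/1.0/"
DC = "http://purl.org/dc/elements/1.1/"

# Long form: predicates as child elements
long_form = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:rss="{RSS}" xmlns:dc="{DC}">
  <rss:item rdf:about="http://rdf.burningbird.net/archives/001856.htm">
    <rss:title>PostCon</rss:title>
    <dc:creator>shelleyp</dc:creator>
  </rss:item>
</rdf:RDF>"""

# Shortcut form: the same predicates collapsed into attributes
short_form = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:rss="{RSS}" xmlns:dc="{DC}">
  <rss:item rdf:about="http://rdf.burningbird.net/archives/001856.htm"
            rss:title="PostCon" dc:creator="shelleyp"/>
</rdf:RDF>"""

item_long = ET.fromstring(long_form)[0]
item_short = ET.fromstring(short_form)[0]

# The long form carries its predicates as child elements...
child_tags = [child.tag for child in item_long]
# ...while the shortcut has no children at all, only attributes
# (filtering out rdf:about, which names the resource itself)
attr_keys = [key for key in item_short.attrib if not key.endswith("}about")]
```

The same namespaced names (rss:title, dc:creator) show up once as element tags and once as attribute keys, which is exactly the sort of shape-shifting that XML tooling built around a fixed schema can’t abide.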

More complex RDF/XML vocabularies that are less hierarchical in nature stray further and further away from more ‘traditional’ XML, even though technically they’re all valid XML. In addition, since there are variations of shortcuts that are proper RDF/XML syntax, one can’t even depend on the same XML syntax being used to generate the same set of triples from one RDF/XML document to the next. And this ‘flexibility’ must burn, veritably burn, within the stomach of XML adherents, conjuring up memories of the same looseness of syntax that existed with HTML, leading to XML in the first place.

It is primarily this that leads many RDF proponents, as well as RDF/XML opponents, to prefer N3 notation. There is one and only one set of N3 triples for a specific model, and one and only one RDF model that generates a given set of N3 triples.
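For comparison, the single item from the earlier excerpt, written out in N3, might look like the following; this is a sketch assuming the standard RSS 1.0 and Dublin Core namespace URIs:

```n3
@prefix rss: <http://purl.org/rss/1.0/> .
@prefix dc:  <http://purl.org/dc/elements/1.1/> .

<http://rdf.burningbird.net/archives/001856.htm>
    a rss:item ;
    rss:title "PostCon" ;
    rss:link "http://rdf.burningbird.net/archives/001856.htm" ;
    dc:subject "From the Book" ;
    dc:creator "shelleyp" ;
    dc:date "2003-09-25T16:28:55-05:00" .
```

Whichever RDF/XML shape produced the model, the long form or the shortcut, this is the set of triples that comes out the other end.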

Aye, I’ve walked a mile in Simon’s shoes and I’ve found that they’ve pinched, sadly pinched indeed. However, I’ve also gained a much better understanding of why the earnest and blithe referral to automated generation and consumption of RDF/XML, when faced with criticism of the syntax, isn’t necessarily going to appease XML developers, now or in the future. The very flexibility of the syntax must be anathema to XML purists.

Of course, there are arguments in favor of RDF/XML that arise from the very nature of the flexibility of the syntax. As Edd Dumbill wrote relatively recently, RDF is failure friendly, in addition to being extremely easy to extend with its built-in understanding of namespace interoperability. And, as a data person rather than a syntax person, I also find the constructs of RDF/XML to be far more elegant and modular, more cleanly differentiated, than the ‘forever and a limb’ tree structure of XML.

But I’m not doing the cause of RDF and RDF/XML any good by not acknowledging how easy it is to manipulate the XML in an RDF/XML document, legitimately, and leave it virtually incompatible with XML processes working with the same data.

I still prefer RDF/XML over N3, and will still use it for all my applications, but it’s time for different arguments in this particular debate, methinks.

Categories
Healthcare

Image of a different kind

Lovely storm rolling through – more like ones we get in the Spring than the Fall.

In addition to the very welcome encouragement in the comments to my previous posting, I’m also getting some good discussion about photos and the resolution necessary for publication. Among them, a suggestion for a Photoshop plug-in that might help me salvage some of my existing photos for publication. It would be nice to do so, because there are some that I really like and that are going to be difficult to recreate with my film camera. Any and all suggestions, extremely welcome.

I spent the morning with a different type of photography – I had my MRI today rather than last Monday. The session was postponed from the earlier time because of a mixup in the sedatives, i.e. I didn’t get a chance to get the prescription for Valium filled from the doctor. Since the MRI is a closed one, I was strongly urged to get the sedatives if I have even the slightest tendency to claustrophobia.

Today, after taking two Valium, I felt I was ready to face the Machine.

I just know there are people reading this who have had MRIs and probably had no problems and slept through the thing. I wish I could say that you would have been proud of me, and that I was a brave little soldier, but I have to admit that Stavros is not the only Wonder Chicken around. I was okay until I was pushed into that long, dark, tiny tube and sedatives or not, I yelled, “Take me out! Take me out!”

The technician was wonderful, talked with me about what to expect, gave me a panic button and turned the lights in the tube on high. I lay back down and closed my eyes tight, and this time I was able to stay in the tube.

An MRI isn’t a quick snapshot like an X-Ray – mine took 45 minutes, and several images were captured based on different magnetic frequencies. With each, the machine would measure my respiratory rate and then match it. As I breathed in, it would stop; when I breathed out, it would make The Noise.

And what noise – even with headphones playing my favorite radio station, the sound shakes your bones and you find yourself clenching your teeth, hands, and various other body parts. I now know why the doctor told me not to drink much before going in.

I was okay until the second to the last image. There was a longish time between pictures, and the silence was actually worse than the vibration. I hollered out, “Are we done?”

No answer.

“Are we through?”

No answer.

“ARE WE FINISHED, PLEASE!”

No such luck, two more to go. I breathed faster, thinking to hurry it along. Instead of:

****in**** beeeeeep ****in**** beeeeeep ****in**** beeeeeep

The pattern became:

*in*beep*in*beep*in*beep*in*beep*in*beep*in*beep

I got a chance to see the last set of images before I left. Question: are we the shape we are because this is the optimum package to hold all those odd organs? Or are the organs odd because of our shape? Regardless, it’s rather interesting to see what you look like from the inside out.

As for the test, no worries. Routine stuff, now out of the way and I can focus on photography that’s much more interesting – taking photos of my world, from the inside out. However, I’m in a writing mood, a major writing mood, so be ready for words coming your way.

Categories
Critters

TRO For all horse meat plants set to same date

horses

Update on Front Range Equine Rescue et al v. Vilsack et al:

Responding to a filing yesterday, Judge Armijo agreed to set the expiration date for the TRO for Rains Natural Meats to the same date as the other two plants: October 31, 2013. By that time, Judge Armijo will have a decision in the case.

Rains Natural Meats has asked the court to include it in the bond set by Magistrate Judge Scott. In the meantime, the USDA has filed a Supplemental Administrative Record covering Rains. I have issued a FOIA for the associated documents. I am particularly interested in reading the communications related to not needing a wastewater permit from the Missouri DNR.

You can see all of these documents at Docs at Burningbird.

There was also a hearing in the Missouri court case related to Missouri DNR being prohibited from issuing wastewater permits for horse meat plants. I don’t have access to these court documents, but can guess from the docket filings (available on Case Net) that the purpose of the hearing was to expedite a decision on this case, too.

Categories
Photography

Your photos are beautiful but…

Recovered from the Wayback Machine.

Thank you for sending your query to ____________ magazine. Your photographs are beautiful! The magazine has not published photo essays in the past, but that may change in the future.

I was thrilled when the managing editor of a magazine, known for the beautiful photography it uses to annotate its stories, wrote the words, Your photographs are beautiful! to me in an email response to a proposal I sent. And though this didn’t lead immediately to a gig, the editor is passing the proposal on to the editor in chief for consideration. It’s from tiny acorns such as these that little oak trees of hope blossom.

First, though, I have to build up my photo library.

I’ve finished putting my photo albums together and I have some regrets that most of the photos, taken with a digital camera, can’t be used in publications because of their low resolution. Some, but not much.

The problem with film photography is that the costs can be prohibitive, especially if you use professional film and development. With the digital camera, there were no costs involved and I felt free to experiment, try new things, explore new territory. By doing this, I was able to find not only the type of photography I enjoy – journalistic photography, not what is known as ‘art’ photography – but also to post examples online and get excellent, brilliant, and spot on feedback from readers, as I wrote about a few days ago.

In the meantime, I’m using my low resolution photos when I send out ideas to publications. However, I am not just sending queries about possible photography assignments; I’m also sending ideas and suggestions for stories, essays, and articles to technical, fictional, travel, nature, and community-based publications. This is in addition to two book ideas I’m putting together – one on technology and one that’s social/political/cultural flavored. I’m fairly sure the technical one will get a nibble, and I have hopes for the other.

If you can’t tell from this flurry of activity, I’ve stopped trying to find a fulltime computing gig. If I can find small jobs, short term contracts or gigs working at home (or abroad, which would be even better), I’ll grab them – but my days as a full time technology architect working for a single company are over. I reached burn out in California, and it shows in the interviews. My resume is too good not to have a job; it’s not the resume or my knowledge or my experience – it’s been me.

Before you all howl “Don’t quit your day job!”, be aware that I am looking for employment, but right now, I’m focused on temporary and seasonal work, and whatever I can grab short term. Since I no longer have to worry about bill payments other than my car and health insurance, I can get by on earning smaller amounts of money — I don’t have to go just for the high priced architect jobs.

(Anyone want a damn good technical architect or senior level software developer for a short term assignment, at basement prices? Throw in a few rolls of film, and I’m yours.)

Dorothea wrote today that she doesn’t have a lot of patience with the do-your-dream crowd. I can see her point; you have to be practical. No one is going to take care of you, you have to take care of yourself. But when you’re pushing 50 (49 in a few weeks), sometimes your dreams are the only thing that keeps you going.

I know about doing what needs to be done – when you’ve ironed ties for a living, you can hack most anything. It’s been a while since I fried hamburgers or stocked shelves, but if I must, I will. Hopefully a new book and some articles will preclude having to pursue this option, knock on squishy white bread buns. However, regardless of what I do to pay for Zoe’s kitty kibbles, I am a writer. Nothing’s going to change this but going to sleep some day and not waking up again.

My only regret is that I’m too old to get good tips as a bar maid. Darn it.

Categories
RDF

PostCon

Recovered from the Wayback Machine.

The RDF vocabulary used throughout the examples for Practical RDF is PostCon, example here, a Post Content information dataset. The plan was that I would finish the book and then finish a Java implementation of PostCon, the application, using PostCon, the vocabulary, before the book hit the street.

What I wasn’t counting on was that I wouldn’t have a Tomcat server to run the Java application on when it was finished. I am running my own server, but it’s shared by other folks and at this point in time, a Tomcat server would be too much for it.

I also wasn’t counting on how tired I was once the book was finished. When you’ve worked on a book for two years, through four major rewrites trying to keep up with changing specifications and attitudes and tools, you get tired. I got tired.

However, PostCon the application begs to be created, and PostCon the vocabulary begs to be used.

So, what is PostCon? PostCon is a vocabulary that records information about a web resource, its movement, whether it’s been replaced, and why, and so on. It’s also an application that will maintain a history of your web content in a form that can be used to redirect HTTP requests when a resource is moved; track history of changes without the necessity of a complex change control system; and provide intelligent error handling when a resource is removed permanently. You can see the early prototype in action with this link.
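To make that concrete, here is a rough sketch of what a single PostCon record might look like in RDF/XML. Be warned: the namespace URI and every pstcn: property name here are illustrative guesses of mine for this post, not the actual vocabulary from the book:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:pstcn="http://burningbird.net/postcon/elements/1.0/">
  <rdf:Description rdf:about="http://rdf.burningbird.net/archives/001856.htm">
    <!-- hypothetical properties: status, new location, reason, and date -->
    <pstcn:status>moved</pstcn:status>
    <pstcn:movedTo rdf:resource="http://weblog.burningbird.net/archives/001856.htm"/>
    <pstcn:reason>Site reorganization</pstcn:reason>
    <pstcn:movedOn>2003-09-25</pstcn:movedOn>
  </rdf:Description>
</rdf:RDF>
```

A front end answering a request for the old URL could read a record shaped like this and issue a redirect to the new location, or build an intelligent error page from the recorded reason.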

The application has a user interface that allows one to query the PostCon record for a resource, add to it or modify it, and then persist the changes. Additionally, the application has a web services interface that can be utilized from other applications, such as weblog tools like the one I’m using for this page. Since the information about the change is persisted in a file (RDF/XML) rather than a database, other tools could access this information, such as webbots trying to find new resources, or checking to see if a resource is still viable.

The vocabulary is based on RDF, and serialized using RDF/XML, so other vocabularies can be plugged in, simply and easily. Information about the creator is maintained in the PostCon vocabulary and this can be tied to the creator’s FOAF file. If the web resource is a weblog page, trackback information can be used to add PostCon related items for the specific page. For that matter, comments can also be added as part of the history of the resource – after all, a commented weblog posting is different than the posting by itself.

The long and short of it is that I’m returning to working on PostCon, but rather than work on it in the background, I’m going to implement the pieces and document them here in this weblog. This will not only give me incentive to get off my butt and get this done, but it should also, I hope, give me some decent feedback if I’m pursuing a less than efficient implementation strategy.

To start, I’m going to review the PostCon vocabulary one more time, to see how I want to modify it considering new efforts with Pie/Echo/Atom (to be called Atom it seems – thanks to Morbus Iff cutting through the crap – yay Morbus). Next, I’ll implement simple pages that can be used to read in and modify the RDF/XML files for a specific resource. I’ll be implementing these in PHP so that they can be accessed from my server. Later I may translate these to Java and JSP.

Next, I’m creating a second RDF vocabulary, this one to be used by an event queue system. When a resource is moved or removed, not only will the front end update the associated RDF/XML file for the document, it will also update an event queue RDF/XML file, which will then track the actions to be performed on the server side. I prefer this rather than having the front end pages implement file destruction or movement, because it’s easier to secure a completely server-side application than one that’s half front end, half server.

In addition, by separating this layer of activity out, the application that will take the event queue information and do the actual work can be replaced depending on server-side languages supported, OS, that sort of thing.

I’ll create two versions of the application that processes the event queue — one in Java, one in Perl. The Java application won’t need a Tomcat server (no front end), and I don’t want to focus on just one language for this component of the entire system.

The final phase of implementing PostCon will be creating web services that can perform all of the functionality of the front-end interface functionality created in PHP. I’d like to implement these in Python and Perl. Perl because I want to try integrating this into a test copy of Movable Type; and Python because I want to improve my Python skills.

The code will be kept simple, and clean, with no frills. In addition, it’s pure open source, and can be copied, modified, and hopefully improved. When I’m finished, I’ll load all of the code to SourceForge.

I have other things to do, so I’m not going to be whipping this out over the next week, but it should be finished within the next month – knock on wood.