Categories
Semantics

Deconstructing the Syllogistic Shirky

Recovered from the Wayback Machine.

Clay Shirky published a paper titled The Semantic Web, Syllogism, and Worldview and made some interesting arguments. However, overall, I must agree with Sam Ruby’s assessment: Two parts brilliance, one part strawman. Particularly the strawman part.

First, Clay makes a point that syllogistic logic, upon which hopes for the Semantic Web are based, requires context, and therein lie the dragons. He uses the following syllogism as an example:

– The creator of shirky.com lives in Brooklyn
– People who live in Brooklyn speak with a Brooklyn accent

From this, we’re to infer that Clay, who lives in Brooklyn, speaks with a Brooklyn accent. Putting this into proper syllogistic form:

People who live in Brooklyn speak with a Brooklyn accent
The creator of shirky.com lives in Brooklyn
Therefore, the creator of shirky.com speaks with a Brooklyn accent

Leaving aside issues of qualifiers (such as all or some), the point Clay makes is that context is required to understand the truth behind the generalization about people living in Brooklyn and speaking with an accent:

Any requirement that a given statement be cross-checked against a library of context-giving statements, which would have still further context, would doom the system to death by scale.

Clay believes that generalities such as the one given require context beyond the ability of the medium, the Semantic Web, to support. He then goes on to say that we can’t disallow generalizations because the world tends to think in generalities.

I agree with Clay that people tend to think in generalities and that context is an essential component of understanding what is meant by these generalities. But Clay makes a mistake in believing that the proponents of the Semantic Web are interested in promoting a web that would be able to deduce such open-ended generalities as this; or that we are trying to create a version of Artificial Intelligence on the web. I can’t speak for others, but for myself, I have never asserted that the Semantic Web is Artificial Intelligence on the web (which I guess goes to show that machines aren’t the only ones capable of misconstruing stated assertions).

Clay uses examples from a few papers on the Semantic Web as demonstrations of what we’re trying to accomplish, including a book buying experience, an example of trust and proof, and an example of membership based on an event. However, in all three cases, Clay has done exactly what he accuses the Semantic Web folks of: disregarding the context of all three examples. As Danny Ayers writes, Shirky is highly selective and misleading in his quotes.

In the first paper, Sandro was demonstrating a book buying experience that sounds overly complex. As Clay wrote:

This example sets the pattern for descriptions of the Semantic Web. First, take some well-known problem. Next, misconstrue it so that the hard part is made to seem trivial and the trivial part hard. Finally, congratulate yourself for solving the trivial part.

The example does seem as if the trivial is made overly complex (and, unfortunately, invokes imagery of the old and tired RDF makes RSS too complex debate), but the truth is that Sandro was basing his example on the premise of how you would buy a book online if you didn’t know of the existence of an online bookstore. In other words, Sandro was demonstrating how to uncover information without a starting point. Buying a book online may not have been the best example, but the concept behind it, the context as it were, is fundamental to today’s web; it’s also the heart of tomorrow’s Semantic Web, and the basis behind today’s search engine functionality, with their algorithmic deduction of semantics.

As for Sean Palmer’s example, which makes an assertion about one person loving another and then uses a proof language to demonstrate how to implement a trust system, Clay writes:

Anyone who has ever been 15 years old knows that protestations of love, checksummed or no, are not to be taken at face value. And even if we wanted to take love out of this example, what would we replace it with? The universe of assertions that Joe might make about Mary is large, but the subset of those assertions that are universally interpretable and uncomplicated is tiny.

I agree with Clay that many assertions made online don’t necessarily have a basis in fact, and no amount of checksum technology will make these assertions any more true. However, the point that Sean was making isn’t that we’re making statements about the truth of the assertion — few Semantic Web people will do this. No, the checksum wasn’t to assert the truth of the statement, but to verify the identity of the originator of the statement. The latter is something that is very doable and core to the concept of a web of trust — not that your statement is true, because even in courts of law we can’t always deduce this; but that your statement was made by you and was not hearsay.

In other words, the example may not have been the best, but the concept is solid.

Finally, as to Aaron Swartz’s example of the salesman and membership in a club, Clay writes:

This is perhaps the high water mark of presenting trivial problems as worthy of Semantic intervention: a program that can conclude that 102 is greater than 100 is labeled smart. Artificial Intelligence, here we come.

Again, this seems like a trivial example — math is all we need to determine membership based on a count of items sold. However, the point Aaron was making was that in this case the test happened to be a count; in other cases membership could be inferred from other actions. By having a consistent and shared inferential engine behind all of these membership tests, we do not have to develop the technology to handle each individual case — we can use the same model, and hence the same engine, for all forms of membership inference.
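To make that a bit more concrete, here’s a minimal sketch of the kind of rule such a shared engine would chew on, written in N3. The ex: vocabulary is purely hypothetical, something I’m inventing for this post rather than quoting from Aaron’s paper, and the math: builtin is the one cwm-style reasoners provide:

@prefix ex: <http://example.org/sales#> .
@prefix math: <http://www.w3.org/2000/10/swap/math#> .

# hypothetical data: Joe has sold 102 units
ex:Joe ex:unitsSold 102 .

# hypothetical rule: anyone who has sold more than 100 units is a SuperSalesman
{ ?person ex:unitsSold ?count . ?count math:greaterThan 100 . } => { ?person a ex:SuperSalesman . } .

The arithmetic isn’t the point; the point is that the same rule syntax, and the same engine, could just as easily express membership based on attendance, payment, or anything else we can state as triples.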

Again, without the context behind the example the meaning is lost, and just the words of the example as republished in Clay’s paper (and I wonder how many of the people reading Clay’s paper also read the other three papers he references) seem trivial or overly pedantic. With context, this couldn’t be farther from the truth.

Following these arguments, Clay derives some conclusions that I’ll take one at a time. First he makes a point that meta-data can be untrustworthy and hence can’t be relied on. I don’t think any Semantic Web person will disagree with him, though I think that untrustworthy is an unfortunate term, with its connotations of deliberate acts to deceive. But Clay is, again, mixing web of trust and Semantic Web, and the two are not necessarily synonymous (though I do see the Web of Trust being a part of the Semantic Web).

I use poetry as an example of my interest in the Semantic Web. I want to find poems that use a bird to represent freedom, so I search on “bird as metaphor for freedom” and I find several poems people have annotated with their interpretation that the bird in the poem represents freedom. There is no inherent ‘truth’ in any of this — only an implicit assumption based on a shared conceptual understanding of ‘poetry’ and ‘subjectivity’. The context is that each person’s opinion of the bird as metaphor for freedom is based on their own personal viewpoint, nothing more. After reviewing the poems, I may agree or not. The fact that the Semantic Web helped me find this subset of poems on the web does not preclude me from exercising my own judgement about the people’s interpretations.
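Just to ground this, here’s a minimal sketch in N3 of what one such annotation might look like. The poetry: property names are hypothetical, invented here for illustration rather than taken from any existing vocabulary; dc: is just Dublin Core:

@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix poetry: <http://example.org/poetry#> .

# a hypothetical reader's interpretation of a hypothetical poem
<http://example.org/poems/123>
    dc:title "Some Poem About a Bird" ;
    poetry:metaphor [ poetry:image "bird" ; poetry:represents "freedom" ] ;
    poetry:interpretedBy <http://example.org/people/some-reader> .

A search for poems whose annotations pair bird with freedom would turn this resource up; whether the interpretation holds water is still my call after I read the poem.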

Clay also makes a statement that There is simply no way to cleanly separate fact from fiction, and this matters in surprising and subtle ways…. As an example, he uses a syllogism about Nike and people:

– US citizens are people
– The First Amendment covers the rights of US citizens
– Nike is protected by the First Amendment

Well, the syllogism is flawed, but disregarding that, the concept of the example again mixes web of trust with the Semantic Web, and that’s an assumption that isn’t warranted by what most of us are saying about the Semantic Web.

Clay also mentions that the Semantic Web has two goals: one is to get people to use meta-data, and the other is to build a global ontology that pulls all this data together. He applauds the first while stating that the second is …audacious but doomed.

Michelangelo was recorded as having said:

My work is simple. I cut away layer after layer of marble until I uncover the figure encased within.

To the Semantic Web people there is no issue about building a global ontology — it already exists on the web today. Bit by bit of it is uncovered every time we implement yet another component of the model using a common, shared semantic model and language. There never was a suggestion that all metadata work cease and desist as we sit down on some mountaintop somewhere and fully derive the model before allowing the world to proceed.

FOAF, RSS, PostCon, Creative Commons — each of these is part of the global ontology. We just have many more bits yet to discover.

Clay’s most fundamental pushback against the Semantic Web work seems to be covered in the section labeled “Artificial Intelligence Reborn”, where he writes:

Descriptions of the Semantic Web exhibit an inversion of trivial and hard issues because the core goal does as well. The Semantic Web takes for granted that many important aspects of the world can be specified in an unambiguous and universally agreed-on fashion, then spends a great deal of time talking about the ideal XML formats for those descriptions. This puts the stress on the wrong part of the problem — if the world were easy to describe, you could do it in Sanskrit.

Likewise, statements in the Semantic Web work as inputs to syllogistic logic not because syllogisms are a good way to deal with slippery, partial, or context-dependent statements — they are not, for the reasons discussed above — but rather because syllogisms are things computers do well. If the world can’t be reduced to unambiguous statements that can be effortlessly recombined, then it will be hard to rescue the Artificial Intelligence project. And that, of course, would be unthinkable.

Again, I am unsure of where Clay derived his thinking that we’re trying to salvage the old Artificial Intelligence work from years ago. Many of us in the computer sciences knew this was a flawed approach almost from the beginning. That’s why we redirected most of our efforts into the more practical and doable expert systems research.

The most the proponents of the Semantic Web are trying to do is ask: if this unannotated piece of data on the web can already be used in this manner, how much more useful could it be if we attach just a little bit more information to it?

(And use all of this to then implement our plan for world domination, of course; but then we don’t talk about this except on the secret lists.)

Contrary to the good doctors AKMA and Weinberger and their agreement with Clay as to worldview and its defeat of any form of global ontology, what they don’t take into account is that each worldview of data is just another facet of the same data; each provides that much more completeness within that global ontology.

What do we know about the Soviet view of literature? Its focus was on Marxism-Leninism. What do we know about Dewey’s view of literature? That Christianity is first among the religions. The two facts, the two bits of semantic information, do not preclude each other. Both form a more complete picture of the world as a whole. If they were truly incompatible, people in the world couldn’t have held both viewpoints in the same place, the earth, at the same time. We would have imploded into dust.

I do agree with Clay when he talks about Friendster and much of the assumption of ‘friendship’ based on the relationships described within these networks. We can’t trust that “Friend of” is an agreed on classification between both ends of the implied relationship. However, the Semantic Web makes no assumption of truth in these assertions. Even the Web of Trust doesn’t — it only validates the source of the assertion, not the truth of the assertion.

Computers in the future are not going to come with built-in lie detectors.

I also agree, conditionally, with Clay when he concludes:

Much of the proposed value of the Semantic Web is coming, but it is not coming because of the Semantic Web. The amount of meta-data we generate is increasing dramatically, and it is being exposed for consumption by machines as well as, or instead of, people. But it is being designed a bit at a time, out of self-interest and without regard for global ontology. It is also being adopted piecemeal, and it will bring with it all the incompatibilities and complexities that implies.

Many of the incompatibilities could be managed if we all followed a single model when defining and recording meta-data, but I do agree that meta-data is coming about based on specific need, rather than a global plan.

But Clay’s reasoning is flawed if he believes that this isn’t the vision shared by those of us who work towards the Semantic Web.

Categories
Semantics

Walking in Simon’s Shoes

The editor for my book, Practical RDF, was Simon St. Laurent, well known and respected in XML circles. Some might think it strange that a person who isn’t necessarily fond of RDF, and especially RDF/XML, would edit a book devoted to both, but this is the way of the book publishing world.

Simon was the best person I’ve worked with on a book, and I’ve worked with some good people. More importantly, though, Simon wasn’t an RDF fanatic, pushing me to make less of the challenges associated with RDF, or more of its strengths. Neither of us wanted a rah-rah book, and Practical RDF is anything but.

I’ve thought back on many of the discussions about RDF/XML that happened here and there this last year. Simon’s usually been on the side less than enthusiastic towards RDF/XML, along with a few other people who I respect, and a few who I don’t. My blanket response, and that of others, has usually been in the nature of, “RDF/XML is generated and consumed by automated processes and therefore people don’t have to look at the Big Ugly”. This is usually accompanied by a great deal of frustration on our part, because if people would just move beyond the ‘ugliness’ of RDF/XML, we could move on to creating good stuff.

(I say ‘good stuff’ rather than Semantic Web because the reactions to this term are best addressed elsewhere.)

However, the situation isn’t really that simple, or that easily dismissed. If pro-RDF and RDF/XML folks like myself are ever going to see this specification gain some traction, we need to walk a mile in the opponent’s shoes and acknowledge and address their concerns specifically. Since I know Simon the best, I’ve borrowed his shoes to take a closer look at RDF/XML from his perspective.

Simon has, as far as I know, three areas of pushback against RDF: he doesn’t care for the current namespace implementation; he’s not overly fond of the confusion about URIs; and he doesn’t like the syntax for RDF/XML, believing other approaches, such as N3, are more appropriate. I’ll leave URIs for another essay I’m working on, and leave namespaces for other people to defend. I wanted to focus on concerns associated directly with RDF/XML, at least from what I think is Simon’s perspective (because, after all, I’m only borrowing his shoes, not his mind).

The biggest concern I see with RDF/XML from an XML perspective is its flexibility. One can use two different XML syntaxes and still arrive at the same RDF model, and this must just play havoc with the souls of XML folks.

As an example of this flexibility, most implementations of RDF/XML today are based on RSS 1.0, the RDF/XML version of the popular syndication format. You can see an example of this with the RSS 1.0 file for this weblog.

Now, the XML for RSS 1.0 isn’t all that different from the XML for that other popular RSS format, RSS 2.0 from Userland — seen here. Both are valid XML, both have elements called channel and item, and title, and description and so on, and both assume there is one channel, but many items contained in that channel. From an RSS perspective, it’s hard to see why anyone would have so much disagreement with using RDF/XML, because it really doesn’t add much to the overhead for the syndication feed. In fact, I wrote in the past about using the same XML processing for RSS 1.0 as you would for RSS 2.0.

However, compatibility between the RDF/XML and XML versions of RSS is much thinner than my previous essay might lead one to believe. In fact, looking at RSS as a demonstration of the “XMLness” of RDF/XML causes you to miss the bigger picture, which is that RSS is basically a very simple, hierarchical syndication format that’s quite natural for XML; its very nature tends to bring out the inherent XML behavior within RDF/XML, creating a great deal of compatibility between the two formats. Compatibility that can be busted in the blink of an eye.

To demonstrate, I’ve simplified the index.rdf file down to one element, and defined an explicit namespace qualifier for the RSS items rather than use the default namespace. Doing this, the XML for item would look as follows:

<rss:item rdf:about="http://rdf.burningbird.net/archives/001856.htm">
<rss:title>PostCon</rss:title>
<rss:description></rss:description>
<rss:link>http://rdf.burningbird.net/archives/001856.htm</rss:link>
<dc:subject>From the Book</dc:subject>
<dc:creator>shelleyp</dc:creator>
<dc:date>2003-09-25T16:28:55-05:00</dc:date>
</rss:item>

Annotating all of the elements with the rss namespace qualifier does add to the challenge for RSS parsers that use simple pattern matching, because ‘title’ must now be accessed as ‘rss:title’, but the change still validates as valid RSS using the popular RSS Validator, as you can see with an example.

Next, we’re going to simplify the RDF/XML for the item element by using a valid RDF/XML shortcut technique that allows us to collapse simple, non-repeating predicate elements, such as title and link, into attributes of the resource they’re describing. This change is reflected in the following excerpt:

<rss:item rdf:about="http://rdf.burningbird.net/archives/001856.htm"
rss:title="PostCon"
rss:link="http://rdf.burningbird.net/archives/001856.htm"
dc:subject="From the Book"
dc:creator="shelleyp"
dc:date="2003-09-25T16:28:55-05:00" />

Regardless of which format is used, the longer, more widely used approach or the shortcut, the resulting N-Triples generated are the same, and so is the RDF model. However, from an XML perspective, we’re looking at a major disconnect between the two versions of the syntax. In fact, if I were to modify my index.rdf feed to use the more abbreviated format, it wouldn’t validate with the same RSS Validator I used earlier. It would still be proper RSS 1.0, and proper RDF/XML, and valid XML — but it sings a discordant note with existing understanding of RSS, whether RSS 1.0 or RSS 2.0.
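For the skeptical, here’s a rough sketch of the kind of N-Triples I’d expect either form to yield; the RSS 1.0 and Dublin Core namespace URIs are my assumption of the standard ones, and I’m only showing a few of the properties:

<http://rdf.burningbird.net/archives/001856.htm> <http://purl.org/rss/1.0/title> "PostCon" .
<http://rdf.burningbird.net/archives/001856.htm> <http://purl.org/rss/1.0/link> "http://rdf.burningbird.net/archives/001856.htm" .
<http://rdf.burningbird.net/archives/001856.htm> <http://purl.org/dc/elements/1.1/creator> "shelleyp" .
<http://rdf.burningbird.net/archives/001856.htm> <http://purl.org/dc/elements/1.1/date> "2003-09-25T16:28:55-05:00" .

Feed either the long form or the shortcut form to an RDF parser and, barring my own typos, this is the same set of statements you should get back.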

More complex RDF/XML vocabularies that are less hierarchical in nature stray further and further away from more ‘traditional’ XML even though technically, they’re all valid XML. In addition, since there are variations of shortcuts that are proper RDF/XML syntax, one can’t even depend on the same XML syntax being used to generate the same set of triples from RDF/XML document to RDF/XML document. And this ‘flexibility’ must burn, veritably burn, within the stomach of XML adherents, conjuring up memories of the same looseness of syntax that existed with HTML, leading to XML in the first place.

It is primarily this that leads many RDF proponents, as well as RDF/XML opponents, to prefer N3 notation. There is one and only one set of N3 triples for a specific model, and one and only one RDF model generated from a given set of N3 triples.

Aye, I’ve walked a mile in Simon’s shoes and I’ve found that they’ve pinched, sadly pinched indeed. However, I’ve also gained a much better understanding of why the earnest and blithe referral to automated generation and consumption of RDF/XML, when faced with criticism of the syntax, isn’t necessarily going to appease XML developers, now or in the future. The very flexibility of the syntax must be anathema to XML purists.

Of course, there are arguments in favor of RDF/XML that arise from the very flexibility of the syntax. As Edd Dumbill wrote relatively recently, RDF is failure friendly, in addition to being extremely easy to extend with its built-in understanding of namespace interoperability. And, as a data, not a syntax, person, I also find the constructs of RDF/XML to be far more elegant and modular, more cleanly differentiated, than the ‘forever and a limb’ tree structure of XML.

But I’m not doing the cause of RDF and RDF/XML any good by not acknowledging how easy it is to manipulate the XML in an RDF/XML document, legitimately, and leave it virtually incompatible with XML processes working with the same data.

I still prefer RDF/XML over N3, and will still use it for all my applications, but it’s time for different arguments in this particular debate, methinks.

Categories
RDF

PostCon

Recovered from the Wayback Machine.

The RDF vocabulary used throughout the examples for Practical RDF is PostCon, example here, a Post Content information dataset. The plan was that I would finish the book and then finish a Java implementation of PostCon, the application, using PostCon, the vocabulary, before the book hit the street.

What I wasn’t counting on was that I wouldn’t have a Tomcat server to run the Java application on when it was finished. I am running my own server, but it’s shared by other folks and at this point in time, a Tomcat server would be too much for it.

I also wasn’t counting on how tired I was once the book was finished. When you’ve worked on a book for two years, through four major rewrites trying to keep up with changing specifications and attitudes and tools, you get tired. I got tired.

However, PostCon the application begs to be created, and PostCon the vocabulary begs to be used.

So, what is PostCon? PostCon is a vocabulary that records information about a web resource, its movement, whether it’s been replaced, and why, and so on. It’s also an application that will maintain a history of your web content in a form that can be used to redirect HTTP requests when a resource is moved; track history of changes without the necessity of a complex change control system; and provide intelligent error handling when a resource is removed permanently. You can see the early prototype in action with this link.

The application has a user interface that allows one to query the PostCon record for a resource, add to it or modify it, and then persist the changes. Additionally, the application has a web services interface that can be utilized from other applications, such as weblog tools like the one I’m using for this page. Since the information about the change is persisted in a file (RDF/XML) rather than a database, other tools could access this information, such as webbots trying to find new resources, or checking to see if a resource is still viable.

The vocabulary is based on RDF, and serialized using RDF/XML, so other vocabularies can be plugged in, simply and easily. Information about the creator is maintained in the PostCon vocabulary and this can be tied to the creator’s FOAF file. If the web resource is a weblog page, trackback information can be used to add PostCon related items for the specific page. For that matter, comments can also be added as part of the history of the resource – after all, a commented weblog posting is different than the posting by itself.
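To give a feel for the kind of data involved, here’s a loose sketch of a PostCon-style record, written in N3 for brevity even though the files themselves will be RDF/XML. The pstcn: property names here are placeholders I’m making up for this post, not necessarily the names in the published vocabulary:

@prefix pstcn: <http://example.org/postcon#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# hypothetical history entry for one web resource
<http://rdf.burningbird.net/archives/001856.htm>
    pstcn:status "moved" ;
    pstcn:movedTo <http://rdf.burningbird.net/some-new-location.htm> ;
    pstcn:reason "site reorganization" ;
    pstcn:creator [ foaf:name "Shelley Powers" ] .

A webbot or an error handler could read a record like this and, without any database behind it, decide to issue a redirect to the new location.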

The long and short of it is that I’m returning to working on PostCon, but rather than work on it in the background, I’m going to implement the pieces and document them here in this weblog. This will not only give me incentive to get off my butt and get this done, but it should also, I hope, give me some decent feedback if I’m pursuing a less than efficient implementation strategy.

To start, I’m going to review the PostCon vocabulary one more time, to see how I want to modify it considering new efforts with Pie/Echo/Atom (to be called Atom it seems – thanks to Morbus Iff cutting through the crap – yay Morbus). Next, I’ll implement simple pages that can be used to read in and modify the RDF/XML files for a specific resource. I’ll be implementing these in PHP so that they can be accessed from my server. Later I may translate these to Java and JSP.

Next, I’m creating a second RDF vocabulary, this one to be used by an event queue system. When a resource is moved or removed, not only will the front end update the associated RDF/XML file for the document, it will also update an event queue RDF/XML file, which will then track the actions to be performed on the server side. I prefer this rather than having the front-end pages implement file destruction or movement because it’s easier to secure a completely server-side application than one that’s half front-end, half server.
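Again just a sketch, with made-up evq: names and in N3 for brevity, of what a single entry in that event queue file might look like:

@prefix evq: <http://example.org/eventqueue#> .

# hypothetical pending action recorded by the front end,
# to be picked up and executed by the server-side process
[] a evq:Event ;
    evq:action "move" ;
    evq:resource <http://rdf.burningbird.net/archives/001856.htm> ;
    evq:target <http://rdf.burningbird.net/some-new-location.htm> ;
    evq:status "pending" .

The server-side process just walks the queue, performs each action, and marks the entry as done.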

In addition, by separating this layer of activity out, the application that will take the event queue information and do the actual work can be replaced depending on server-side languages supported, OS, that sort of thing.

I’ll create two versions of the application that processes the event queue – one in Java, one in Perl. The Java application won’t need a Tomcat server (no front end), and I don’t want to focus on just one language for this component of the entire system.

The final phase of implementing PostCon will be creating web services that can perform all of the functionality of the front-end interface functionality created in PHP. I’d like to implement these in Python and Perl. Perl because I want to try integrating this into a test copy of Movable Type; and Python because I want to improve my Python skills.

The code will be kept simple, and clean, with no frills. In addition, it’s pure open source, and can be copied, modified, and hopefully improved. When I’m finished, I’ll load all of the code to Source Forge.

I have other things to do, so I’m not going to be whipping this out over the next week, but it should be finished within the next month – knock on wood.

Categories
Books RDF Writing

A kinder, gentler Slashdot…and friends

Today Practical RDF was reviewed at Slashdot, a fact I found out when some kind souls warned me so that I might prepare for the hordes marching in. However, Slashdot book reviews usually don’t generate the server stress that other Slashdot articles can, and the server was able to handle the additional load with ease. This now makes the second time I’ve been slashdotted and lived to tell the tale. Third’s the charm, they say.

It was a nice review, and I appreciated the notice and the kind words. In fact, I’ve had very positive reviews across the board for the book, which is very gratifying for me and for Simon St. Laurent, the lead editor. I’ll probably earn ten cents for every hour I spent on the book, but at least I can feel satisfaction that it’s helping folks and the writing is respected and seen as a quality effort. That’s pretty damn important for a writer – worth more than bucks.

Well, bucks are nice, too.

Speaking of Simon and the book, I was reminded that I owe some articles on RDF and Poetry, and a view of RDF from inside the XML clan, and a few other odds and ends. Hopefully this nice little push will energize me again and I can get these written. It’s been a while since I’ve delighted in the act of writing.

I also wanted to thank the folks for the thoughtful comments in the Tin Can Blues posting. I must also admit I lied in the posting – horrors! – but the lie was unintentional. I forgot that when I worked at Express Scripts earlier this summer that one of the people I worked with started weblogging just as I was leaving. I still remember the shock I received coming around a corner and seeing him read my weblog. As to the question whether your writing changes when you meet those who read it, I remember that for two weeks after that incident, I focused almost exclusively on photography and technology.

There is no ‘right’ or ‘wrong’ answer to the issue of meeting webloggers in the flesh. I think it really is up to the person, and the opportunities, as many of you noted. For myself, several St. Louis webloggers and others passing through the community have invited me to events, cook outs, coffee, and beers; all of the people are terrific folks, and I know they would be a real treat in person. But it’s not easy for me to mix my worlds.

Ultimately, for all the chatter I’ve indulged in online, I have become somewhat of a reclusive person; uncomfortable with larger gatherings (i.e. more than three people), quiet at any events other than professional ones. I love to speak at conferences, but I find corners to inhabit when I’m finished. This person in this weblog – assertive, outgoing, and anything but shy – is the real me; but so is the physical person who runs from parties and get togethers, and I just don’t know how to reconcile the two.

I do know that my not meeting people in the flesh doesn’t diminish my genuine affection for the people I’ve met and come to admire, respect, and like through this virtual medium, and maybe that’s all that matters.

(Po-ll-y-a-nn-a!! This sounds good, but I don’t think it’s that simple. I can see a time when friends met online but never in person become less tangible than the ones whom we’ve pressed the flesh with, in one way or another. Our presence will begin to thin as it stretches to meet always and continuously across the void; touching through the mists, our essence flows around the shadows cast by the real, becoming increasingly transparent – true ghosts in the machine.

Or maybe I’m just tired. And maudlin. Time for new topics…)

Speaking of people I’ve not pressed flesh with, Liz writes about Google search hits, mentioning the phrases she now ‘owns’, such as “introvert extrovert”. I checked my stats and find that I own or partially own several phrases including ‘parable’ (number two), Shelley (number one), and ‘love sentences’ (number two).

I thought it was funny that Dorothea, Liz, and I have part ownership of the word ‘frustration’ – Dorothea at sixth, me at eighth, and Liz at ninth. See what all of you guys are doing to us?

The most problematic phrase I own is ‘baby squirrels’. Yup, search on baby squirrels and there I am, Kicking the Baby Squirrels, Again. I get a lot of visitors for ‘baby squirrels’.

I also own the number two position for the phrase ‘virtual friends’. I’d rather own ‘real friends’ but that’s owned by cats.

PS Nobody make AKMA laugh for the next week.

Categories
Semantics

Slashdot review of book

Dorothea just sent me a heads up that Practical RDF has been reviewed at Slashdot. All in all, a nice review, and I appreciate the author, Brian Donovan, writing it.

In fact, all the reviews I’ve read — at O’Reilly, Amazon, Programming Reviews, and elsewhere — have been very positive. Most of the criticism has been on the book organization, with some people wanting more coverage of the specs, some wanting more coverage of the tools, and some wanting less coverage of the XML. However, add all of the comments together and the organization probably fits the audience about as well as it could, considering the topic.

Though this site is linked from the review, I’m not getting many hits, not compared to the traffic I received for Parable of the Languages. Probably a good thing because I’m maintaining this server, and slashdot proofing is a tough SysAdmin job.

Thanks Dorothea, for pointing out the review.

PS Thanks also to Rev Matt for heads up, as well as nice note in the /. thread.

Original archived at Wayback Machine