
Deconstructing the Syllogistic Shirky

Recovered from the Wayback Machine.

Clay Shirky published a paper titled The Semantic Web, Syllogism, and Worldview, and made some interesting arguments. However, overall, I must agree with Sam Ruby’s assessment: Two parts brilliance, one part strawman. Particularly the strawman part.

First, Clay makes the point that syllogistic logic, upon which hopes for the Semantic Web are based, requires context, and therein lie the dragons. He uses as an example the following syllogism:

– The creator of shirky.com lives in Brooklyn
– People who live in Brooklyn speak with a Brooklyn accent

From this, we’re to infer that Clay, who lives in Brooklyn, speaks with a Brooklyn accent. Putting this into proper syllogistic form:

People who live in Brooklyn speak with a Brooklyn accent
The creator of shirky.com lives in Brooklyn
Therefore, the creator of shirky.com speaks with a Brooklyn accent
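
To make the mechanics concrete, here is a toy Python sketch of the syllogism as bare data plus one rule. The triple vocabulary and the rule format are my own invention for illustration; no RDF tooling is involved.

    # Facts as subject/predicate/object triples; the machine "knows"
    # nothing about Brooklyn beyond string matching.
    facts = {("creator_of_shirky.com", "lives_in", "Brooklyn")}

    def apply_rule(triples):
        """If X lives_in Brooklyn, conclude X speaks_with Brooklyn_accent."""
        derived = set()
        for subj, pred, obj in triples:
            if (pred, obj) == ("lives_in", "Brooklyn"):
                derived.add((subj, "speaks_with", "Brooklyn_accent"))
        return derived

    print(facts | apply_rule(facts))
    # The accent conclusion pops out mechanically; whether the
    # generalization is actually true is the context the machine lacks.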

Leaving off issues of qualifiers (such as all or some), the point Clay makes is that context is required to understand the truth behind the generalization that people who live in Brooklyn speak with a Brooklyn accent:

Any requirement that a given statement be cross-checked against a library of context-giving statements, which would have still further context, would doom the system to death by scale.

Clay believes that generalities such as the one given require context beyond the ability of the medium, the Semantic Web, to support. He then goes on to say that we can’t disallow generalizations because the world tends to think in generalities.

I agree with Clay that people tend to think in generalities and that context is an essential component of understanding what is meant by these generalities. But Clay makes a mistake in believing that the proponents of the Semantic Web are interested in promoting a web that would be able to deduce such open-ended generalities as this, or that we are trying to create a version of Artificial Intelligence on the web. I can’t speak for others, but for myself, I have never asserted that the Semantic Web is Artificial Intelligence on the web (which I guess goes to show that machines aren’t the only ones capable of misconstruing stated assertions).

Clay uses examples from a few papers on the Semantic Web as demonstrations of what we’re trying to accomplish, including a book-buying experience, an example of trust and proof, and an example of membership based on an event. However, in all three cases, Clay has done exactly what he’s told the Semantic Web folks we’re guilty of: disregarded the context of all three examples. As Danny Ayers writes, Shirky is highly selective and misleading in his quotes.

In the first paper, Sandro was demonstrating a book-buying experience that sounds overly complex. As Clay wrote:

This example sets the pattern for descriptions of the Semantic Web. First, take some well-known problem. Next, misconstrue it so that the hard part is made to seem trivial and the trivial part hard. Finally, congratulate yourself for solving the trivial part.

The example does seem as if the trivial is made overly complex (and, unfortunately, invokes imagery of the old and tired RDF makes RSS too complex debate), but the truth is that Sandro was basing his example on the premise of how you would buy a book online if you didn’t know an online bookstore existed. In other words, Sandro was demonstrating how to uncover information without a starting point. Buying a book online may not have been the best example, but the concept behind it, the context as it were, is fundamental to today’s web; it’s also the heart of tomorrow’s Semantic Web, and the basis of today’s search engines, with their algorithmic deduction of semantics.
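
A crude sketch of discovery without a starting point, assuming a hypothetical catalogue of services described only by metadata (the service list and the "offers" vocabulary are invented for illustration):

    # In practice these descriptions would be published metadata gathered
    # from across the web; a local list stands in for them here.
    services = [
        {"url": "http://example.org/stores/acme", "offers": ["books", "music"]},
        {"url": "http://example.org/stores/sole", "offers": ["shoes"]},
    ]

    # We start from the kind of thing wanted, not from a known bookstore.
    booksellers = [s["url"] for s in services if "books" in s["offers"]]
    print(booksellers)  # ['http://example.org/stores/acme']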

As for Sean Palmer’s example, which makes an assertion about one person loving another and then uses a proof language to demonstrate how to implement a trust system, Clay writes:

Anyone who has ever been 15 years old knows that protestations of love, checksummed or no, are not to be taken at face value. And even if we wanted to take love out of this example, what would we replace it with? The universe of assertions that Joe might make about Mary is large, but the subset of those assertions that are universally interpretable and uncomplicated is tiny.

I agree with Clay that many assertions made online don’t necessarily have a basis in fact, and no amount of checksum technology will make these assertions any more true. However, the point Sean was making isn’t that we’re making statements about the truth of the assertion — few Semantic Web people will do this. No, the checksum wasn’t to assert the truth of the statement, but to verify the identity of the statement’s originator. The latter is something that is very doable and core to the concept of a web of trust — not that your statement is true, because even in courts of law we can’t always deduce this, but that your statement was made by you and was not hearsay.

In other words, the example may not have been the best, but the concept is solid.
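
To show how solid, here is a minimal sketch of verifying origin rather than truth, using Ed25519 signatures from the third-party cryptography package for Python; the choice of scheme is mine, since any digital signature mechanism makes the same point:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    statement = b"Joe loves Mary"

    # Joe signs the statement with his private key.
    joes_key = ed25519.Ed25519PrivateKey.generate()
    signature = joes_key.sign(statement)

    # Anyone holding Joe's public key can confirm the statement
    # originated with Joe and was not altered in transit. Nothing
    # here says the statement is true.
    try:
        joes_key.public_key().verify(signature, statement)
        print("Statement verifiably made by Joe.")
    except InvalidSignature:
        print("Statement was not made by Joe, or was tampered with.")

The verification succeeds or fails on who signed, never on whether Joe’s feelings are real.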

Finally, as to Aaron Swartz’s example of the salesman and membership in a club, Clay writes:

This is perhaps the high water mark of presenting trivial problems as worthy of Semantic intervention: a program that can conclude that 102 is greater than 100 is labeled smart. Artificial Intelligence, here we come.

Again, this seems like a trivial example — math is all we need to determine membership based on a count of items sold. However, the point Aaron was making was that in this case it was a count; in other cases membership could be inferred from other actions, and by having a consistent, shared inferential engine behind all of these membership tests, we do not have to develop new technology to handle each individual case — we can use the same model, and hence the same engine, for all forms of membership inference.
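
A sketch of what a shared engine buys you; the rule names and thresholds below are invented, but the point is that only the rules change while the inference machinery stays the same:

    # Membership rules as data: a conclusion plus a test over known facts.
    rules = [
        ("super_salesman_club", lambda f: f.get("items_sold", 0) > 100),
        ("veterans_club", lambda f: f.get("years_served", 0) >= 10),
    ]

    def infer_memberships(facts):
        """One generic engine evaluates every membership rule the same way."""
        return {name for name, test in rules if test(facts)}

    print(infer_memberships({"items_sold": 102}))   # {'super_salesman_club'}
    print(infer_memberships({"years_served": 12}))  # {'veterans_club'}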

Again, without the context behind the example the meaning is lost, and the bare words of the example as republished in Clay’s paper (and I wonder how many of the people reading Clay’s paper also read the three papers he references) seem trivial or overly pedantic. With context, this couldn’t be farther from the truth.

Following these arguments, Clay derives some conclusions that I’ll take one at a time. First, he makes the point that meta-data can be untrustworthy and hence can’t be relied on. I don’t think any Semantic Web person will disagree with him, though I think that untrustworthy is an unfortunate term, with its connotations of deliberate acts to deceive. But Clay is, again, mixing the Web of Trust and the Semantic Web, and the two are not necessarily synonymous (though I do see the Web of Trust being a part of the Semantic Web).

I use poetry as an example of my interest in the Semantic Web: I want to find poems that use a bird to represent freedom. I search on “bird as metaphor for freedom” and I find several poems people have annotated with their interpretation that the bird in the poem represents freedom. There is no inherent ‘truth’ in any of this — only an implicit assumption based on a shared conceptual understanding of ‘poetry’ and ‘subjectivity’. The context is that each person’s opinion of the bird as a metaphor for freedom is based on their own personal viewpoint, nothing more. After reviewing the poems, I may agree or not. The fact that the Semantic Web helped me find this subset of poems on the web does not preclude me from exercising my own judgement as to the people’s interpretations.
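
In rough Python terms, using the real rdflib library but an annotation vocabulary I’ve made up (ex:usesMetaphor, ex:represents), the search might look something like this:

    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/annotation#")
    g = Graph()

    # One reader's annotation: in this poem, the bird represents freedom.
    poem = URIRef("http://example.org/poems/caged-bird")
    g.add((poem, EX.usesMetaphor, Literal("bird")))
    g.add((poem, EX.represents, Literal("freedom")))

    # Find poems someone has read that way. The query returns opinions,
    # not truths; judging the interpretations is still up to the reader.
    for subject in g.subjects(EX.usesMetaphor, Literal("bird")):
        if (subject, EX.represents, Literal("freedom")) in g:
            print(subject)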

Clay also makes the statement that There is simply no way to cleanly separate fact from fiction, and this matters in surprising and subtle ways…. As an example, he uses a syllogism about Nike and people:

– US citizens are people
– The First Amendment covers the rights of US citizens
– Nike is protected by the First Amendment

Well, the syllogism is flawed, but disregarding that, the concept of the example again mixes the Web of Trust with the Semantic Web, and that’s an assumption that isn’t warranted by what most of us are saying about the Semantic Web.

Clay also mentions that the Semantic Web has two goals: one is to get people to use meta-data, and the other is to build a global ontology that pulls all this data together. He applauds the first while stating that the second is …audacious but doomed.

Michelangelo was recorded as having said:

My work is simple. I cut away layer after layer of marble until I uncover the figure encased within.

To the Semantic Web people there is no issue about building a global ontology — it already exists on the web today. Another bit of it is uncovered every time we implement another component of the model using a common, shared semantic model and language. There never was a suggestion that all metadata work cease and desist while we sit down on some mountaintop and fully derive the model before allowing the world to proceed.

FOAF, RSS, PostCon, Creative Commons — each of these is part of the global ontology. We just have many more bits yet to discover.
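
As one small illustration, here are a few FOAF statements built with the rdflib Python library; FOAF is the real vocabulary, while the person and URLs are invented:

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    me = URIRef("http://example.org/people/jane#me")

    # Three small assertions, each another bit of the larger picture.
    g.add((me, RDF.type, FOAF.Person))
    g.add((me, FOAF.name, Literal("Jane Doe")))
    g.add((me, FOAF.weblog, URIRef("http://example.org/weblog/")))

    print(g.serialize(format="turtle"))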

Clay’s most fundamental pushback against the Semantic Web work seems to be covered in the section labeled “Artificial Intelligence Reborn”, where he writes:

Descriptions of the Semantic Web exhibit an inversion of trivial and hard issues because the core goal does as well. The Semantic Web takes for granted that many important aspects of the world can be specified in an unambiguous and universally agreed-on fashion, then spends a great deal of time talking about the ideal XML formats for those descriptions. This puts the stress on the wrong part of the problem — if the world were easy to describe, you could do it in Sanskrit.

Likewise, statements in the Semantic Web work as inputs to syllogistic logic not because syllogisms are a good way to deal with slippery, partial, or context-dependent statements — they are not, for the reasons discussed above — but rather because syllogisms are things computers do well. If the world can’t be reduced to unambiguous statements that can be effortlessly recombined, then it will be hard to rescue the Artificial Intelligence project. And that, of course, would be unthinkable.

Again, I am unsure of where Clay derived his thinking that we’re trying to salvage the old Artificial Intelligence work from years ago. Many of us in the computer sciences knew this was a flawed approach almost from the beginning. That’s why we redirected most of our efforts into the more practical and doable expert systems research.

The most the proponents of the Semantic Web are trying to do is ask: if this unannotated piece of data on the web can be used in this manner, how much more useful could it be if we attached just a little bit more information to it?

(And use all of this to then implement our plan for world domination, of course; but then we don’t talk about this except on the secret lists.)

Contrary to the good doctors AKMA and Weinberger and their agreement with Clay as to worldview and its defeat of any form of global ontology, what they don’t take into account is that each worldview of the data is just another facet of the same data; each provides that much more completeness within the global ontology.

What do we know about the Soviet view of literature? Its focus was on Marxism-Leninism. What do we know about Dewey’s view of literature? That Christianity is first among the religions. The two facts, the two bits of semantic information, do not preclude each other. Both form a more complete picture of the world as a whole. If they were truly incompatible, people couldn’t have held both viewpoints in the same place, the earth, at the same time. We would have imploded into dust.

I do agree with Clay when he talks about Friendster and the assumption of ‘friendship’ based on the relationships described within these networks. We can’t trust that “Friend of” is an agreed-on classification between both ends of the implied relationship. However, the Semantic Web makes no assumption of truth in these assertions. Even the Web of Trust doesn’t — it only validates the source of the assertion, not the truth of the assertion.

Computers in the future are not going to come with built-in lie detectors.

I also agree, conditionally, with Clay when he concludes:

Much of the proposed value of the Semantic Web is coming, but it is not coming because of the Semantic Web. The amount of meta-data we generate is increasing dramatically, and it is being exposed for consumption by machines as well as, or instead of, people. But it is being designed a bit at a time, out of self-interest and without regard for global ontology. It is also being adopted piecemeal, and it will bring with it all the incompatibilities and complexities that implies.

Much of the incompatibility could be managed if we all followed a single model when defining and recording meta-data, but I do agree that meta-data is coming about based on specific need rather than a global plan.

But Clay’s reasoning is flawed if he believes that this isn’t the vision shared by those of us who work towards the Semantic Web.