Categories
Specs Web

Joel Spolsky: Crap is good

Recovered from the Wayback Machine.

Joel Spolsky just spent several thousand words and accompanying diagrams saying one thing: we did things crappy in the past, and we should continue doing things crappy in the future, because crap is easy.

Where do I start?

This upcoming battle will be presided over by Dean Hachamovitch, the Microsoft veteran currently running the team that’s going to bring you the next version of Internet Explorer, 8.0.

At a minimum, Microsoft can go off and do its own thing in total isolation, and in the long run Microsoft will end up being the loser. The more I work with SVG and the new CSS, the more I find that I can develop using the new technologies and still have the page work in IE–I just don’t have to make it look the same in IE. As long as the page is clean, legible, and accessible via IE, it doesn’t have to look the same there as it does in the Big Three (Firefox, Safari, and Opera).
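To make that concrete, here’s a minimal sketch of the kind of fallback I mean (the chart.svg file is made up): browsers that understand SVG render the graphic, while IE quietly renders the plain markup inside the object element instead, and the page stays legible.

    <object type="image/svg+xml" data="chart.svg">
      <!-- IE can't render the SVG, so it displays this fallback -->
      <p>Page views rose steadily from 2005 through 2007.</p>
    </object>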

So I’d say that Hachamovitch is a player, but only to the extent that Microsoft wants to be a part of a larger community.

In practice, with the web, there’s a bit of a problem: no way to test a web page against the standard, because there’s no reference implementation that guarantees that if it works, all the browsers work. This just doesn’t exist.

Question: can you see this page?

There is no practical way to check if the web page you just coded conforms to the spec.

Question: can you see this page?

There are validators, but they won’t tell you what the page is supposed to look like, and having a “valid” page where all the text is overlapping and nothing lines up and you can’t see anything is not very useful. What people do is check their pages against one browser, maybe two, until it looks right. And if they’ve made a mistake that just happens to look OK in IE and Firefox, they’re not even going to know about it.

I’m trying to untangle this one mentally, and failing. What Spolsky seems to be saying is that standards don’t matter, because people don’t test in all browsers, and standards somehow make lines not line up. Or something.

He can’t possibly be saying that standards break the web. Can he?

Actually, he can.

Standards are a great goal, of course, but before you become a standards fanatic you have to understand that due to the failings of human beings, standards are sometimes misinterpreted, sometimes confusing and even ambiguous.

The precise problem here is that you’re pretending that there’s one standard, but since nobody has a way to test against the standard, it’s not a real standard: it’s a platonic ideal and a set of misinterpretations, and therefore the standard is not serving the desired goal of reducing the test matrix in a MANY-MANY market.

DOCTYPE is a myth.

A mortal web designer who attaches a DOCTYPE tag to their web page saying, “this is standard HTML,” is committing an act of hubris. There is no way they know that. All they are really saying is that the page was meant to be standard HTML. All they really know is that they tested it with IE, Firefox, maybe Opera and Safari, and it seems to work. Or, they copied the DOCTYPE tag out of a book and don’t know what it means.

There are at least four separate thoughts in these few seemingly related paragraphs. First: there really are no standards, because standards are a thing of the mind. Second: because standards are a thing of the mind, one can’t test pages against a standard. One such standards thing is DOCTYPE, which really doesn’t exist, because no one knows what it does and people just copy it anyway. Therefore…

I must admit to getting lost at this point. Who’s on first?

And so if you’re a developer on the IE 8 team, your first inclination is going to be to do exactly what has always worked in these kinds of SEQUENCE-MANY markets. You’re going to do a little protocol negotiation, and continue to emulate the old behavior for every site that doesn’t explicitly tell you that they expect the new behavior, so that all existing web pages continue to work, and you’re only going to have the nice new behavior for sites that put a little flag on the page saying, “Yo! I grok IE 8! Give me all the new IE 8 Goodness Please!”

And indeed that was the first decision announced by the IE team on January 21st. The web browser would accommodate existing pages silently so that nobody had to change their web site by acting like the old, buggy IE7 that web developers hated.
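For the record, the “little flag” in question is a version-targeting meta tag. A page that wants the new rendering says something like:

    <meta http-equiv="X-UA-Compatible" content="IE=8" />

and everything without the flag gets the old IE7 emulation by default. Back to Spolsky: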

A pragmatic engineer would have to come to the conclusion that the IE team’s first decision was right. But the young idealist “standards” people went nuclear.

It’s been a long time since I’ve been called a “young idealist”. I wonder how Sam Ruby likes being called a young idealist. I’m surprised Spolsky didn’t pat us all on the head and offer us a cookie. But wait, it gets better…

Almost every web site I visited with IE8 is broken in some way. Websites that use a lot of JavaScript are generally completely dead. A lot of pages simply have visual problems: things in the wrong place, popup menus that pop under, mysterious scrollbars in the middle. Some sites have more subtle problems: they look ok but as you go further you find that critical form won’t submit or leads to a blank page.

Fancy that…this young idealist’s web sites both worked with IE8, right out of the box. In fact, the only problem I’ve had with IE8 is with Netflix, and that’s because of its ActiveX controls, which have nothing to do with standards.

I think we’ll find that most web sites don’t break with IE8, or if they do, they’re just as likely to break with Firefox 3b, and Opera 9.5b, and the latest WebKit. There’s a reason you have a long beta period for a browser–to give people time to make any necessary fixes, so the browser works with the page once it’s released out of beta.

True, there are sites that will continue to break with IE8 once it’s released. If you want to find them, go to the geocities.com web sites and search on muscle cars. Better yet: “Unicorn rainbow pony”. Heck, even most of those will probably work.*

Some of those pages can’t be changed. They might be burned onto CD-ROMs. Some of them were created by people who are now dead. Most of them created by people who have no frigging idea what’s going on and why their web page, which they paid a designer to create 4 years ago, is now not working properly.

So the web has to stop because a web site has been burned on a CD, or the person who created the site is dead? Isn’t that equivalent to saying, “No, you can’t have Blu-ray, because I still have VHS tapes”? Or maybe more in line with, “No, you can’t have that vaccine because there are people in the world who think the plague is caused by evil spirits, and we have to halt our practice of medicine until they catch up.”

You know, it is OK to let old pages break. There is nothing so valuable online today that we have to halt all further progress of the web because of the off chance a page won’t be viewable in a modern browser. If it were truly that valuable, it wouldn’t be that vulnerable.

Leaving aside vapid, sexist twaddle such as, Mmhmm. All you smug idealists are laughing at this newbie/idjit. The consumer is not an idiot. She’s your wife. So stop laughing (and it doesn’t matter where the quote comes from, Joel, only your use of it to prove a point), Spolsky’s whole pitch is basically a race for the bottom: crap has happened in the past, and therefore we should continue supporting crap in the future. Not only support old crap, but encourage new crap, because, frankly, people are too stupid to learn how to do things right. She’s your wife, indeed.

In response to Spolsky’s writing, Sam Ruby wrote, If people want web browsers that work with actual web sites, they still have three choices. Three good, solid choices, created by three organizations populated by people who don’t believe we have to be stuck with muscle cars, unicorns, rainbows, and ponies forever.

*Do scroll down the page and look at the comment annotating the page view counter.

Categories
Specs

XHTMLate WordPress comments

Recovered from the Wayback Machine.

I’ve pulled the plug-in. It cleaned out the comment text, but not the name, URL, and email of the person. The email isn’t an issue, as WP ensures the email is clean; the URL and the name, however, are still an issue. A new comment isn’t the problem; edited comments are.
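For anyone wondering how a name or URL can break a page: in XHTML, one unescaped character is fatal. A made-up example–a commenter’s URL with a raw ampersand kills the entire page when it’s served as application/xhtml+xml:

    <!-- fatal parse error in application/xhtml+xml -->
    <a href="http://example.com/?a=1&b=2">a commenter</a>

    <!-- what the plug-in has to produce instead -->
    <a href="http://example.com/?a=1&amp;b=2">a commenter</a>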

Frankly, if you’re going to serve your pages up as XHTML, your best bet is to moderate comments so you can catch every variation of something that can go wrong. Either that, or get rid of comments, which is also an option.

I’ll post a new version, once I’ve checked those fields, and completed a few other odds and ends.

Categories
Specs

Accessibility, Microformats, and rule by mob

Recovered from the Wayback Machine.

Bob DuCharme has a guest post by the Chief Technology Strategist for the Commonwealth of Massachusetts, Sarah Bourne, on accessibility issues associated with microformats. She mentions both the abbr and include design patterns that others, most commonly Joe Clark, have brought up in the past.
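For those who haven’t followed the debate: the abbr design pattern tucks machine-readable data into the title attribute of an abbr element, as in this hCalendar-style sketch:

    <abbr class="dtstart" title="2008-03-12T13:30:00-06:00">
      March 12th, 1:30 PM
    </abbr>

A screen reader set to expand abbreviations reads the ISO timestamp aloud instead of the human-friendly text, which is the heart of the accessibility complaint.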

Ms. Bourne also has an interesting note to make on the nature of the microformat effort:

I suspect that the problems with microformats lie in the fact that they are being developed by a voluntary group instead of an established standards body. The community structure certainly leads to quicker decisions, but they are not as well vetted with a broader audience. Conflicts may not appear until their decisions have been put into practice.

Standards by general consensus rarely work out. For instance, the HTML5 working group has 504 members. How the heck can you get anything accomplished with 504 members? What happens with a group this size is that either nothing happens, or a few of the more vocal, and assertive, members end up dominating the group–in which case you don’t really have a standards working group: you have George and Jane, and the backup singers.

Update: Ms. Bourne actually linked to Isofarro, not Joe Clark. Isofarro features Joe’s micropatronage badge prominently in the header. I once thought the site was Joe’s, myself.

Categories
Burningbird Specs

IE8, XHTML, and what am I going to do with my site?

Recovered from the Wayback Machine.

I thought it interesting, and even odd, how few people have remarked on the fact that Ray Ozzie began the opening keynote of a conference aimed specifically at developers by talking about ads.

My source for things geek, Planet Intertwingly, has had very few entries devoted to IE8. I imagine people either don’t care or are trying things out. Or perhaps they’re at ETech or on their way to SxSW. What a way to filter your audience: schedule the conferences at the same time. What sad irony that Ozzie next spoke of the Yahoo deal, just as Yahoo itself was launching its latest, greatest tech initiative, which was then overshadowed by Microsoft rolling out the IE8 public beta.

Not to be outdone, Apple has something today, probably about its SDK. All we’re missing is something from Adobe, but it has preferred to dance alone.

To return to IE8. One doesn’t have to tax one’s imagination to read the purpose behind the ‘advances’ in IE8. All of the new functionality is focused on Microsoft’s new “cloud” agenda, including client-side data storage, support for working offline, and back button navigation. According to the “readiness” document I linked yesterday:

Internet Explorer 8 provides a simplified yet powerful programming model for AJAX development that spans browser, webpage, and server interaction. As a result, it is easier for you to build webpages that have much better end-user experiences, are more functional, and have better performance. APIs are based on the W3C HTML 5.0 or Web Applications Working Group standards. Enhancements or novel intellectual property for AJAX will be made available for standardization before the Internet Explorer 8 release.
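The storage and navigation bits map onto APIs you can poke at in the beta now. A rough sketch, assuming a page that wants to remember where the reader was as the back button moves through #fragment views:

    <script type="text/javascript">
    // DOM storage persists data on the client between visits;
    // onhashchange fires as back/forward move through fragments.
    window.onhashchange = function () {
      window.localStorage.setItem("lastView", window.location.hash);
    };
    </script>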

The thing is, HTML5 is most definitely a work in progress. What Microsoft has done is cherry-pick what it wanted, implement it, throw in its own stuff, and then gloss it all over by either attaching its own bizarre “open source” license or tossing the non-critical bits into the public domain.

The proprietary bits aside, it is typical for vendors to start implementing standards before they’re finalized, as a test and a validation. Just as typically, though, the other members of the standards group are usually aware of such plans. I am curious to hear what other members of the HTML5 working group think of IE8 and the HTML5 bits.

As for me, not hard to see that I’m unhappy. I have a choice now: do I continue to serve this site using the XHTML MIME type, in which case it will never be accessible by IE (because I now believe Microsoft will never support the XHTML MIME type); or do I “break” my site by adding back content negotiation?
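For those who haven’t played with it: content negotiation just means reading the browser’s Accept header and serving whichever MIME type it claims to handle. Roughly, with the headers abbreviated:

    Firefox sends:  Accept: application/xhtml+xml, text/html;q=0.9, ...
    so I serve:     Content-Type: application/xhtml+xml

    IE sends:       Accept: ... */*  (no application/xhtml+xml in sight)
    so I serve:     Content-Type: text/html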

I wrote previously that I had a plan I was going to implement if Microsoft didn’t support XHTML with IE8. In the back of my mind, I really thought the company would. Not to do so is the company saying that, for all its talk about standards and openness, it will implement only those standards that support its own agenda, and no others. While I expected this attitude, I didn’t expect Microsoft to be so obvious about it.

I really didn’t expect Microsoft to blow off XHTML, and now that it has, I have some work to do on my sites to follow through on my fallback plan. I’m not doing anything earth shattering, or probably all that interesting to most folks (since, seemingly, standards take a back seat to ads for today’s new web developer). I’m just dealing with the situation.

I’m also investigating Drupal, as a content tool–either alone or perhaps with WordPress. I’ve been interested in Drupal since I started looking through the site and the code base. I became more interested when Maki mentioned the SVG Toolkit for Drupal, and Elaine talked about how improved it is. Then Ian Davis at Nodalities mentioned Drupal’s RDF and semantic web commitment yesterday, and that’s all she wrote for me.

The Drupal folks seem more committed to supporting standards, all standards, than the WordPress folk. And when I read something about Drupal, I read about the technology; I don’t read about ads or mergers. This focus on technology appeals to me right now.

Categories
RDF Semantics Specs Web

Semantic web: dull as dishwater edition

Recovered from the Wayback Machine.

Mathew Ingram has decided that the problem with the semantic web is that it’s as boring as dry toast. Of course, by Mathew’s standard, all the stuff that makes the web work is also boring as hell. It’s probably a good thing, then, that some people looked beyond the need for immediate titillation when it comes to the tech underlying this environment, or Mathew’s audience for his opinions would be his immediate family members, and perhaps those neighbors not quick enough to run away when seeing him approach.

He also writes:

It’s all about plumbing and widgets and data standards, all of which have names like FOAF and TOTP and SIOC and whatnot. It’s right off the dork-o-meter. The Lone Gunmen from The X-Files would have a hard time getting interested in this stuff, let alone anyone who isn’t married to their slide rule or their pocket protector.

Now, taking Mathew’s complaints of No glitter! No glitter! Mama, Mama, where’s my glitter! seriously, I decided to put my slide rule down for a sec and see if I couldn’t respond to his one statement about no one knowing what this all means.

First, there was the web. The web was dumb, but it was hyperlinked.

Then, there was search. Search followed hyperlinks, scraped pages, massaged keywords and tested the strength of the links. The web was still dumb, but number crunching helped generate some smarts. Think of your favorite dog. Yeah, that smart.

Next, there was the semantic web. The semantic web says, You and I can derive understanding from this blob of text on this page, but applications can’t. Applications can pull keywords and run algorithms, but can only approximate what this blob of text is all about. What if we add a little information to this blob of text so that applications don’t have to crunch numbers or make guesses as to what we mean?

How do we add a little information? A hundred different ways. We can use microformats, or RDFa, or RDF, or whatever the HTML5 people cook up for us. With this little bit of extra information, an application can access a web page list that’s created with UL/LI elements, but instead of having to look at the text in the list and try to guess what the list is all about, it can read that little bit of data and know that the list consists of recommended books. Perhaps it can hand that little list of books to another application that looks them up at Amazon. Or at our library. Or, better yet, lets us click a button and load all the books into our Kindle. (Assuming that Mathew doesn’t subscribe to the Steve Jobs school of thought: “We don’t read, we ain’t got no books, gimme the vids”.)
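A sketch of what that little bit of extra information can look like, using RDFa with the Dublin Core vocabulary (the book and the class name are invented for illustration):

    <ul class="book-list" xmlns:dc="http://purl.org/dc/elements/1.1/">
      <li><cite property="dc:title">A Book About Ponies</cite> by
          <span property="dc:creator">Jane Author</span></li>
    </ul>

The text reads exactly the same to you and me; the property attributes are what tell an application that it’s looking at a book title and an author.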

The little bit of information might, instead, be an address for an event, triggering the browser to add that event information to a desktop calendar application.

It could be information about people we know and how we know them, so that when we move from Facebook, which is today’s darling, to MyPowerBase, we can tell MyPowerBase to add all people who we have defined as friends, but not those defined as just contacts.
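That “defined as” can be literal markup today, by the way. XFN does it with a rel attribute on ordinary links (URLs invented):

    <a href="http://example.com/jane" rel="friend">Jane</a>
    <a href="http://example.com/bob" rel="contact">Bob</a>

An exporter just walks the page, keeps the rel="friend" links, and skips the rest.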

If the information is embedded in a photo–wow, information embedded in a photo, how dull–when we upload the photo to a site like Flickr, it could automatically be added to a map, with all the other photos from the same location. It can be pulled up in a search someday, when we ask the web to show us all photos for St. Louis, or for a certain block in St. Louis. Perhaps it can even help us find photos that are licensed Creative Commons so we can steal them.
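The “little bit” for that photo page might be nothing more than the geo microformat (the coordinates here are roughly downtown St. Louis):

    <span class="geo">
      <span class="latitude">38.627</span>,
      <span class="longitude">-90.199</span>
    </span>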

I might write about a product or company, and the little bit of information I add to my post might help others who are thinking of doing business with the company, or buying that product. Sure, search engines can scrape the content and try to glean useful bits based on keywords such as the product or company name, but we’ve all had enough really strange search results to know how far search can go, no matter how brainy the algorithm.
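There’s existing markup for this one, too: the hReview microformat. A minimal sketch, with the product and rating invented:

    <div class="hreview">
      <span class="item"><span class="fn">Acme Widget</span></span>:
      <span class="rating">4</span> out of 5.
      <p class="description">Sturdy, but loud.</p>
    </div>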

Someday I’ll be able to write about movies and add just a little bit of extra information there, too. Or about music. Or cooking recipes (“give me all recipes on the web that use apricot jam and bourbon, but I don’t want chicken”). Or even poetry, though don’t mention poetry around Sir Tim–it makes him peevish.

Mathew is very addicted to FriendFeed, which allows him to pull in all the activities of his friends in various places. I bet if we scratched the surface of this application, we’d find that a lot of the data that makes it tick comes courtesy of the semantic web dorks.

I could go on and on, but I’ve already been away from my slide rule too long. Instead I’ll end with the best for last: because all of these different ways of adding that tiny little bit of useful information to blocks of text or photos or video files or what have you are based on agreed upon specifications, we can use applications to merge this data and use it for something new; something we haven’t thought of yet. See, now that’s when it really gets exciting because rather than coming up with an idea and then taking five years to get enough data to test it, we’ll already have the data, at no extra effort or cost.

Maybe I’ve been cooped up in my cube with my computers and code for too long, but that strikes me as kind of interesting. In a dorky sort of way.