Best of luck finding a suitable employer

Roger Johansson writes:

I believe having more content creators and “authors”, i.e. web designers and web developers, in the HTML Working Group would be good. Unfortunately I think it’s hard to find web professionals who can spare the time unless they get paid to participate. I know I can’t.

When I mentioned something virtually identical to Ian Hickson, he told me:

I am sorry you feel that you need to be compensated for your participation in the standards community, and wish you the best of luck in finding a suitable employer.

Can’t wait to read Ian Hickson telling Roger Johansson, “…best of luck in finding a suitable employer.”

HTML4 is to markup

In an interview at WebScienceMan titled “XHTML Users: Grow up!”, the interviewee, SitePoint’s Tommy Olsson, answers a question about whether he likes XHTML with: “Grow up! 🙂 Seriously, XHTML is long dead, due to a decade of horrible abuse. Not even the bleached bones remain.”

Mr. Olsson believes that we should be using HTML 4, strict HTML 4, because HTML5 is still a bit of whimsy and XHTML is a pile of dead bones. As I wrote in the comments, HTML 4 is to markup what the 8-track is to music.

Separating presentation from semantics

After all these years, we have finally reached the point where we’ve separated page organization from presentation, and now we’re about to make the same mistakes again, but this time with presentation and semantics.

I’ve been following the issues associated with the vocabindex Drupal module, including one where the person submitting the bug stated that the vocabindex use of UL was incorrect. We’re supposed to, MXT writes, use definition lists rather than unordered lists for any list of terms with associated definitions.

At first glance, it does seem as if the vocabindex module is using the unordered list incorrectly. After all, look at any of my category pages (such as the one for the Semantic Web)—what you see is a list of “terms” and their associated definitions. An obvious candidate for definitions lists.

Look more closely, though. In my sidebar menu I list the vocabindex terms as links to web page URIs, but there is no definition attached to any item. The description, if one is given, is, instead, added as a title attribute to the item and displayed only when the item has cursor focus. Yet, it’s the same data. Does this mean, then, I’m somehow not properly displaying my menu items? Should I have a huge sidebar, with the item description given underneath?

More importantly, whether there’s a text description for any given item is purely optional: some have one, some don’t. Yet the items in the list have meaning without any associated description. In fact, each item in the list is really nothing more than a label for a bucket to hold content. I could just as easily use foo, bar, and sillyputty as labels, except that I’m trying to use “meaningful” labels in order to enable you all to better find past content.
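To make the contrast concrete, here’s a rough sketch of the two presentations of the same vocabulary data (the term names, descriptions, and URIs are made up for illustration, not pulled from my site):

  <!-- sidebar menu: an unordered list of term links; the optional
       description rides along as a title attribute, surfacing as a tooltip -->
  <ul>
    <li><a href="/category/semantic-web" title="Posts about RDF and metadata">Semantic Web</a></li>
    <li><a href="/category/graphics">Graphics</a></li>
  </ul>

  <!-- index page: the same data, but because the description is printed
       beneath the term, it "looks" like a definition list -->
  <dl>
    <dt><a href="/category/semantic-web">Semantic Web</a></dt>
    <dd>Posts about RDF and metadata</dd>
  </dl>

Same data, two renderings; only the second tempts us to reach for a definition list.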

In the absolutely ancient W3C page where lists are covered, a list of ingredients is given as an example of an unordered list:

  • 1 cup sugar
  • 1 cup oil
  • 2 cups flour

However, I don’t see that anyone would have a problem with adding parenthetical information to this list in order to further clarify the items:

  • 1 cup sugar (light brown granulated by C & H)
  • 1 cup oil (canola or corn, but not olive)
  • 2 cups flour (white or mixed white and whole wheat)
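Marked up, that clarified list is still nothing more than an unordered list; a quick sketch:

  <ul>
    <li>1 cup sugar (light brown granulated by C &amp; H)</li>
    <li>1 cup oil (canola or corn, but not olive)</li>
    <li>2 cups flour (white or mixed white and whole wheat)</li>
  </ul>

The parenthetical clarifications don’t turn the ingredients into term/definition pairs; each item still stands on its own.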

This is really nothing more than what vocabindex is doing with the vocabulary index terms within both the index pages and my sidebar menu: a listing of items and a (parenthetical) description to clarify what each item is. However, it’s only when the description is displayed as a “tooltip” that we see it as a mere clarifying phrase. When it’s presented in the index pages, the result “looks” like a definition list, and so we want it marked up that way, mixing up semantics and presentation.

A definition list is assumed to have two pieces of information: the term or phrase, and an associated definition. Even when the definition isn’t present, it’s still assumed to be forthcoming at some point; not having it is the exception, not the rule. An unordered list is just that: a list of items. They can be items to buy at the store, to select from in a form, or to click on in a web page. There’s no assumption that any additional information is necessary for an item. If there were, Drupal would make this information mandatory rather than optional, and the application and its developers would discourage using the terms in a tag cloud or any other format where only the term is given, because the term, by itself, would be meaningless.
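By way of contrast, here’s the sort of data that genuinely earns a definition list, where neither half means much without the other (the entries are my own illustration, not anything generated by Drupal):

  <dl>
    <dt>UL</dt>
    <dd>An unordered list: a plain collection of items, with no pairing implied.</dd>
    <dt>DL</dt>
    <dd>A definition list: name-value pairs, each name given its meaning by its value.</dd>
  </dl>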

Yet we look at how the terms are portrayed in a page like the vocabindex page I linked above, and that’s enough for us to say we should use one form of markup over another because it’s more “semantical”. Further exploration online at other sites that attempt to define the differences between unordered lists and definition lists shows the same thing: if we see two pieces of data, we assume a definition list, because that’s what it looks like—not what it is.

The HTML5 document adds another key element to the discussion of definition lists by stating that definition lists are name-value pairs. Can we deduce from this, then, that the name has no meaning without the value? That’s my interpretation: a definition list is the proper semantic markup only when the data is defined within a context of names and associated values, each meaningless without the other. If, however, *HTML5 allows us to list names without values, or values without names, then the HTML5 document is imprecise, and we should just use whatever we want to use; semantics cannot be derived from imprecision.

Currently, in Drupal, vocabulary terms are discrete labels, nothing more. Any description is for clarification, not definition, and isn’t essential to the meaning of the term. Forget how the vocabindex pages “look”, and focus on what the data means. If we can’t do that, then this whole semantic markup thing is a bit of a farce, really.

*Confirmed: it doesn’t

Harmony

Harmony is a very good thing.

For some time now, the ECMAScript working groups have been split into two camps: one supporting ECMAScript 4, the other ECMAScript 3.1. The former was a more radical leap forward for ECMAScript (JavaScript), while the latter favored more incremental progress.

Ajaxian, John Resig, Simon Willison, and a host of others are referencing an email Brendan Eich sent to the lists for both efforts about a new, combined effort dubbed “Harmony”.

Eich writes:

It’s no secret that the JavaScript standards body, Ecma’s Technical Committee 39, has been split for over a year, with some members favoring ES4, a major fourth edition to ECMA-262, and others advocating ES3.1 based on the existing ECMA-262 Edition 3 (ES3) specification. Now, I’m happy to report, the split is over.

The Ecma TC39 meeting in Oslo at the end of July was very productive, and if we keep working together, it will be seen as seminal when we look back in a couple of years. Before this meeting, I worked with John Neumann, TC39 chair, and ES3.1 and ES4 principals, especially Lars Hansen (Adobe), Mark Miller (Google), and Allen Wirfs-Brock (Microsoft), to unify the committee around shared values and a common roadmap. This message is my attempt to announce the main result of the meeting, which I’ve labeled “Harmony”.

Executive Summary

The committee has resolved in favor of these tasks and conclusions:

1. Focus work on ES3.1 with full collaboration of all parties, and target two interoperable implementations by early next year.

2. Collaborate on the next step beyond ES3.1, which will include syntactic extensions but which will be more modest than ES4 in both semantic and syntactic innovation.

3. Some ES4 proposals have been deemed unsound for the Web, and are off the table for good: packages, namespaces and early binding. This conclusion is key to Harmony.

4. Other goals and ideas from ES4 are being rephrased to keep consensus in the committee; these include a notion of classes based on existing ES3 concepts combined with proposed ES3.1 extensions.

The rest of the email then gives the details.

As one can read in comments out and about, not everyone is pleased by this new accord, as they don’t see that the new effort represents enough progress. However, without accord from all the major browser developers, there is no progress: only variations of pretty chaos.

I must admit, being somewhat conservative — or perhaps, after having worked with JavaScript since its first glimmerings over 12 years ago, simply exhausted from dealing with browser differences — that I’m happy we’re going for simpler changes, implemented broadly. This is a good thing.

Zooming with SVG

I wanted to highlight a comment Bruce Rindahl made to an earlier post (with his permission):

http://www.lrcwe-data.com/DeepZoom.svg is a modification of a project I have been working on. I hacked together some code to mimic the animation effect of the Hard Rock site. First some credits.

The original work was funded by the State of Colorado and Division of Water Resources. Earlier work by the Urban Drainage and Flood Control District. Everything was based on work done by (and GPL’d) at www.carto.net.

The animation was done using SMIL attributes in the SVG standard. It requires the Adobe SVG plugin. It will run on say a Windows 2000 machine in IE6 using a 5-year old plugin […]. The imagery is over 60 GB (almost 2 orders of magnitude more than Hard Rock) served up via an Open standard WMS server (Mapserver) via a Python based cache program (TileCache). Vector data is displayed via SVG using an open source database (PostgreSQL/PostGIS). Without the animation it works in Opera, Safari, Firefox3, and IE with the plugin. The animation works in Opera but has a bug in it which makes it jump – I am tracking that down. Safari should handle it soon. The animation can be jumpy at times but part of that is the coordinate system I am using. The Adobe SVG plugin struggles with numbers that high. Everything is controlled via JavaScript.

Bottom line for me – meh. The animation is cute for a while but it is not worth the extra overhead. Keeping all the downloaded photos just so the user always sees an image is not worth the memory required. The default display is just like Google maps which most users are familiar with. I agree with Mr. Ellis that the development tools are not there to match what is available for Flash but then who would build them when Microsoft won’t support it?

Unlike my earlier silly little example, Bruce’s work is impressive, especially when you consider that most of the technology and specifications used in the effect have been out for years, and that it works with Adobe’s now-old SVG plug-in in IE6. His work isn’t based on a single packaged framework, either; it pulls together previous GPL’d work, as well as several open source technologies, into one cohesive whole.
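For anyone who hasn’t seen SMIL used this way, here’s a stripped-down sketch of the kind of declarative zoom Bruce describes; the tile URL and element names are placeholders of my own, not his WMS/TileCache setup:

  <svg xmlns="http://www.w3.org/2000/svg"
       xmlns:xlink="http://www.w3.org/1999/xlink"
       viewBox="0 0 1000 1000">
    <!-- a single map image; in a real application the href would point at a
         WMS or TileCache URL, with tiles swapped in and out via JavaScript -->
    <image xlink:href="tiles/overview.png" x="0" y="0" width="1000" height="1000"/>
    <!-- clicking the rectangle animates the root viewBox over two seconds,
         zooming the view into the upper-left quadrant of the map -->
    <animate attributeName="viewBox" begin="zoomButton.click" dur="2s"
             from="0 0 1000 1000" to="0 0 250 250" fill="freeze"/>
    <rect id="zoomButton" x="20" y="20" width="120" height="50"
          fill="steelblue" fill-opacity="0.8"/>
  </svg>

The same attribute changes can also be driven from script, which is how a full implementation coordinates the zoom with tile loading.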

Bruce also makes an important point: why do the effect at all? It takes up bandwidth and requires extra work, and what does the functionality provide, other than a few useful implementations like Google Maps? If you want to know why something like JPEG 2000 hasn’t been implemented in most browsers, and only brokenly in Safari (because of a broken implementation in Mac OS X), it’s because few people are asking for this functionality. In addition, as Bruce notes, why would people expend the effort when we know the work won’t run in IE?

I see work like Bruce’s, and others like it, as a worthwhile effort, regardless of Microsoft’s unwillingness to meet its earlier commitment to standards. Adobe’s SVG plug-in isn’t the only option, either: Examotion is actively working on an SVG plug-in that should handle what Adobe’s does, and more.

More importantly, the more we use these technologies, the more tools get created, leading to more sophisticated works, leading to better tools, and so on. We have to prime the pump before we can draw water.

As for whether the effect is useful, that’s really in the eye of the beholder. For instance, I don’t find the previously linked Hard Rock Cafe application useful, especially given how limited access to it is. However, I’d like to have the pieces that go into the effect, and Bruce has demonstrated an open source equivalent. (About the only thing missing is JPEG 2000 for advanced image compression.)

By having the pieces in place, we’ll be ready when we do get ideas of how these effects can be combined into something useful, or innovative.


Bruce also supplied a link to a variation of the application that works with Firefox, Opera, and Safari.