Categories
Standards SVG XHTML/HTML

Even the mistakes are fun

Anne van Kesteren:

A new survey reveals that at least Microsoft and IBM think the HTML charter does not cover the canvas element.

I have to wonder, reading the survey results, how much the people who voted have actually used either SVG or the Canvas element. I covered both SVG and the Canvas element in the book, but I focused more on SVG. Comparing the two, SVG and Canvas, is like comparing the old FONT element with CSS.

The Canvas element requires scripting. The SVG element doesn’t, even for animation, if you use the animate elements. In addition, mistakes in SVG can be fun, as I found when I missed a parameter value in an animation. A couple of lines of markup. No script. Both Opera and Safari do an excellent job with the animation elements. I’m expecting Firefox to join this group in the next year.
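For illustration, declarative SVG animation really can be a couple of lines of markup with no script; this is a hypothetical sketch, not the animation from the book:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <circle cx="100" cy="100" r="20" fill="crimson">
    <!-- animate the radius from 20 to 80 and back, forever; no script needed -->
    <animate attributeName="r" values="20;80;20" dur="3s"
             repeatCount="indefinite"/>
  </circle>
</svg>
```

Change one parameter value, say the values list, and you get an entirely different (sometimes entertaining) animation.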

If you use scripting, you can access each element in the SVG document as a separate element. You can’t do that with Canvas.
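The difference is easy to show. In this sketch (the element id and colors are made up), the SVG circle remains a live DOM node that script can fetch and restyle later; a canvas drawing leaves no such object behind:

```html
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <circle id="dot" cx="50" cy="50" r="20" fill="steelblue"/>
</svg>
<script>
  // Each SVG shape stays in the DOM, so it can be fetched and changed later.
  var dot = document.getElementById('dot');
  dot.setAttribute('fill', 'crimson');

  // A canvas, by contrast, retains no 'circle' object after drawing --
  // only pixels. To change the shape you must clear and redraw.
</script>
```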

I still don’t think the Canvas element should be part of a new HTML 5, whatever the grand plans. However, since every browser but IE supports the Canvas element, it would be foolish to drop it. A better option would be to treat the Canvas element as a bitmapped version of SVG and create a separate group to ensure it grows in a standard manner.

I did like what David Dailey wrote in the survey results:

I have considerable ambivalence about <canvas> as I have noted previously. If we were designing HTML 5 from the ground up, SVG and canvas ought to share syntax and ought not to duplicate so much functionality. <canvas> brings a few needed things with it, though it seems rather a bit of poor planning on the part of the advocates of <canvas> that has gotten us to this point. Those historically frustrated with W3C chose to ignore SVG and now seem to want W3C to ignore SVG in favor of a lesser technology. At the same time, <canvas> does enable client-side image analysis by giving the developer access to pixel values, and that alone allows for some tolerance of what otherwise seems to be a curious decoupling of reason from politics. Does it re-invent the wheel? — only about 95% of it is redundant with 20% of SVG.
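Dailey’s point about pixel access is a real distinction: the canvas 2D context exposes raw RGBA values through getImageData, which SVG does not. A minimal sketch:

```html
<canvas id="c" width="2" height="2"></canvas>
<script>
  // getImageData exposes raw pixel values -- something SVG does not
  // offer -- which is what enables client-side image analysis.
  var ctx = document.getElementById('c').getContext('2d');
  ctx.fillStyle = 'rgb(255, 0, 0)';
  ctx.fillRect(0, 0, 2, 2);
  var data = ctx.getImageData(0, 0, 1, 1).data;
  // data[0..3] hold the R, G, B, A values for the sampled pixel
</script>
```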

As for all the discussion about a semantic API: years ago I, and others, fought for a model and associated XML vocabulary, RDF, that we said would stand the test of time and hold up under use. The road’s been rough, and few people are going to defend reification, but RDF fuels the only truly open social graph in existence. That was five years ago, about the time when everyone believed that all we’d need for semantics was RSS. Including Microsoft.

Categories
HTML5 XHTML/HTML

Marathon 2.0

I must admit to being confused about Molly Holzschlag’s recent posts, including the latest. Today she writes, in clarification of her post where she calls for a moratorium on new standards work:

Perhaps there is a better solution than pausing standards development. If so, I’d like to know what you think it might be. One thing is absolutely key and that is there is no way we are going to empower each other and create the Web in the great vision it was intended to be if we do not address the critical issue of education. And stability. And these things take time. It requires far better orchestration than I personally have been able to figure out, and while the W3C, WHAT WG, WaSP and other groups have made numerous attempts to address some of these concerns, we have failed. We haven’t done a good job so far to create learning tools and truly assist the working web designer and developer become informed and better at what he or she can do. We haven’t done a good job sitting down at the table together and coming up with baseline strategies for user agents and tools.

I don’t keep up with the daily effort of the WHAT WG group, because I’m not really a designer by trade. I do keep up with specifications once they’re released, and am acutely aware of the necessity of valid markup, and not using worst practices (I promise to stop using STRIKE, for instance). I’m also aware of accessibility issues, though I find it frustrating how little we can do since many screen readers just aren’t capable of dealing with dynamic web pages.

I do try to keep up with the JavaScript effort. Mozilla is usually very good about providing readable documentation of new advances, and though it is typically ahead of the game, at least I’m aware of what’s coming down the road. The same with what’s happening with CSS, PHP, RDF, and other technologies and/or specifications I use in my development.

If there are perceived barriers to acquiring the knowledge necessary to work with the newer specifications, it can be because people heavily involved with some of these efforts come across as arrogant, impatient, and even intolerant: the ‘elitists’ Molly refers to. Over time, though, such ‘elitism’ usually gets worn away. I used to think the people associated with RDF were elitist, but I’ve watched in the last few years as folks interested in RDF/OWL/semantic web fall over their own feet rushing to increase understanding of, and access to, the concepts, specifications, and implementations. Express even a mild interest in RDF and *whoosh*, like the debris left by a flood, you’ll be inundated with helpful suggestions and encouragement.

Issues of arrogance and elitism aside, the concept of halting effort on specifications while waiting for the rest of the world to catch up just doesn’t make sense. Yes, it can be overwhelming at times–CSS, HTML, XHTML, XML, RDF, DOM, ECMAScript, PHP, Ruby, etc., etc. So much to absorb, so little time. But that’s not going to change by halting work on improving and extending specifications.

We do need to have more consistency among the user agents, such as the browsers. But we have browsers now that don’t properly implement specifications that have been around for years. In fact, it is because of this that we have this alphabet soup, as we try to remember which browser handles which piece of which HTML specification correctly. Don’t even get me started on how user agents handle JavaScript. Or CSS.

I don’t know much about the intimate details of the HTML5 process, other than the whole point of the effort was to bring about a common point on which we could all intersect: authors and developers in what we use, user agents in how they implement the specifications. Once this place of mutual agreement is reached, we can continue to move forward, each at our own pace. It doesn’t make sense, though, for all to stop moving forward because some developer in Evansville, Illinois, or Budapest, Hungary, is still holding on to their tables.

Consider a marathon. In marathons, all the participants have to agree on the rules, and have to make sure they’re following the same course. But once the rules are defined and the course is laid out, then it’s up to the individual participants to do what’s necessary to complete the course. Some people put in more time and training and they complete the marathon sooner than others who can’t put as much time in, or who perhaps don’t have the same level of physical conditioning. Most of the people that participate, though, don’t care that they aren’t first or second or even in the first hundred. Most people have their own personal goals, and many are happy just to finish.

Think, then, how the participants would react if those putting on, say, the Boston Marathon were to tell them that those in the front needed to slow down, or stop, so that those in the back could catch up.

The web is like a marathon. The specifications define the rules, and the implementations define the course. It is up to the individuals to determine how fast they want to run the course.

Molly says that because a developer in Evansville, Illinois, or Budapest, Hungary, is still using HTML tables for layout, the web is ‘broken’. I think what she’s really saying, though, is that the web works too well. There is a bewildering wealth of technology we can pick and choose from, and it can be both intimidating and exhausting trying to stay aware of all of it, much less stay proficient in any of it. It also seems like we’re surrounded by people who know it all.

They don’t, though. No one knows it all. The same as no one runner wins every marathon. None of us can know it all, and none of us can afford to be intimidated by those who seem to know it best.

No matter what we do with web specifications and new technologies, there will always be those who push to be first: the expert, the most knowledgeable, the ‘leader’ if you will. Then there are the rest of us, doing our best. This state of affairs is not broken, it’s just the way it is. It’s OK, too, because we don’t need to finish the race at the same time. What we web developers and designers need is what marathon runners need: a set of rules by which we all participate, and a consistent course on which to run.

And here I got all this way without once mentioning Microsoft and IE.

Categories
XHTML/HTML

Comments

Recovered from the Wayback Machine.

I realize that perhaps my choice of serving up XHTML instead of HTML through WordPress seems audacious, but if you want to point out potential problems, can you send me an email rather than putting something in the comments to ‘demonstrate’ the problem? Believe it or not, I am open to suggestions and am not averse to receiving advice or help. I also give credit to the person when I receive either.

update

I have to ask myself if I want to spend the hours, no, make that days, necessary to serve this site as XHTML. One has to be as much a detective as a tech in order to hunt down the problems and kill them one by one. Perhaps this is why the W3C decided to abandon hope on XHTML and focus on HTML5.
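For context, the usual approach to serving XHTML at the time was content negotiation on the Accept header. This PHP sketch is illustrative, not necessarily what this site or WordPress actually does:

```php
<?php
// Serve application/xhtml+xml only to agents that claim to accept it;
// fall back to text/html for everyone else (notably IE).
if (isset($_SERVER['HTTP_ACCEPT']) &&
    stristr($_SERVER['HTTP_ACCEPT'], 'application/xhtml+xml')) {
    header('Content-Type: application/xhtml+xml; charset=utf-8');
} else {
    header('Content-Type: text/html; charset=utf-8');
}
?>
```

The catch is that under the XML content type, a single well-formedness error, in a post or in a comment, replaces the page with a parse error. Hence the detective work.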

I do know that the average person doesn’t care, and frankly, I’m not sure if the average tech is exactly overjoyed, either.

I can either turn off XHTML, which is tempting. Or I can turn off comments. For now, all comments are moderated until I decide. And until I finish with the book today and can focus on the site.

Categories
XHTML/HTML

Atom XHTML patch

Created my first formal Drupal patch, providing an XHTML content type for the Atom module. Once I walked through the module and the procedure for applying a patch, it really wasn’t all that difficult to create the patch and submit it for review. And the feed validates, though with a warning because of the object element in the last post.

I was surprised at how easy it was to modify the administration form for the module: a couple of lines of code, and that’s it.

Categories
Specs XHTML/HTML

Ambiguous Specifications do not make Good Technology

Recovered from the Wayback Machine.

There is a belief that if it weren’t for the fact that the earliest versions of HTML were unstructured–full of proprietary idiosyncrasies and ill-formed markup indulged by too-loose browsers–the web wouldn’t have grown as fast as it did. Somehow, we’ve equated growth with bad and imprecise specifications rather than the more logical assumption that the growth was due to interest in an exciting new medium.

As such, we’ve carried forward into this new era in web development an almost mythical belief in bad specifications. If we wish to have growth, we think to ourselves, we mustn’t hinder the creative spirit of the users by providing overly rigorous specifications. Because of this belief, we’re still battling ill-formed, inaccessible web pages created by a legion of web page designers who picked up some pretty bad habits: namely the use of deprecated attributes and proprietary elements, as well as the use of HTML tables for everything. Well, everything that isn’t covered by the use of non-standard and proprietary JavaScript, use of which results in the annoying messages saying that one needs a specific browser or, worse, a specific operating system in order to see this Wonderful Thing. Go away until you’re properly equipped.

What we’re finding now with web page designers today, whether they’re amateur or professional, is that it’s just as easy to learn how to do things the right way as the wrong. What’s important is to provide good, clear documentation, as well as good, clean examples. Contrary to some expectations, adherence to standards and precise specifications has not killed the spirit of creativity.

In the end, rather than aid the growth of the web, bad specifications slowed it down as a new generation of web pages had to be created out of the ashes generated by burning the old.

Learning how to do things right has such rewards, too. It’s knowing that your page looks good in all operating systems and most browsers; that people can easily navigate your site; that there are a hundred new tools and toys you can use now because you’re using precise and structured markup. Being able to validate a page isn’t a matter of dumping a fairly useless sticker into a sidebar; it means being able to drop in a Google map, or add in-place editing, or automatically pull your calendar out of the page, or any number of wonderfully useful and fun innovations.
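Pulling a calendar out of a page depends on exactly this kind of structured markup. A hypothetical hCalendar-style event (the class names follow the microformat draft; the URL, date, and details are made up):

```html
<!-- structured markup that tools can extract as a calendar event -->
<div class="vevent">
  <a class="url summary" href="http://example.com/svg-talk">SVG talk</a>
  <abbr class="dtstart" title="2007-10-20T19:00">Oct 20, 7pm</abbr>
  <span class="location">St. Louis</span>
</div>
```

None of this works reliably on tag-soup pages; it depends on clean, valid markup.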

We still continue with this belief, though, that to standardize or embed precision into a specification is to stifle the creative juices of the consumer of the specification: whether they be developer, designer, or end-user. Why? What can possibly lead anyone to believe that you can create good technology out of a bad specification?

Some would point to RDF and say that this is a case of a very precise specification that has not led to quick adoption. However, it isn’t surprising that there aren’t billions of RDF/XML documents scattered here and there, and the reason has nothing to do with the precision of the specification. Some folks didn’t, and still don’t, like the look of the externalized syntax of RDF; others felt that semantics should arise from existing elements; and still others just don’t see the need for it, and won’t until you give them an application that demonstrates it directly for them.

Oh, there are some pieces of the RDF model we might do without, but precision is not one of them. I look at the precision of the RDF specification with nothing but relief. I know that the work I do now with RDF follows a model that’s been carefully defined, intimately documented, and rigorously tested. I can trust the model, and know that the documents I create with RDF today will parse just as successfully as documents I’ll create five years from now; more importantly, I know without a doubt that they will mix with your data modeled in RDF.
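A small sketch of what that precision buys: the RDF/XML below is fully specified, so every conformant parser extracts exactly the same statement from it (the URI and title here are illustrative):

```xml
<?xml version="1.0"?>
<!-- one resource, one property: every conformant parser yields the
     same triple (subject, dc:title, literal) from this document -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.com/post/1">
    <dc:title>Ambiguous Specifications do not make Good Technology</dc:title>
  </rdf:Description>
</rdf:RDF>
```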

That’s why I look with some confusion at the backlash against efforts to clarify the RSS 2.0 specification. There is no doubt, none whatsoever, that the RSS 2.0 specification, as currently written, is ambiguous; from what we’re hearing now, in comments and on email lists, it is being kept deliberately so. I don’t understand this. It would be no different from asking Microsoft not to follow standardized use of CSS in the new IE 7.x. Why on earth would anyone want this?

I am just a simple woman who works in technology. Perhaps one of you can explain it to me in such a way that I can understand.

I wrote on the ambiguity in RSS 2.0 with regard to enclosures here, and actually had to modify Molly Holzschlag’s weblog software (WordPress) because her posts with enclosures would cause tools such as Bloglines to break. These are two very popular tools; hence, the ambiguity in the RSS 2.0 specification does cause problems. This is a proven fact that no amount of marketing and cheerleading can obfuscate.
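The enclosure ambiguity is easy to see in markup. The RSS 2.0 specification shows a single enclosure per item, as below, but never says whether more than one is permitted, so aggregators handle the multiple-enclosure case differently (the URL and length here are made up):

```xml
<!-- an RSS 2.0 item with one enclosure; url, length, and type are
     required attributes, but the spec is silent on whether a second
     enclosure element may follow -->
<item>
  <title>Podcast episode</title>
  <enclosure url="http://example.com/ep1.mp3"
             length="12216320" type="audio/mpeg"/>
</item>
```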

Throw as much money as you want at it; write the most glowing reviews; get prestigious names to exalt its beauty and power; seek to crush a non-existent enemy if you must: it is still not ‘good technology’. It may have damn good marketing, lots of dough invested in it, and even widespread use, but it is not good technology.

I am puzzled as to how anyone, particularly those who work in technology, could say otherwise. I await enlightenment.