Categories
RDF Technology

Accessibility, Microformats, and RDF as the Bezoar stone

Recovered from the Wayback Machine.

Really nice writeup on the conflict between Microformats’ use of abbr with hCalendar and accessibility:

The datetime-design-pattern is a way to show a readable date (such as “March 12, 2007 at 5 PM, Central Standard Time”) to humans and a machine-readable date (such as the ISO 8601 formatted “20070312T1700-06”) to the Microformat parsers. When crossed with the abbr-design-pattern, the result is this:

<abbr class="dtstart" title="20070312T1700-06">
March 12, 2007 at 5 PM, Central Standard Time
</abbr>

As you may have guessed from the previous examples, screen readers expanding the abbreviation will try to read the title attribute. JAWS helpfully attempts to render numeric strings as human-readable numbers, so “1234” is spoken as “one-thousand two-hundred thirty-four” instead of “one two three four.” Given a title value of “20070312T1700-06”, JAWS and Window-Eyes both try to read an ISO date string never intended to assault human ears:

Twenty million seventy-thousand three-hundred twelve tee seventeen-hundred dash zero six. (JAWS 8 on IE7: MP3, Ogg)

I particularly liked this article because it details exactly how the markup in question is rendered in screen readers. You’re not left to guess, based on some vague “doesn’t work with screen readers.” It really gives weight to the concerns of the authors, Bruce Lawson and James Craig.

I can’t figure out, though, why RDF always gets slammed whenever discussions of this nature arise:

Some have proposed using custom attribute namespaces for Microformat data, but the Microformats group is strongly opposed to this, and for a simple and valid reason. Microformats are intended to be “simple conventions for embedding semantic markup in human-readable documents.” Opening the floodgates to custom DTDs and namespaces would quickly raise the complexity level of Microformats to that of RDF, greatly reducing its adoption and therefore its relevance.

Here I was, tripping along on a well-presented argument defining a tricky problem when, bammo: it could have been worse, it could have been RDF.

It’s as if RDF has become the bezoar stone of metadata–people invoke RDF to draw out all the evil.

“Ohmigod, an asteroid is going to hit the earth and we’re all going to die!”

“It could have been worse. It could have been RDF.”

“You’re right. Whew! I was really worried for a moment.”

First update

We’re going to be coming at you with …AAAAARRRRGGGGGHHHH!… custom DTDs! The horror!!!

Damn near stopped my heart with that one. You want to be more careful, Tom.

Second Update

Here is the first entry of the microformats discussion thread on this item. It gets quite interesting as the thread progresses.

I’m not making any editorial comment on the thread. Nope, not a word. Not a single word. I’m just going to sit back and play with my triples.

Categories
JavaScript

Ajax security: FUD or fact?

from_future_import has a post stating that Fortify’s recent Ajax alarm is more FUD than fact. Money quote in this one:

And MOST importantly the exploit is only applicable to JSON that also happens to be valid JavaScript code.
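The point the quote is making can be sketched in a few lines. This is an illustrative example, not code from the Fortify paper: the data, and the while(1); guard shown as a countermeasure, are made up, and it assumes a runtime with JSON.parse. What matters is that a JSON array is, by itself, a valid JavaScript expression, which is what lets a hostile page pull someone else’s JSON feed in via a script tag and execute it.

```javascript
// A JSON array response is also a valid JavaScript expression.
// That dual nature is what a cross-site <script src="..."> tag exploits:
// the attacker's page can execute the victim's JSON feed as script.
const jsonResponse = '[{"user": "alice", "email": "alice@example.com"}]';

// Evaluating it as script "works" -- this is the vulnerability.
const asScript = eval(jsonResponse);

// A common countermeasure: prefix the response so it is no longer
// runnable as script, and strip the prefix before parsing it as data.
const guarded = 'while(1);' + jsonResponse;
const asData = JSON.parse(guarded.replace(/^while\(1\);/, ''));
```

A bare object literal at the top level isn’t exploitable the same way, since a script engine treats the opening brace as a block and chokes; hence the emphasis that only JSON that also happens to be valid JavaScript code is at risk.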

Was it FUD or fact? A bit of both. The benefit of the paper is that, unlike other discussions of these issues, it was written in plain English, diagrammed, and not meant to be understood only by insiders. Perhaps if more Ajax developers adopted the same approach to documenting issues, concerns, and examples, papers such as Fortify’s wouldn’t get the audience they do.

Or we could all use XML, only (she says as she ducks and runs…)

While I was in the neighborhood, I picked up a couple of other links in comments:

Practical CSRF and JSON Security
An ArsTechnica post on the original article.

(Thanks to Michael Bernstein for link)

Categories
Web

Find your exit points

The first time I stayed in a hotel was when I was 12, when my brother and I met my father for a holiday in Hawaii. We’d stayed in motels before–this was the era of auto vacations–but never in a multi-story hotel, where you accessed your room using an elevator.

When we got to our room, my Dad took us out into the hallway and pointed out the Exit sign. He told us that if a fire happened, we should not use the elevator. Instead, we should look for the Exit signs and follow them out of the building.

Ever since that trip, I briefly pause at my door and locate the nearest exit before entering my hotel room.

That trip was also the first time I flew on a plane. It was wonderful–scary and exciting. When the stewardess talked about what to do in case of a crash landing, I paid attention. To this day, I still pay attention–not because I don’t know what to do (butt, meet lips), but because it’s rude to ignore this poor soul who has to go through the motions. Shades of fatalism aside, I do check for the closest exit when I find my seat. Old habits are hard to break.

My check for the exit bleeds over into my use of web services. No matter how clever a service, I never use it if it doesn’t have an exit strategy.

Recently, I took a closer look at the possibility of using Feedburner to serve up my feed. Now that I’ve moved my photos offsite to Amazon’s S3 service, the feeds are my biggest use of bandwidth. With my new austerity program of minimizing resource use, Feedburner is attractive: let it serve up the feeds, with its much more efficient use of bandwidth.

My first thought, though, was: what’s the exit strategy? After all, it’s easy for me to redirect my feeds (all but the RSS 1.0) to Feedburner: I can adjust my .htaccess file to redirect traffic for all requests that don’t come from the Feedburner web bot. But what happens if I decide to bail on Feedburner?
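The redirect I describe can be sketched with mod_rewrite. This is only a sketch, assuming Apache with mod_rewrite enabled; the feed path, the FeedBurner URL, and the user-agent match are illustrative, not copied from my actual configuration:

```apache
# Send feed requests to FeedBurner, but let FeedBurner's own bot
# through to the real feed (otherwise the redirect would loop).
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
RewriteRule ^feed/?$ http://feeds.feedburner.com/burningbird [R=302,L]
```

Part of the appeal is that the exit is just as simple: delete these lines and requests go back to hitting the feed directly.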

This question was asked of the Feedburner staff last year, and the organization responded with an exit plan. It’s a month-long process during which you can redirect from Feedburner back to whatever feed URI you want. At the end of that time, all aggregators should have the updated feed URI–all without people having to manually edit their feed subscriptions.

As such, I’m trying the service out to see how it goes. I know that if I decide I don’t like it, I can bail. If the worst-case scenario happens and Feedburner goes belly up, people know where to find my weblog and will have to manually edit their feeds. That’s also an exit, albeit more like jumping out a window than walking down the stairs.

When I used Flickr, the API was what sold me on the service more than anything. When I decided to not use Flickr, the first thing I did was use an existing application to export a dump of all the original images, to ensure I had a copy of each. If I wanted to, I could also export the metadata and comments. I then ran an application to make an image capture of all the photos I had linked in my web pages, saving the photos locally still using the image names that Flickr generated.

I created a program that converted all Flickr URIs, as well as other photo URIs, to one local URI: http://burningbird.net/photos/. This is redirected to Amazon S3 using .htaccess. If I decide to stop using Amazon, the exit strategy is very simple: run an API call to pull the images down into one location; stop redirecting to that service and either host the images locally or redirect to another storage service.

I use Bloglines, but I can easily export my subscriptions as OPML. Though it lacks much as a markup vocabulary, OPML is becoming ubiquitous as a way of managing feed subscriptions. I can then use this file to import subscriptions into Newsgator, or even a desktop hosted tool, like NetNewsWire.
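For illustration, an OPML subscription list is just a small XML file along these lines (the entry here is made up, and real exports list one outline per feed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.1">
  <head>
    <title>Exported subscriptions</title>
  </head>
  <body>
    <outline type="rss" text="Burningbird"
             xmlUrl="http://burningbird.net/feed/" />
  </body>
</opml>
```

Because the format is this simple, any aggregator that can read OPML can take over the whole subscription list, which is exactly what makes it a workable exit.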

I won’t use a hosted web service like Typepad or weblogs.com. It’s too easy for them to decide that you’re ‘violating’ terms of service, and next thing you know, all your weblog entries are gone. I saw this with wordpress.com in the recent events that caused so much discussion: in fact, I would strongly recommend against using wordpress.com because of this–the service is too easily influenced by public opinion.

I don’t use either my Yahoo or Gmail mail accounts. Regardless of whether I can get a copy of my email locally, if I decide to not use either account I have no way of ‘redirecting’ email addresses from either of these to the email address I want to use. (Or if there is a way, I’m not aware of it.) Getting a copy of my data is not an exit strategy–it’s an export strategy. An exit strategy is one where you can blow off the service and not suffer long-term consequences. A ‘bad’ email address is definitely a long-term consequence*.

Instead, I have a domain, burningbird.net, which I use for everything. I will always maintain this domain. My email address, listed in the sidebar, will always be good.

There was a lot of discussion about Yahoo Pipes recently. Pipes is an interesting innovation and an excellent use of the Canvas object–my hat’s off to the creators for their UI design. However, the service has one major drawback: it’s a hosted solution. If you want to ‘export’ your Pipe, you can’t. There’s no way to generate, say, a PHP application from the Pipe–one that creates the web service requests for you and can be run locally. No matter how good and interesting the service, there’s currently no exit strategy.

Anytime you find yourself saying, or even thinking, how ‘dependent’ you are on a service, you should immediately look for the exit strategy. If there isn’t one, decrease your dependency. The web is an ephemeral beast; the path of least resistance is 404, not 200. All pages will end someday. The same can be said for services.

Where are you vulnerable? What’s your exit strategy?

*An option for email is to use a local email address and forward all email to Yahoo or Gmail.

Categories
JavaScript

Ajax vulnerability

Ajax developers should check out a report on Ajax vulnerabilities in several Ajax libraries, and download the extensive advisory. The advisory details the vulnerabilities and how to protect against them.

It’s always a bit risky to put out such details, but as a developer I really appreciate them, because they allow me to better understand how to protect against security risks. Much of the discussion in this advisory isn’t necessarily new, but it does cover newer issues and vulnerabilities in popular libraries, as well as overall concerns.

Money quote:

An application can be mashup-friendly or it can be secure, but it cannot be both.

Categories
JavaScript

Baseline library

I’ve only downloaded it and started playing, but I like the idea of a JavaScript library based purely on implementing standards. Small and lightweight, Dean Edwards’ base2.DOM provides a good baseline for development without worrying about interesting proprietary extensions and recalcitrant browsers.

Not that I’m naming names.

Edwards’ library doesn’t provide support for the older 5.x versions of IE. Those are browsers I won’t support anymore, either. I realize there are people using version 9 of the Mac OS, or other equipment still loaded with the 5.x versions, and that not supporting old browsers limits their access to applications. However, as long as any JavaScript-enabled application has a non-script alternative that provides the same functionality, I’d rather just turn off script effects for these browsers instead of adding enormous amounts of code to deal with their idiosyncrasies. Thank goodness for the concept of progressive enhancement…and for nice, small-footprint JS libraries.

As for Edwards’ self-deprecating reference to base2.DOM not being a documented library: JavaScript libraries don’t have to be documented when they’re small, use meaningful naming standards, and are easy to read. Note to Dojo: this doesn’t mean you.