Categories
RDF

Update: Yahoo search

I had assumed that Yahoo Search was using the RDF/XML embedded with the CC license information to build its search results; Mike Linksvayer, though, was kind enough to clarify in comments that the company is using only the CC license links to capture this information.

This is disappointing, as I feel there is more about the CC-licensed objects that Yahoo could provide and doesn’t, because it’s only after the links. That’s about the same as running a mine for rubies and tossing aside the diamonds you find.

Mike also mentioned the use of RDF-A to bypass problems with embedded RDF/XML. Trying to define yet another new syntax when there’s an option already available doesn’t make sense. The RDF/XML Syntax document stresses the use of <LINK> for linking to a separate RDF/XML document containing whatever metadata is defined for the resource. This is a good approach, and I’m not sure why folks are resistant to it. It’s not as if the extra documents will take up a lot of space; for dynamic systems, such as many of the ones we’re using today for weblogging, commerce, and so on, the document can be generated on demand.

A scenario for use with CC could be that when the CC license is generated, the person is told to create a file, copy in the generated RDF/XML, and then add a LINK to that file in the header of the page. If they also want to add an icon and a link to the license in human-readable format, they copy this link and put it into the page.
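For instance, the header reference could look something like the following (the file name and URL are placeholders, not anything CC actually generates; the rel="meta" and RDF/XML content type follow the usual convention for linked metadata):

    <!-- in the HEAD of the page: point to the separate RDF/XML file -->
    <link rel="meta" type="application/rdf+xml"
          title="Creative Commons metadata"
          href="http://weblog.example.com/cc-metadata.rdf" />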

Is this that much more complicated for people? Yes and no.

No, in that people who host their own sites could probably do this without much problem, especially if tools start providing ways of editing pages on the site. However, for hosted sites, this is a problem, and will continue to be a limitation of these types of sites. Now, a smart hosted site will be one that eventually gets that it needs to provide some mechanism to allow for this type of activity. But until then, yes, this is a limitation.

But CC could solve this for the hosted sites by hosting the license files themselves and giving the person the link to the file to put into their document. Even with a weblogging tool, you could do this just by embedding a tag that supplies the individual file name as the name of the metadata file in the header.

Eventually, we need ways of merging data for many uses into these pages. One way would be to provide the RDF/XML document URI to these tools, and the tools would then read in the existing RDF/XML and add the additional statements. Another would be for tools to provide a way of reading in a block of RDF/XML, pull out the individual statements, and then merge into those that already exist.

There’s code everywhere to do this type of data merging, and best of all: it’s RDF/XML, which means you don’t have to worry about namespace collisions.
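As a rough sketch of what I mean by merging, here’s the idea in plain JavaScript, with the statements already reduced to subject/predicate/object triples (a real tool would get these from an RDF/XML parser rather than by hand):

    // Each RDF statement reduced to a simple triple.
    function tripleKey(t) {
        return t.subject + "|" + t.predicate + "|" + t.object;
    }

    // Merge new statements into an existing set, skipping exact duplicates.
    function mergeStatements(existing, additions) {
        var seen = {}, merged = [], i;
        for (i = 0; i < existing.length; i++) {
            seen[tripleKey(existing[i])] = true;
            merged.push(existing[i]);
        }
        for (i = 0; i < additions.length; i++) {
            if (!seen[tripleKey(additions[i])]) {
                seen[tripleKey(additions[i])] = true;
                merged.push(additions[i]);
            }
        }
        return merged;
    }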

All we would need, then, are nice search bots that grab this and pull all this info into a nicely consumable spot, with an API that returns individual data query results, or RDF/XML.

Yahoo! Yahoo! *knock knock knock* Opportunity’s knocking. Don’t blow it.

Categories
Technology Weblogging

Update

Just a quick FYI on how this is going:

I need to integrate fulltext into the application; this allows people to view a single page in a multi-page posting.

I’m still trying to get the RDF metadata component finished, using RAP (RDF API for PHP). I’m having some troubles with data updates.

Still hunting down SQL statements that have been embedded in the process files, and isolating them in the backend.

A few other odds and ends. I had thought about not worrying about multi-blog support, but I think I will add this in after all. All in all, I think it would be easier to add it in from the beginning than to try and incorporate it after the tool’s been used.

Lots of work. Most of it fun. I really like the metadata thing, and consider the discussion about datablogging timely, too. It’s not going to be that polished, though, because the metadata functionality will be an add-on, whereby people provide a vocabulary and the functionality enables it for each post. But I agree with Danny: this is the perfect use for RDF/XML.

Categories
Technology

The importance of degrading gracefully

When I first went to work at the dot-com, Skyfish, too many years ago, I was faced with an application that had a partial interface, little back-end development, and that had cost the investors 1.5 million dollars, I believe (it might have been 2 million; it was hard to keep track in those days). Where did the money go was the question, and the answer was: it went to building lots of graphics and a really fancy Dynamic HTML (or DHTML, or HTML and script) page navigation.

Well, the first thing we had to do, after firing the design company, was pull the DHTML out and revert to plain links for the menu. Why? Because DHTML menus don’t degrade gracefully.

I first heard about DHTML from Scott Isaacs at an invitation-only author introduction to the technology held at Microsoft’s campus. I remember when he–or actually, I think it was the program manager who did the demo–clicked on a name in a web page, and the space beneath opened up and more content was shown.

I was floored. I was astounded. I was in love. Up to that point, the only dynamic component of a web page was BLINK or an animated GIF, and neither of these was particularly helpful from a professional standpoint.

In the months that followed, as the technology of CSS-P, as it was known then, was released, I spent an amazing amount of time working with it; I was sure it was going to revolutionize everything we knew about web page design. In fact, it was at that time that I became heavily involved with ASP, and between the two–DHTML for the front and ASP for the back–I felt that this was it: we could close the book on innovation, tell the other contenders they could go home now.

Well, it wasn’t long before cracks in this little nirvana started to develop. Whatever cooperation existed between Microsoft and Netscape at the rollout of CSS-P died a rather painful death, and we started having to deal with a thing called “cross-browser compatibility”–making stuff work in multiple browsers.

Microsoft really drove the browser market, too: too fast and too far, sucking people in right and left, away from the Netscape browser, Navigator, to IE. The frenzied pace splintered the browser market and left us with a legacy of non-standard, proprietary extensions that haunted us for years. And then when Microsoft had us, they dumped us.

Compatibility across browsers wasn’t the only problem; we also had to worry about making pages work across browser versions. I remember about two years ago, when someone using Navigator 4.x asked me to change something in my weblog and I said, enough is enough, I’m no longer supporting a browser that was released six years ago. Now, I feel that way about IE.

Compatibility issues aside, other problems started to pop up in regards to DHTML. Screen readers for the blind disabled JavaScript, and still do as far as I know (I haven’t tried a screen reader lately). In addition, security problems, as well as pop-up ads, have forced many people to turn off JavaScript–and keep it off.

(Search engines also have problems with DHTML based linking systems.)

The end result of all these issues–compatibility, accessibility, and security–is a fundamental rule of web page design: whatever technology you use to build a web page has to degrade, gracefully.

What does degrade, gracefully, mean? It means that a technology such as JavaScript or DHTML cannot be used for critical functionality; not unless there’s an easy-to-access alternative.

For instance, the main use I make of JavaScript and DHTML in my weblog is the live preview and spell check for comments. Now, neither of these is critical for people wanting to leave comments, and this means my site meets one requirement of the fundamental rule: the page can degrade. However, I made a design error when I added the live preview text area and Check Spelling button, in that I didn’t degrade gracefully: the Check Spelling button still shows, as does the live preview area. If JS is not enabled, these should not show. (It’s now an item on my to-do list.)
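The fix is simple enough in principle; here’s a minimal sketch of the approach (the element IDs are hypothetical, not my actual template): ship the markup with the extras hidden, and let the script reveal them, so a browser without JavaScript never shows controls it can’t honor.

    <!-- spell check button and live preview start out hidden -->
    <input type="button" id="spellcheck" value="Check Spelling" style="display: none" />
    <div id="livepreview" style="display: none"></div>

    <script type="text/javascript">
    // This only runs when JavaScript is enabled, so only JS-capable
    // browsers ever see the button and the preview area.
    document.getElementById("spellcheck").style.display = "inline";
    document.getElementById("livepreview").style.display = "block";
    </script>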

Whatever holds for DHTML also holds for Ajax. Some of the applications that have been identified as Ajax-enabled are flickr and Google’s suite of project apps. To test how well both degrade, I turned off my JavaScript to see how they do.

Flickr was a delight, and an example of a web page that doesn’t just degrade gracefully: if it were a dancer, it would be the prima ballerina. As an example, there is an option on each photo page for the photo owner to add tags. Clicking the option instantly opens up a set of text boxes to add new tags. This uses DHTML, and it’s very handy.

However, when you turn JavaScript, and hence DHTML, off, this option isn’t there, but you can easily edit tags and other information by clicking the edit link below. Flickr used DHTML to enhance the user experience, but never once built a dependence on the technology to drive the user experience. More, if you access the page without JavaScript, you’ll never know that you’re ‘missing out’ by doing so. Lovely use of technology.

Google’s gmail, on the other hand, did degrade, but did not do so gracefully. If you turn off JavaScript and access the gmail page, you’ll get a plain, rather ugly page stating that the primary gmail page requires JavaScript: either turn it on, get a JS-enabled browser, or go to the plain HTML version.

Even when you’re in the plain HTML version, a prominent line at the top keeps stating how much better gmail is with a JavaScript-enabled browser. In short, Google’s gmail degrades, by providing a plain HTML alternative, but it doesn’t do so gracefully; not unless you call rubbing the customer’s nose in their lack of JS capability “graceful”.

You don’t even get this message with Google Suggest; it just doesn’t work (but you can still use it like a regular search page). As for Google Maps? Not a chance–it is a pure DHTML page, completely dependent on JavaScript. However, Mapquest still works, and works just as well with JS as without.

(Bloglines also doesn’t degrade gracefully — the subscription is based on a JavaScript enabled tree. WordPress, and hence Wordform, does degrade gracefully.)

If we’re going to get excited about new uses of existing technology, such as those that make up the concept of Ajax, then we should keep in mind the rule of degrading gracefully: Flickr is an example of a company that understands the principle; Google is an example of a company that doesn’t.

Update: As Doug mentions in comments, flickr is dependent on Flash. If Flash is not installed, it does not degrade gracefully.

Sigh, there goes my prima ballerina.

Categories
People

Sheila, Radio diva

Sheila Lennon did her first podcast, related to an article her newspaper is doing on the legend, Sister Rosetta Tharpe. It’s a nice mix of voice, story, and music, and I have to admit, there might be something to all of this podcasting stuff.

I’ve had a couple of people ask where my podcasts are. All I can answer is they’re in the future. If you all want to buy me the podcast equipment I need to do the job decently, I’ll do a podcast. Otherwise you’ll just have to assume that I have a wonderful voice that will bring you to your knees.

No, seriously. A real pip. Eat your heart out, Virginia Clark.

In the meantime, check out Sheila on Sister Tharpe. It’s a girl thang!

Categories
JavaScript

AJAX: The all-purpose cleaner

I’ve been hearing quite a bit lately about Ajax, the new wonder technology, and finally decided to take some time to check it out.

I followed the link to the Adaptive Path essay on Ajax, and started reading until I came to the diagram showing an “Ajax Engine” as part of the new innovation. Being a geek like most other geeks, I then stopped reading and started looking around for the download button for this new Ajax Engine. Not having found anything I could download, I returned to the essay and continued reading.

What I found is something that sounded strangely familiar. In fact, this ‘new’ technology sounded just like DHTML–Dynamic HTML–except that there was a reliance on a non-browser-safe object called XMLHttpRequest: an object to manage asynchronous XML access from the server from within web pages. It’s an object invented by Microsoft for IE and implemented in Firefox and Safari, but not one that’s guaranteed to be cross-browser safe to use.
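To give a sense of what that means in practice, here’s a rough sketch of the branching a page author has to do just to get the object, followed by a request for a made-up URL (this isn’t from the Adaptive Path essay; it’s just the usual pattern for the browsers mentioned above):

    // Create the request object in a cross-browser way.
    // Firefox and Safari expose XMLHttpRequest directly;
    // IE exposes it only through ActiveX.
    function createRequest() {
        if (window.XMLHttpRequest) {
            return new XMLHttpRequest();
        } else if (window.ActiveXObject) {
            return new ActiveXObject("Microsoft.XMLHTTP");
        }
        return null; // no support: fall back to plain links and forms
    }

    var request = createRequest();
    if (request) {
        request.open("GET", "/metadata.rdf", true);   // hypothetical URL
        request.onreadystatechange = function () {
            if (request.readyState == 4 && request.status == 200) {
                alert(request.responseText);
            }
        };
        request.send(null);
    }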

I admit, I was very confused by this time. I wrote on DHTML in 1998, in fact wrote the book whose cover you see in this post. I have hundreds of examples of DHTML, most several years old. I’ve even used the request object a time or two, but wasn’t necessarily overjoyed by it: I really didn’t like using proprietary objects when I couldn’t find the same functionality in other browsers.

In the FAQ attached to the essay, I did receive some clarification:

Q. Did Adaptive Path invent Ajax? Did Google? Did Adaptive Path help build Google’s Ajax applications?

A. Neither Adaptive Path nor Google invented Ajax. Google’s recent products are simply the highest-profile examples of Ajax applications. Adaptive Path was not involved in the development of Google’s Ajax applications, but we have been doing Ajax work for some of our other clients.

Q. Is Adaptive Path selling Ajax components or trademarking the name? Where can I download it?

A. Ajax isn’t something you can download. It’s an approach — a way of thinking about the architecture of web applications using certain technologies. Neither the Ajax name nor the approach are proprietary to Adaptive Path.

Q. Is Ajax just another name for XMLHttpRequest?

A. No. XMLHttpRequest is only part of the Ajax equation. XMLHttpRequest is the technical component that makes the asynchronous server communication possible; Ajax is our name for the overall approach described in the article, which relies not only on XMLHttpRequest, but on CSS, DOM, and other technologies.

I then read Dare Obasanjo’s excellent tap-tap that yes, the emperor is naked, and that Ajax really is Dynamic HTML with the use of the proprietary XMLHttpRequest object.

(Of course, I’ve used the term DHTML and Dynamic HTML, and in Dare’s write-up, even this is a renaming of an existing concept, so mea culpa in that regard.)

Wow, I didn’t know that I was ahead of the times when it comes to technology. I feel so super cool right now. To celebrate, I decided that I would go through all my burned CDs and recover my old DHTML applications in addition to old write-ups. As I find them, I’ll post them into posts labeled “Ajax” so that you know you’re supposed to jump up and down. (The old Well, if Google uses it then it must be OK thing.)

Starting with one of my favorites: The DHTML Menu Button from Hell, used to demonstrate the benefits of using DHTML to build menus. But if you don’t like that one, then how about Pick-A-Pair–a favorite script of porn sites the world over. Betcha can’t win the triple game.

Of course, this isn’t really Ajax, which requires the use of XMLHttpRequest. Let’s call it “Aja”, instead. Or perhaps “web safe Ajax”. But I’ll see if I can dig up something proprietary for you to see — back when these samples were built, years ago, I tended to focus on standards-based objects and cross-browser compatibility. I now kick myself for seeing that this wasn’t the way of the future.

You know, that’s why we women in technology never get ahead. We just don’t understand the right way of doing things.

Update

Oops, pointed to the wrong DHTML button example. I’ve updated the page to the right one.