Categories
JavaScript

Progressive Enhancement

The book title, Adding Ajax, should be synonymous with the concept of progressive enhancement in Ajax development, and I’ve gone through the earlier chapters and adjusted accordingly.

Progressive enhancement (or should that be Progressive Enhancement?) is the philosophy that you first create web pages that don’t require any scripting at all, and then add scripting effects in such a way that each degrades gracefully and doesn’t impact the accessibility of the page.

Most web development should use progressive enhancement, because most sites aren’t creating Word in a web page, or yet another spreadsheet. Most web sites are providing information, stores, how-tos, and the like: content that isn’t dependent on scripting. The scripting is an enhancement.

Starting with the script and then adding accessibility comes off as clumsy. Starting with the basic application and adding scripting and Ajax effects is a much more elegant approach.
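
As a rough illustration of the approach (a minimal sketch of my own, not taken from the book, with made-up ids and URLs): the page starts with a plain link that works without any scripting, and the script layers the Ajax behavior on top only when the browser can actually support it.

    // The markup is an ordinary link that works with scripting turned off:
    //
    //   <a id="more-link" href="/archive.html">Read more</a>
    //   <div id="content">...</div>
    //
    // The script enhances the link only if the browser supports the enhancement;
    // otherwise the link simply behaves as it always did.
    window.onload = function () {
      var link = document.getElementById('more-link');
      if (!link || !window.XMLHttpRequest) return;   // degrade gracefully

      link.onclick = function () {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', link.href, true);
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById('content').innerHTML = xhr.responseText;
          }
        };
        xhr.send(null);
        return false;   // cancel the default navigation only when Ajax takes over
      };
    };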

The one last step in all of this is how to make dynamic web page changes apparent to screen readers. Juicy Studio has documented approaches with JAWS, first in Making Ajax Work with Screen Readers and more recently in Improving Ajax Applications for JAWS Users.

Categories
Technology Weblogging

Ella sings the Blues

WordPress 2.1 is code-named “Ella” after Ella Fitzgerald. Well, if that’s so, then Ella is singing the blues.

In the next couple of weeks, I planned on upgrading all of my weblogs to the new version. I’m also moving the few people who had tried out Wordform to WordPress 2.1, so they’re using a supported weblogging tool. However, when I read in Mark Pilgrim’s weblog this morning that WordPress 2.1 still does not have a valid Atom 1.0 feed, I thought to myself, “No, that can’t possibly be. After all the work people did to create a patch? After all the promises of a valid Atom feed since version 1.6?”

It’s true, though. I downloaded the installation this AM and there it is: Atom 0.3.

There’s nothing like syndication to bring out the schoolyard in supposedly reasonable adults. I don’t think even the “vi versus emacs” wars compare.

I’m still porting all the weblogs to WordPress 2.1. I’ll be installing the Atom 0.3 feed on the ported Wordform weblogs, because I don’t want to put non-standard files on other people’s sites. However, I’m using my own edited Atom 1.0 feed for my sites. It works, but I do tire of having to keep a backup of it so it’s not overwritten with every bug-fix and design release.

Do I appreciate having this software? You bet. Do I appreciate the team working on it? You bet. But continuing to release an invalid feed syntax sounds a sour note, and the Ella I know and adore would never sing off-key.

update

Joe Gregorio on the phantom Atom 1.0 release.

Categories
Web

Wikipedia and nofollow

Recovered from the Wayback Machine.

That bastard Google wrap-around, nofollow, rears its ugly little head again, this time with Wikipedia. Jimmy Wales, chief Pedian, has issued a proclamation that Wikipedia outgoing links will now be labeled with ‘nofollow’, as a measure to prevent link spam.

seomoz.org seems to think this is a good thing:

What will be interesting to watch is how it really affects Wikipedia’s spam problem. From my perspective, there may be slightly less of an incentive for spammers to hit Wikipedia pages in the short term, but no less value to serious marketers seeking to boost traffic and authority by creating relevant Wikipedia links.

Philipp Jenson is far less sanguine, writing:

What happens as a consequence, in my opinion, is that Wikipedia gets valuable backlinks from all over the web, in huge quantity, and of huge importance – normal links, not “nofollow” links; this is what makes Wikipedia rank so well – but as of now, they’re not giving any of this back. The problem of Wikipedia link spam is real, but the solution to this spam problem may introduce an even bigger problem: Wikipedia has become a website that takes from the communities but doesn’t give back, skewing web etiquette as well as tools that work on this etiquette (like search engines, which analyze the web’s link structure). That’s why I find Wikipedia’s move very disappointing.

Nick Carr agrees, writing:

Although the no-follow move is certainly understandable from a spam-fighting perspective, it turns Wikipedia into something of a black hole on the Net. It sucks up vast quantities of link energy but never releases any.

Seth Finkelstein notices something else: WIKIPEDIA IS NOT AN ANARCHY! THERE IS SOMEBODY IN CHARGE!

The rel=”nofollow” attribute is a web extension I despise, and nothing in the time since it was first released–primarily because of weblog comment spam–has caused me to change my mind. As soon as we saw it, we knew the potential for misuse existed, and people have lived down to my expectations since: using it to ‘punish’ web sites or people by withholding search engine ranking.

Even when we feel justified in its use, so as to withhold link juice from a ‘bad’ site (such as the one recently Google bombed that had misleading facts about Martin Luther King), we’re breaking the web as we know it. There should be no ‘good’ or ‘bad’ to an item showing up on a search list: if one site is talked about and linked more than another, regardless of the crap it contains, it’s a more topically relevant site. Not authoritative, not ‘good’, not ‘bad’, not definitive: topically relevant.

(Of course, if it is higher ranked because of Google bombing of its own, that’s a different story, but that’s not always the case.)

To return to the issue of Wikipedia and search engine ranking, personally I think one solution to this conundrum would be to remove Wikipedia from the search results. Two reasons for this:

First, Wikipedia is ubiquitous. If you’ve been on the web for even a few months, you know about it, and chances are that when you’re searching on a topic, you know to go directly to Wikipedia to see what it has. If you’ve been on the web long enough, you also know that you have to be skeptical of the data you find, because you can’t trust the veracity of the material on Wikipedia. I imagine that schools also provide their own “Thou shalt not quote Wikipedia” rule for budding young essayists.

Reason one leads to reason number two: for those folks new to this search thing, ending up on Wikipedia could give them the impression that they’ve landed on a top-down, authority-driven site, and they may put more trust into the data than they should. After all, if they’re not that familiar with search engines, they certainly aren’t familiar with a wiki.

Instead of in-page search result entries, Google, Yahoo, MSN, any search engine should just provide a sidebar link to the relevant Wikipedia entry, with a note and a disclaimer about Wikipedia being a user-driven data source, and how one should not accept that this site has the definitive answer on any topic. Perhaps a link to a “What is Wikipedia?” FAQ would be a good idea.

Once sidebarred, don’t include Wikipedia in any search mechanism, period. Don’t ‘read’ its pages for links, and discard any links to its pages.

Wikipedia is now one of those rare sources on the web that has a golden door. In other words, it doesn’t need an entry point through a search engine for people to ‘discover’ it. If anything, its appearance in search engine results is a distraction. It would be like Google linking to Yahoo’s search results for a term, or Yahoo linking to Google’s: yeah, we all know they’re there, but show me something new or different.

More importantly, Wikipedia needs to have a “Search Engine General’s” warning sticker attached to it before a person clicks that link. If it continues to dominate search results, we may eventually get to the point where all knowledge flows from one source, and everyone, including the Wikipedia folks, knows that this is bad.

This also solves the problem of Wikipedia being a black hole, as well as the giving and taking of page rank: just remove it completely from the equation, and the issue is moot.

I think Wikipedia is the first non-search engine internet source to truly not need search engines to be discovered. As such, a little sidebar entry for the newbies, properly annotated with a quiet little “there be dragons here” warning, would eliminate the spam problem, while not adding to a heightened sense of distrust of Wikipedia actions.

One other thing worth noting is seomoz.org’s note about a link in Wikipedia enhancing one’s authority: again, putting a relevant link to Wikipedia into the search engine sidebars, with a link to a “What is Wikipedia?” FAQ page, as well as the dragon warning, will help to ‘lighten’ some of the authority attached to having a link in the Wikipedia. Regardless, I defer to Philipp’s assertion that Wikipedia is self-healing: if a link really isn’t all that authoritative, it will be expunged.

Categories
Web

Article pulled from Google’s database?

Post wasn’t pulled, just not propagated across all the data centers. Did I happen to mention I haven’t had a good night’s sleep for the last few days? Disregard the paranoia.

However, there is a silver lining. Thanks to Seth for pointing out this Google data center tool. Put in the search term, and then switch among the data centers.

Categories
JavaScript RDF

To JSON or not to JSON

Recovered from the Wayback Machine.

Dare Obasanjo may be out of some Ajax developers’ spheres…actually, I’m probably out of most Ajax developers’ spheres*…but just in case you haven’t seen his recent JSON/XML posts, I would highly recommend them:

The GMail Security Flaw and Canary Values, which provides some sound advice for those happily exposing all their vulnerable applications to GET requests with little consideration of security. I felt, though, that the GMail example was way overblown for the consternation it caused.

JSON vs. XML: Browser security models. This gets into the cross-domain issue, which helped increase JSON’s popularity. Before you jump in with “But, but…” let me finish the list.

JSON vs. XML: Browser Programming Model on JSON being an easier programming model. Before you jump in with “But, but,…” let me finish the list.

XML has too many Architect Astronauts. Yeah, if you didn’t recognize a Joel Spolskyism in that title, you’re not reading enough Joel Spolsky.

In the comments associated with this last post, a note was made to the effect that the cross-domain solution that helped make JSON more popular doesn’t require JSON. All it requires is to surround the data returned in curly brackets, and use the given callback function name. You could use any number of parameters in any number of formats, including XML, as long as it’s framed correctly as a function parameter list.
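
To make the point concrete, here is a minimal sketch of that script-tag/callback pattern (the endpoint URL, parameter name, and function name are all made up for illustration): the service simply wraps whatever payload it wants to return, JSON or otherwise, in a call to the named function.

    // 1. Define the callback the remote service will invoke.
    function handleData(payload) {
      // payload is whatever the service passed as the function argument:
      // a JSON object, a string of XML, comma separated values, etc.
      alert(payload);
    }

    // 2. Inject a script element pointing at the remote service. The response
    //    is executable JavaScript of the form:
    //      handleData({"title": "example"});
    //    or just as easily:
    //      handleData("<items><item>example</item></items>");
    var script = document.createElement('script');
    script.src = 'http://example.com/service?callback=handleData';
    document.getElementsByTagName('head')[0].appendChild(script);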

As for the security issues, JSON has little to do with that, either. Again, if you’re providing a solution where people can call your services from external domains, you better make sure you’re not giving away vital information (and that your server can handle the load, and that you ensure some nasty bit of text can’t get through and cause havoc).

I’ve seen this multiple places, so apologies if you’ve said this and I’m not quoting you directly, but one thing JSON provides is more efficient data access than is provided by many browsers’ XML parsers. Even then, unless you’re making a lot of calls, with a lot of data, and for a lot of people, most applications could use either JSON or XML without any impact on the user or the server. I, personally, have not found the XML difficult to process, and if I wanted really easy data returns, I’d use formatted HTML–which is another format that can be used.
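
For what it’s worth, here’s a rough sketch of the difference in access styles (2007-era code; the response shapes and field names are invented for illustration, and xhr is assumed to be a completed XMLHttpRequest):

    // JSON: after an eval (or JSON.parse, where available), the response is
    // already a JavaScript structure.
    var data = eval('(' + xhr.responseText + ')');
    var title = data.items[0].title;

    // XML: the same value dug out of the DOM in xhr.responseXML.
    var items = xhr.responseXML.getElementsByTagName('item');
    var titleFromXml = items[0].getElementsByTagName('title')[0].firstChild.nodeValue;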

You could also use Turtle, the newly favored RDF format.

You could use comma separated values.

You could use any of these formats with either the cross-domain solution, or using XMLHttpRequest. Yes, really, really.

As was commented at Dare’s, the cross-domain issue is not dependent on JSON. HOWEVER, and this one is worthy of capitals: most people ASSUME that JSON is used, so if you’re not returning JSON, you better make sure that you a) let a person choose the return format (which is a superior option), and/or b) make people aware that you’re not using JSON by default with callback functions.

As for using JSON for all web service requests, give us a break, mate. Here’s a story:

When the new bankruptcy laws were put into effect in the year 2005, Congress looked around to find some standard from which to derive ‘reasonable’ living costs for people who have to take the new means test. Rather than bring in experts and ask for advice, their eyes landed on the “standards of living expenses” defined by the IRS to determine who could pay what on their income tax.

The thing is, the IRS considers payment to itself to be about as important as buying food, and more important than paying a doctor. The IRS also did not expect that its means test would be used by any other agency, including Congress, to define standards for bankruptcy. The IRS was very unhappy when it discovered as much.

In other words, just because it ‘works’ in one context doesn’t mean it works well in all contexts: something that works for one type of application shouldn’t be used for all types of applications. Yes, ECMAScript provides data typing information, but that’s not a reason to use JSON in place of XML. Repeat after me: JavaScript/ECMAScript is loosely typed. I’m not sure I’d want to model a data exchange with ‘built-in typing’ based on a loosely typed system.

Consumers of JSON or XML (or comma separated values for that matter) can treat the data they receive in any way they want, including parsing it as a different data type than what the originator intended. Yes, JSON brings a basic data typing, and enforces a particular encoding, but for most applications, we munge the returned data to ensure it fits within our intended environment, anyway.
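
A trivial, made-up example of that munging (the field names are invented purely for illustration):

    // Whatever typing the wire format claims to carry, the consumer usually
    // coerces values to fit its own environment anyway.
    var response = { "count": "42", "enabled": "true" };
    var count = parseInt(response.count, 10);        // string coerced to number
    var enabled = (response.enabled === "true");     // string coerced to boolean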

What’s more important to consider is: aren’t we getting a little old to continually toss out ‘old reliables’ just because a new kid comes along? I look at the people involved in this discussion and I’m forced to ask: is this a guy thing? Toss out the minivan and buy the red Ferrari? Toss out the ‘old’ wife for a woman younger than your favorite shirt? Toss out old data formats? Are the tools one uses synonymous with the tools we have?

Snarky joking aside, and channeling Joel Spolsky, who was spot on in his writing: just because a new tech is sexy for its ‘newness’ doesn’t mean that it has to be used as a template for all that we do.

The biggest hurdle RDF has faced was its implementation in XML. It’s taken me a long time to be willing to budge on only using RDF/XML, primarily because we have such a wealth of tools to work with XML, and one can keep one’s RDF/XML cruft-free and still meaningful and workable with these same tools. More importantly, RDF/XML is the ‘formal’ serialization technique, and there are advantages to knowing what you’re going to get when working with any number of RDF APIs. However, I have to face the inevitable in that people reject RDF because of RDF/XML. If accepting Turtle is the way to get acceptance of RDF, then I must. I’d rather take another shot at cleaning up RDF/XML, but I don’t see this happening, so I must bow to the inevitable (though I only use RDF/XML for my own work).

We lose a lot, though, going with Turtle. We lose the tools, the understanding, the validators, the peripheral technologies, and so on. This is a significant loss, and I’m sometimes unsure if the RDF community really understands what they’re doing by embracing yet another serialization format for yet another data model.

Now we’re doing the same with JSON. JSON works in its particular niche, and does really well in that niche. It’s OK if we use JSON; no one is going to think we’re only web developers and not real programmers if we do. We don’t have to make it bigger than it is. If we do, we’re just going to screw it up, and then it won’t work well even within that niche.

Flickr and other web services let us pick the format of the returned data. Frankly, applications that can serve multiple formats should do so, and let people pick which they use. That way, everyone is happy.
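
Something along these lines, in other words (a sketch only; the endpoint and its format parameter are hypothetical, not Flickr’s actual API):

    // The caller picks the representation it wants via a query parameter.
    var format = 'json';   // could just as easily be 'xml' or 'csv'
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/service/photos?format=' + format, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var result = (format === 'json')
          ? eval('(' + xhr.responseText + ')')   // JSON text into an object
          : xhr.responseXML;                     // parsed XML document
        // ... work with result ...
      }
    };
    xhr.send(null);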

Ajaxian: Next up: CSV vs. Fixed Width Documents. *snork*

*most likely having something to do with my sense of humor and ill-timed levity.