Categories
Technology

Ajax Myth Busting

Recovered from the Wayback Machine.

Frontforge has a set of Ajax evaluation criteria that look, at first glance, quite good until you drill down into each item. Since the page is built in such a way that I can’t right-click to copy text for quotes (and it breaks in Safari, to boot), you’ll have to read each section first before reading my notes.

Taking each in turn:

#1 Back button, history, and bookmarks

If I create a slideshow using Ajax, according to this list I must make sure that each slide is reflected in the history and accessible via the browser back button, as well as being bookmarkable. However, one of the advantages of slideshows is that I can go through a set of photos and then hit the back button once to leave the page and return to where I started.

There is no contract with users that history or the back button or any such thing is ‘required’ in order for a web application to be successful. That’s all made up. What is required is that one understand how people might use the application and then respond accordingly. So, for the slideshow, rather than trying to force the browser to manage an arbitrarily constructed ‘history’, what one needs to do is understand what the users might want.

In the case of the slideshow, being able to access a specific ‘page’ using a URL that can either be bookmarked or, more likely, linked could be the critical item. Creating a ‘permalink’ for each page, and then assigning parameters to the URL, such as the following, would enable a person to return to one specific page.

http://somecompany.com?page=three

Then the JavaScript framework or whatever can parse the URL and load the proper page. As for having to maintain history, I would actually prefer to see history locked behind signed and sealed scripts and not easily accessible by JavaScript developers. This, then, would also include screwing around with the back button.
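As a minimal sketch of that approach, the script only has to read the parameter out of the query string when the page loads (the loadSlide function here is a made-up stand-in for whatever actually swaps the photo in):

// Read the ?page= parameter and jump straight to that slide.
function getPageFromUrl() {
  var query = window.location.search.substring(1); // drop the leading '?'
  var pairs = query.split('&');
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    if (parts[0] === 'page') {
      return decodeURIComponent(parts[1]);
    }
  }
  return null;
}

window.onload = function () {
  var page = getPageFromUrl();
  if (page) {
    loadSlide(page); // e.g. loadSlide('three') for the URL above
  }
};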

#2: standard and custom controls

In this one, the author states that an Ajax framework should have a good set of built-in controls to start a project and then a means to easily extend these for custom behavior. This one I do agree with: a JavaScript library (any class library, really) should include one layer that defines a basic set of behaviors, which is then used by other libraries to build more sophisticated components.

Except that this isn’t what the author is saying. What he or she means by a ‘basic’ behavior is a complete, packaged piece of functionality, such as a drag and drop or a visual fade, while the more complex behaviors would be something like a full mouseover-driven slideshow. Of course, when a complex behavior is listed as ‘basic’, it makes sense that customization becomes increasingly difficult. That’s more a matter of incorrect abstraction than of making use of something such as JavaScript’s prototype property.
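To illustrate the layering I have in mind (the names here are invented for the example), a library can define one simple base behavior and let more sophisticated components build on it through the prototype property:

// A simple base Effect that other components can extend.
function Effect(element) {
  this.element = element;
}
Effect.prototype.hide = function () {
  this.element.style.display = 'none';
};
Effect.prototype.show = function () {
  this.element.style.display = '';
};

// A more specialized component reuses the base behavior.
function Fader(element) {
  Effect.call(this, element);
}
Fader.prototype = new Effect();
Fader.prototype.fadeOut = function () {
  // a real fade would step the opacity down over time
  this.element.style.opacity = 0;
  this.hide();
};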

In my opinion, simpler components and good documentation are worth any number of increasingly sophisticated objects.

#3: single page interface

Being able to edit a page’s contents in place, such as is implemented at Flickr, is what the author is talking about here. I gather there’s now a formal name for this: single-page interface. That’s like Jesse James Garrett’s use of Ajax to describe a set of already existing technologies–if you speak with enough authority, anyone can name anything.

I hereby call this a weblog. Oh, wait a sec…

But the premise of the author’s assumption goes far beyond this. According to the writing, a page should be split into components, and then each should be loaded at runtime and formed into a cohesive whole through JavaScript. So a five-step form starts by loading the first step, and then the additional steps are loaded as the person is working through the first.

My response is: why? Why do I need to use JavaScript to access components that we already know are going to be loaded? I can see using something such as the accordion effect (collapsed sections that hide specific pieces of the form until expanded, such as that used with WordPress), but it makes no sense to load these individually, starting with the first part and then using Ajax calls to the server to load the rest.

If what’s loaded is altered by what the person has entered, such as providing a list of cities when a state is picked, then it makes sense. But if it’s static content, or content known ahead of time, we should let the web server do its job, uncomplicated by client-side overriding of basic behavior.
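For that kind of data-dependent case, the request itself is small. A rough sketch follows; the /cities URL, the ‘state’ field id, and the populateCities function are all invented for the example:

// When a state is picked, ask the server for its cities.
function loadCities(stateCode) {
  var request = window.XMLHttpRequest
    ? new XMLHttpRequest()
    : new ActiveXObject('Microsoft.XMLHTTP'); // older Internet Explorer
  request.open('GET', '/cities?state=' + encodeURIComponent(stateCode), true);
  request.onreadystatechange = function () {
    if (request.readyState === 4 && request.status === 200) {
      populateCities(request.responseText); // fill in the city list
    }
  };
  request.send(null);
}

document.getElementById('state').onchange = function () {
  loadCities(this.value);
};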

If time is the issue (the author doesn’t really state a good argument for this, but let’s assume time is the issue), then perhaps if we didn’t build such over-engineered Ajax libraries, page loading times wouldn’t be such a problem.

Hiding and displaying components of the form and allowing in-page editing does make sense. But this piecemeal loading using JavaScript? No.

#4: productivity and maintainability

This is the one that drove me to write this post. This was the ‘money’ item for me.

According to the author, I gather that it’s difficult to find qualified Ajax developers, and it’s an unwieldy burden to get existing developers (page or server-side) up to speed to be able to use all of this Ajaxian goodness. There’s even an OpenAjax movement afoot “…to promote Ajax interoperability, standards and education in order to accelerate the global adoption of Ajax.” (And it doesn’t hurt one’s business to put their membership in this effort on their About page, either. A point of fact for the effort: Ajax does not ‘stand’ for anything–it’s not an acronym. It’s just a simple term that Garrett picked.)

Instead of forcing JavaScript on folks, a better approach (according to the author) is to use existing CSS and HTML elements. How? A couple of different ways.

One way is for the script to take all elements of a certain type and add functionality to each, such as adding validation to each form element (triggered by some behavior such as loss of focus when the person keys away from the field). This isn’t a bad approach at all, though there is usually some JavaScript involved, as the person has to attach some information about how a field is to be validated. Still, it’s nice not to have to worry about attaching event handlers, capturing the events, processing the data, and so on.
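A rough sketch of that pattern, assuming the fields to check are marked with a class of ‘required’ (an invented convention for this example):

// Attach a blur handler to every text field marked with class="required".
window.onload = function () {
  var inputs = document.getElementsByTagName('input');
  for (var i = 0; i < inputs.length; i++) {
    if (inputs[i].className.indexOf('required') !== -1) {
      inputs[i].onblur = function () {
        if (this.value.replace(/\s+/g, '') === '') {
          this.style.borderColor = 'red'; // flag the empty field
        } else {
          this.style.borderColor = '';    // clear the flag
        }
      };
    }
  }
};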

However, according to ‘declarative’ Ajax, this additional information isn’t provided through JavaScript. Instead of using script to provide information to the program (no matter how simply this could be done), the Ajax framework developers have ‘extended’ HTML elements to include new attributes. With this, the poor Ajax-deprived page designer and developer need never get their fingers dirty by actually touching JavaScript.

Sounds good, except for one problem: these methods ‘break’ the page, leaving invalid HTML in their wake.

You don’t have to take my word for it. This page discusses the “right way to do Ajax is Declaratively”. The page validates, but then it’s not using any of the Ajax frameworks. One of the frameworks mentioned, Hijax, also validates, because it uses standard HTML elements (list items) and the class attribute (standard for all HTML elements) to maintain its effects. The other libraries, such as hInclude and FormFaces, do not validate. Why not? Because they’re using non-standard attributes on the HTML elements.
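The same hint can be carried in valid markup by packing it into the standard class attribute rather than inventing a new attribute. A sketch of the idea, with an invented ‘validate-’ naming convention:

// <input type="text" name="email" class="validate-email"> stays valid HTML;
// <input type="text" name="email" validate="email"> does not.
// Pull the validation type back out of the standard class attribute.
function validationTypeFor(field) {
  var classes = field.className.split(/\s+/);
  for (var i = 0; i < classes.length; i++) {
    if (classes[i].indexOf('validate-') === 0) {
      return classes[i].substring('validate-'.length); // e.g. 'email'
    }
  }
  return null;
}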

We’ve spent over a decade finally getting the cruft out of pages. I’m not going to be sanguine about blithely putting more cruft back into the pages because we want a ‘cool’ effect. (For more on this, I recommend Roger Johansson’s Why Standards Still Matter.)

Leaving aside my concerns about how much we’re overloading the ‘class’ attribute, deliberately introducing invalid HTML is not a mark of progress. Beyond how this could impact user agents such as text-to-speech browsers, this approach seems to assume that everyone who wants to incorporate Ajax, outside of a select few intelligent enough to understand its complexities, must be protected from the ‘working’ bits. That’s arrogant in the extreme, because there’s nothing overly complex at all about Ajax. It’s actually based on some of the simplest development technologies there are. Well, other than when such are obfuscated in order to puff up the concept.

Moving on to the rest of the Ajax must haves…

#5: client server

Well, this one is like being agin sin. Client server is good! OK, we got that one.

There’s little to quibble with on this item. The front-end work separated from the back end; hide cross-browser differences; communicate with the server using XML (though I think this one is negotiable). I am concerned with a growing interest in making all aspects of a web application into Ajax-initiated function calls, rather than using server-side templates and such; other than that, I can agree with this point.

#6: XPath targeting

My god, how have we ever managed to get by until now, having to use JavaScript functions such as document.getElementById. If I’m reading this one correctly, it would seem that Ajax frameworks have to take the DOM and convert it into XPath notation before we can even begin to work with a web page.

There is something to be said for being able to access a page element using XPath notation rather than having to walk the DOM’s childNodes, but the examples given don’t demonstrate the power of such an operation, and I’m not sure this is as essential as the author makes it out to be.
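For what it’s worth, browsers that implement DOM Level 3 XPath already expose this without a framework (Internet Explorer of this era does not, which is part of why the frameworks wrap it). A sketch:

// Grab every list item inside the div with id 'menu' using XPath.
var result = document.evaluate(
  "//div[@id='menu']//li",
  document,
  null,
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
  null
);
for (var i = 0; i < result.snapshotLength; i++) {
  var item = result.snapshotItem(i);
  item.style.fontWeight = 'bold'; // do something with each item
}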

#7: comprehensive event model

Luckily, I found I can copy; I just have to use the Edit menu copy rather than the right mouse button. The author writes:

Next-generation Ajax frameworks finally make it possible to put the presentation layer into the browser client where it belongs. This opens up a huge business opportunity to migrate strategic desktop applications to the web where they can deliver value to partners, customers, and remote employees. To do this, it must be possible to define mouse actions and key combinations that trigger events.

Next-generation Ajax frameworks may allow this, but I don’t see a mad dash to put desktop applications on the web. This isn’t Oz, and clicking our ruby slippers together, saying, “There’s no place like the web…there’s no place like the web” isn’t going to make this happen.

I do agree with a standard set of event handlers, and with making sure that each library chains its functionality onto an event so that it doesn’t bump another library’s (or the web page developer’s), but I don’t think this means, then, that we can move Adobe Photoshop onto the web. I don’t think this is something users are demanding.

As for making a set of ‘custom’ events, such as NextMonth for calendars, this totally abrogates much of the concept of abstraction by taking a general event (such as a mouse click) and blurring the lines between it and a business event (NextMonth). This not only adds to the complexity of the frameworks, it adds to the size of the code, as well as the points of breakage any time a new browser is released (or a new library is included in the application).

What is really needed is a basic understanding of how event handlers are attached to objects, ensuring that no one library impacts adversely on another library or on the web developer’s efforts. That and a simplified way of attaching events to objects is nice. More than that is an example of a developer with too much time on their hands.
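A minimal sketch of the kind of helper I mean, one that adds a handler without clobbering whatever another library has already attached (covering the standard model, the older Internet Explorer model, and a plain fallback):

// Attach a handler without overwriting handlers other code already set.
function addEvent(element, type, handler) {
  if (element.addEventListener) {
    element.addEventListener(type, handler, false); // standard model
  } else if (element.attachEvent) {
    element.attachEvent('on' + type, handler);      // older Internet Explorer
  } else {
    // last resort: chain onto any existing handler
    var existing = element['on' + type];
    element['on' + type] = function (event) {
      if (existing) { existing.call(element, event); }
      handler.call(element, event);
    };
  }
}

// Usage: both handlers fire; neither bumps the other.
addEvent(window, 'load', function () { /* library one's setup */ });
addEvent(window, 'load', function () { /* library two's setup */ });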

#8: state and observer pattern

It always amazes me to see even the simplest concepts obscured whenever the term ‘Ajax’ is involved.

This last item has some good points, but they’re heavily obscured by design pattern terminology such as “Observer pattern” and so on. Yes, I know that the use of such terms is to make communication easier, so that when a person writes “Observer pattern” we understand what they mean without them going into details. But this also means that the reader has to stop reading, go to the pattern page, read the pattern, get understanding, return, and then try to decipher whether what the person is writing is what was meant by the pattern, and so on.

I understand what is meant by ‘observer pattern’, and I can agree with the idea that it would be nice to attach a behavior (register an observer) to a form button that allows it to enable itself when certain form fields are completed. The whole thing is based on storing information about the ‘state’ of the object, which, as the author of this Ajax framework must-haves list notes, isn’t part of basic JavaScript behavior.

Except that it is. Within the same page, information about the state of an element is stored all the time, and I fail to see why there needs to be a formalized mechanism in place to facilitate this. As for storing state between pages, again, mechanisms are in place through cookies, server-side storage, and so on.

As for observer/registration, is it that we want all blur events triggered in the page to be funneled through an event handler for the form submit button, which checks to see if it can now enable itself? Or blur events for the target fields? That’s doable, but seems overly process-intensive. I would think a better approach would be to have the form submit button enabled, and then, when the form is submitted, use Ajax (JavaScript) to test whether required fields are filled in, and direct the person’s attention to what’s missing. Then event handling is simplified, and the only time the process is invoked is when the form is submitted, not when each field is filled in.
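A quick sketch of that approach; the form id, the field names, and the message are invented for the example:

// Validate required fields only when the form is submitted.
document.getElementById('signup').onsubmit = function () {
  var required = ['name', 'email']; // invented field names
  var missing = [];
  for (var i = 0; i < required.length; i++) {
    var field = this.elements[required[i]];
    if (!field || field.value.replace(/\s+/g, '') === '') {
      missing.push(required[i]);
    }
  }
  if (missing.length > 0) {
    alert('Please fill in: ' + missing.join(', '));
    return false; // stop the submission, point out what's missing
  }
  return true;    // everything's there, let it go to the server
};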

In other words, what may work well in an application built with VB.NET may not work well, or may never work well, within a browser, and what we should not do is coerce functionality from one development environment onto another.

That’s the key with many of these ‘essential’ items for an Ajax framework: they add unnecessarily to the complexity of the libraries, as well as to their size, the number of points where a break can occur, and the likelihood of failure when more than one library is used or a new browser version is released.

In web development, there’s one important rule: less is more.

Thanks to Ajaxian for the pointer to this list. In the interests of disclosure, note that I am writing a book, Adding Ajax, in which I demonstrate that any web application developer familiar with basic JavaScript can add Ajax benefits and effects to their existing web applications. As such, I’ve violated the premise behind #4 by assuming that just anyone can become an ‘Ajax’ developer.

Categories
Web

Web 1.0 must die

Recovered from the Wayback Machine.

I think Web 2.0 is killing Web 1.0. I think there’s a ‘young eating their parent’ thing going on.

Amazon has been using its resources to put out S3 and the new video service at the same time that the company’s bread & butter online store seems to be taking a hit. Maybe it’s just me and my machines and my internet access, but the site is slow, the redirecting is broken, and the pages are excessively cluttered now.

Google’s another. I really like Google maps, and gmail was OK, but much of the new functionality the company is putting out seems to only appeal to a small group of geeks. The office style products are, to put it delicately, uninspiring. In the meantime, I’ve noticed more and more that searches return links to stores or businesses, rather than useful information. The SEOs are winning, while Google is focused on profitable good works.

Yahoo, on the other hand, bought its way into Web 2.0–perhaps as a strategy to keep from being eaten by the young ‘uns.

eBay bought Skype and Paypal, and we know why it bought Paypal, but I think we’re all collectively scratching our heads over the Skype purchase. Meanwhile, the company flounders around trying to find new revenue streams, while its core functionality is being phished to death.

Even the venerable conferences of yore are giving way to SillyValley “Meet Mike” or “Stick it to Tim” shmooze and booze sessions where any pretense of actually discussing technology has given way to breathless panting about startups: hot or not. Isn’t it nice to know that the long tail is being wagged by a puppy?

Categories
Web

And the young eats itself

Recovered from the Wayback Machine.

Liz Gannes at GigaOM writes a story on Evan Williams and Odeo, and Williams’ confession at the recent Web Apps of the Future yak fest. Williams talks about how he royally screwed up with his startup, Odeo, burning through what sounds like millions and hiring a staff of 14, all to build a product for an audience the company hadn’t even defined yet. So now that Ev has recognized his mistake, is he going to do better?

So what’s he doing to fix these mistakes? Not refunding the VCs their investment, that’s for sure. And not even trying to earn revenue; Williams freely admitted Odeo hasn’t yet settled on a business model.

I expected Williams to get at least a verbal slap for such, but oh no.

All in all, we can’t say we came out of the presentations convinced Odeo is set to conquer the universe, but Williams’ honesty and humility are admirable. The best part is, his advice has a chance of making an impact while it’s still relevant to today’s startups.

Speechless. I’m speechless.

Categories
Technology

Office of the Future

Recovered from the Wayback Machine.

What the heck, it’s Friday, so I might as well push Nick Carr’s post up the Techmeme flagpole.

I can agree with Carr on the following:

Whatever the flaws of Microsoft Office, most end users are comfortable with it – and they have little motivation to overturn the apple cart. What is absolutely unacceptable to them is to take a step backward in functionality – which is exactly what would be required to make the leap to web PPAs today. Web apps not only disappear when you lose an internet connection, they are also less responsive for many common tasks, don’t handle existing Office files very well, have deficiencies in printing (never underestimate the importance of hard copy in business), and have fewer features (Microsoft Office of course has way too many, but – here’s the rub – different people value different ones). Moreover, many of the current web apps are standalone apps and thus represent an unwelcome retreat to the fragmented world of Office 1.0. Finally, the apps are immature and may change dramatically or even disappear tomorrow – not a strong selling point for the corporate market.

Aside from everyone completely discounting OpenOffice and the Mac hybrids and interest in open source, the point is good: why should people give up functionality for the dubious distinction of having part or all of said functionality hosted on the web?

Where I disagree is with the following:

What we’re entering, then, is a transitional generation for office apps, involving a desktop/web hybrid. This generation will last for a number of years, with more and more application functionality moving onto the web as network capabilities, standards, and connectivity continue to advance. At some point, and almost seamlessly, from the user’s perspective, the apps will become more or less fully web-based and we’ll have reached the era of what I call Office 4.0 (and what others currently call Office 2.0). Driving the shift will be the desire of companies, filtered through their IT staffs, to dramatically simplify their IT infrastructure. Mature web-based apps don’t require local hardware, or local installation and maintenance, or local trouble-shooting, or local upgrading – they reduce costs and increase flexibility. These considerations are largely invisible to end users, but they’re very important to companies and will become increasingly important as the IT world shifts to what might be called utility-class computing.

I hear of two reasons for net-hosted office tools: collaboration (Office 2.0) and ease of maintenance (Carr’s Office 4.0).

First of all, in his timeline of Office architectures, Carr neglects to mention all of the work done in the last decade on collaborative tools, such as Ray Ozzie’s Groove, or the old Lotus Notes. These are infrastructures set up for collaboration, but aren’t necessarily considered ‘office’ tools.

Where the idea arose that the functionality provided by office-like tools must be collaborative in nature, I don’t know; for the most part, whatever would make these collaborative would probably make them unattractive to the typical user. Think of Word as a wiki and you’ll get the point.

I worked in an insurance company that used Lotus Notes to track software bugs, testing, and communication between the members of the entire development staff. It worked well. I also remember a woman putting a Word document up on the division’s intranet without locking out edits and a male supervisor editing the hell out of it in the interests of ‘collaboration’. There was a pretty horrid row over that one and the two ended up barely speaking to each other. So much for collaboration.

There are tools for collaboration and there are tools for individual contributions. You mix the two, and you’re not necessarily working to people’s expectations.

There’s also a centralized element to the Office 2.0 of today, and the Office whatever of the future. If the purpose of the tools is to enable collaboration, then the documents produced have to be stored centrally. Some architectures like Groove get around this by listing documents on an individual’s PC as being in the group’s space. However, if the person goes offline, and the document hasn’t been opened by another yet (and hence copied to their machine), *poof* document gone.

Yet a centralized system is a target for hackers, or at a minimum a place of vulnerability that could have an impact far beyond one person’s machine failing. If my machine fails, I’m held up from work. If a centralized service fails, the entire department gets off work early that day.

If collaboration is not an issue, there’s absolutely no indicator other than wishful thinking that tools to create things are better when hosted on the net. Doing so implies making changes to the underlying web infrastructure that add further points of vulnerability.

Many Ajax hackers are working to override or overcome the web browser barriers put in place to protect us from various forms of attacks. Why? Just to build tools such as those in Office 2.0. They use Flash and all manner of technology in order to store increasingly large amounts of data on the client, many times without us even knowing such is happening. Why? Just to build tools such as those in Office 2.0.

What was it the character Will Smith played in the movie Independence Day said about the dog bringing slippers to him in bed?

If he wants to impress me, why don’t he go out and get a job or something.

I know it can be a twisted bit of code to make a Word-like interface on the web, but I can’t be impressed by such when I don’t see that it’s all that useful.

IT departments wanting this new web-based functionality to reduce the overhead that comes from upgrades of individually hosted applications makes more sense, and I remember this from days long ago when I was a Corporate Employee. Again, though, there have been innovations in computer maintenance that simplify upgrades at a global scale, and most companies (medium to large) can make a deal for good pricing of applications.

Carr agrees with the Office 2.0 crowd on one point: that the natural progression is for Office to move to the web. Not just to provide web services, but to be hosted and accessed through the web (or, more likely, a company intranet). How feasible is this, though? We’ve already gone through our phase of thin clients and net-hosted functionality, and no one was buying: corporate or individual.

The concern Carr mentions about companies reducing costs this way: how much of an issue is that today? I would say companies have other, more important issues. For instance, security.

Will web services be cheaper? Considering that OpenOffice and NeoOffice and such are free, I’m not sure how a web service can be cheaper. Eventually, all of them will have to make some form of money. Ads in the same page where you’re writing your document? Not likely.

Who wants these tools? I don’t know. I do know that I’m seeing a number of applications that provide a desktop tool for web-based applications, such as Blogger and WordPress. That’s the way of the future: editing on the client and simplified publishing to the group or the web; specialized readers that provide access to specialized data.

I agree with Carr: the whole plethora of so-called “Office 2.0” applications has very little chance of success. Yes, even those created by Google. Where I disagree with Carr is that, based on today’s web architecture, I don’t see this changing in the future.

Categories
Technology

Apple Pie

Recovered from the Wayback Machine.

Since Apple released the new iTunes, movie downloads, games, and television show resolution changes, I’ve been testing them all out on my Windows PC.

Originally I had my main iTunes installation and music on my Mac; that machine is now devoted purely to photography and development, which means I needed to move the installation. However, my iPod had been formatted for the Mac, which made it unusable on the PC. Luckily, the new iTunes interface provides a Restore option that restores the iPod to factory settings–including the FAT32 file system. After reformatting, I uploaded all my music back to the iPod in preparation for the move to the PC using the new transfer process.

Unfortunately, transferring only works with Apple-purchased media. Luckily, I have the Apple folders backed up on my portable storage device, and it was simple to add the music using the iTunes Add Folder import option.

Once moved, I downloaded a couple of TV shows (Eureka) and a movie: Under the Tuscan Sun. I also downloaded two games: Bejeweled and Mahjong.

The games are amazingly well done, considering that the only user interface you have with an iPod is the touch wheel. I wondered how the designers could get the Mahjong tiles to show up on such a small screen, but the entire game is beautifully crafted, and the tile designs are sharply distinctive.

(Rich colors, clever use of feedback, lovely background.)

For the TV shows and the movie, I used the iTunes player and projected the shows on my new 27-inch HD widescreen TV. The extra resolution of the downloads is noticeable. They’re not as sharply detailed as a DVD would be through my upconverting DVD player over an HDMI connection to the TV, but they’re much more rewarding than watching the shows on my old television. Especially the color: I don’t know how these were digitized, but I’ve never seen richer colors. Even indigo blue, impossible to pick up on a regular television, came through with flying colors. The same goes for the movie, though it seemed crisper and better viewing than the TV shows.

Unless you sit a few feet from the TV screen, the viewing experience is very satisfying. The iTunes player also provides chapter selection, so you can go to a specific scene in the movie just like with a DVD.

I had my iTunes sound turned to the max (one bug was having this set lower) and I controlled the sound through my Logitech speakers. With their associated base unit, I had a surprisingly good media experience from a file that was originally meant to be played on an itty bitty iPod screen.

People have had problems, and iTunes 7.0 has been touted as a ‘lemon’. However, I’ve tried iTunes on three machines and have had nothing more than minor glitches. I noticed a few quirks with the download, and I had to re-authorize my system to play the games. I wish Apple provided a way to upgrade already-downloaded television episodes to the new higher resolution, as well as a way to back up all files from iPod to computer, but I like the new interface. I like being able to ‘flip’ through albums (and have been inspired to create something based on this, using PHP and Ajax), and the cleaner, simpler layout.

As for the movies and not being able to burn a DVD, I must confess this is not a problem for me. I can watch these movies and TV shows on all my computers and my television. The quality is very good, and though the price isn’t as cheap as I’d like, it is cheaper than Amazon. More importantly, I don’t need plastic, and would prefer that we get to a point where media is not burned on plastic. (Plastic is not eco-friendly.)

When I hear people concerned about not being able to burn a DVD, and not being able to ‘loan’ DVDs to friends and so on, I have to wonder how much of an issue this is. I, personally, would never borrow a friend’s DVD (I’d be too worried about damaging it). As soon as I buy a DVD, I rip it to have on my machine or in secondary storage, though I’ve not been able to rip any movie to match the quality of Apple’s digitization. (How did they get that vivid indigo blue?)

Another issue is DRM. If we go Apple, we’re going DRM, but if we go Zune, we’re going a different DRM (same for Guba, for Amazon, and so on). Unlike music, I don’t think that we’ll ever be able to burn DVDs from a download service. Either we continue buying movies-on-plastic, or we go with the internet/digital approach that works for us.

I’ll probably pass on iTV, as I have a decent connection between my computer and my TV–in fact, I have an entire media corner, and feel just like the hip kids (so cool–kiss my toes). I do like the wireless connectivity of iTV, and being able to use an HDMI or composite video interface between computer and TV, so I’m keeping my options open.

If Apple hasn’t given me the ability to burn DVDs to plastic, it did give me something else: freedom from cable. I can now download my favorite television shows from iTunes, watch them whenever I want, and joyfully cancel service from a company that thinks it has me ‘locked in’, and that has been treating me and all its customers with extreme indifference. There’s more than one form of lock-in: right now, I’ll pick Apple’s over Charter’s.

I’m not that interested in the iPod announcements, other than that it’s good to see prices drop and storage increase. I’m happy with my 30GB and still have room, even with the games. I think we should start a pool to see whose iPod Shuffle goes through the spin cycle first. The brushed aluminum for the Nanos is a good idea, but I bet you can still easily scratch the view screen.

Microsoft also just released its new player: Zune. Or is that, released a press release talking about its new player and service?

Interesting use of colors. I like what one commenter said:

And did market research tell MS that people were CRAVING a brown DAP? “I Love the iPod, but I wish it was colored like a turd!”

Zune in Brown

Did I read the rumors correctly? Will you be able to run Apple media files on Zune? If so, that’s one less nail in the lock-in door. If not, hopefully over time we’ll not have such proprietary formats. I still wouldn’t buy a Zune: the larger video screen of Zune doesn’t do that much for me. I don’t watch movies on my iPod, and think the new game option is a better time killer.

(Too much time being killed, must behave now.)

I’m intrigued by Zune’s subscription service, and wonder how many labels MS has signed to provide music. I can’t imagine many of the big labels being happy about a subscription service. It’s a good option, though, and I’ll be curious to see how it works.

Oh, and I’ll pass on the Wifi. Stream a song to your friend (who also has to have a Zune) just so they can listen to it three times before being told to buy it? This is a joke, right?

The concept of customer cloning is representative of who Microsoft sees as its audience: Zune is being targeted so aggressively at the under-30s (and the über chic) that I feel Microsoft doesn’t really want me as a customer; sort of like me buying one would be, “There goes the neighborhood.” I already experienced customer disdain from Charter; I’ll pass on it from Little Blue.

So far this week:

Amazon – A big 0. Zero, zip, nada, burn the witch

Apple – +1

Microsoft’s Zune – Don’t ask me, I’m not a 23 year old Urban Goth who listens to independent garage bands and hip hop.