Categories
Critters

We’re bored

Recovered from the Wayback Machine.

It was stormy last night, stormy today. Stormy and wet.

I’m at my computer at the window overlooking the street when I see a young man dressed in sweats running down the street. Running full out, as if he’s chasing something. I stand up and look out the window and watch as he meets up with another young man, also wearing sweats. I wonder if they’re fighting, but they seem amicable. I then wonder if they’re playing some form of adult keepaway.

One looks up the street, points a finger, and takes off; the other follows. In front of them is a small brown/gray body running like mad. I think perhaps a cat or a dog had gotten loose and they’re trying to catch it.

The gray body runs under a car and then out, and I realize it’s a squirrel. They’re chasing the squirrel. They’re chasing all the squirrels, all over the neighborhood–into trees, under cars, down the street.

The squirrels are scared to death and the young men are laughing, having a grand old time, faces red with their effort.

I run downstairs and open the door and when they start past, I ask them why they’re chasing the squirrels.

“We’re bored,” one replies. “Nothing else to do,” says the other.

I tell them to knock it off, they’re scaring the squirrels.

“There’s nothing wrong with scaring squirrels,” one says. The other yells out, “They shoot squirrels in this state.”

I say then they should get a gun and shoot the squirrels. The one says, “You think it’s better to shoot the squirrels than chase them?”

I do.

I tell them to stop, now, or I’ll call management. I also tell them to grow up. My roommate says this won’t stop them, but I disagree. They have something else to do now, in their boredom: bitch about the woman who yelled at them to knock it off and grow up.

Categories
Critters

They shoot squirrels, don’t they?


Many experts believe the red/grey/white tension is simply a by-product of the longstanding White Squirrel War and Black Squirrel Squabble that have long raged in the U.S. And, as these conflicts make clear, it is not just squirrel infighting that threatens to explode into an orgiastic, terrorist bloodbath. Squirrels are increasingly making humans the victims of their savage and cowardly terrorist acts. In fact, for every bad deed done to man by man, or man by nature, I can cite five that are the work of squirrels.

For example, an undoubtedly cute squirrel terrorist sabotaged a power transformer in Tampa, Fla. last week, according to Tampa Bay’s 10. His deliberate action caused close to 2,000 Tampans to lose power for up to four hours; power was restored, the piece emphasizes, “by ten p-m.”

Some animal lover might claim this act was not deliberate. To them I say, “You’re nuts.”

How else to explain a coordinated squirrel suicide attack in Kansas City — a thousand miles away — on yet another power transformer on the same day? The KC squirrel’s terrorist act caused 1,700 customers to lose power.

Another 2,000 electrical customers also lost power recently in Brighton, Mass., after still another squirrel attack on our power supply — this one carried out in spite of security measures, like “squirrel guards,” taken to prevent such terrorist acts.

Attacks on our power supply and petty crimes (such as theft of household bird feed) have not proven sufficiently vile for squirrels, though. They have begun to target humans with direct physical assaults. In Leominster, Mass., last month, a squirrel attacked a police officer who was attempting to arrest the squirrel’s human minder. As this gripping slideshow of the carnage makes clear, the squirrel is now caged, the officer’s surname is “Flowers,” and he was mocked handily by his peers.

From Squirrels: They’re cute, they’re fluffy, and they must be stopped

Categories
Media

Saturday Matinee

Recovered from the Wayback Machine.

I’m returning to the “Saturday Matinee” posts, beginning this weekend, in which I’ll review an oldie-but-goodie movie. Tomorrow, it’s a double feature: Voyage to the Prehistoric Planet and Voyage to the Planet of Prehistoric Women.

An amazing number of old science fiction movies have entered the public domain and been re-released on multi-disc packs. Many of them are fascinating to watch, because they provide a hint of the culture of the times (and of the country where the film originated). I’ve also found that even the worst of these movies usually has a tiny spark of brilliance somewhere in it; if you pay attention, your time is rewarded.

Some movies, such as The Beginning of the End, have been lovingly restored to excellent condition for modern displays, and include interviews with someone associated with the movie (such as the director’s wife, or the director themselves). These movies typically have ‘bad’ ratings, and everyone talks about the poor acting or the ‘cheezy’ effects.

I think we’ve become too dependent on effects now–especially computer effects. I, for one, really like the original Star Wars movies, with their use of models rather than the more modern CGI-based prequel trilogy. I recently watched Disney’s Lady and the Tramp and was struck again by how beautiful this movie is, with its hand-painted scenes. The same beauty is captured in more modern anime films, such as Spirited Away, which blends exquisite hand-painted scenes and characters with computer animation.

Modern television shows such as Battlestar Galactica and Firefly focus more on the characters than the effects. You can go an entire episode of either without seeing much more than a ship hanging in space. I think that’s what makes these shows stand out: the development of the characters and plots, rather than a reliance on effects.

Categories
Just Shelley

Guido the Librarian

Recovered from the Wayback Machine.

I had checked out a book in late July on art and photography. It’s an older book, recommended by one of you, and not necessarily a quick read. Its history showed I was the first person to have checked it out in several years.

As happens sometimes, I read a bit and then got caught up in the book I’m writing, as well as proofs and site design. I realized I wouldn’t have time for it until after I finished my book, so I put it down on the gossip bench by the door to take back to the library. I promptly forgot it, until I got a notice a couple of weeks ago from the library. It was ten days overdue, due back August 28.

I immediately renewed the book (until October 10th), which I usually do with overdues, and then dropped it off at the library a few days later. Being distracted, I dropped it off at the county library, not the city public library. I realized my error the next day, but the libraries have an inter-department book return plan, so I wasn’t worried. Not until today, that is, when I received another overdue notice, this one threatening to send me to a collection agency to recover the cost of the book if I didn’t return it.

I’m not sure which took me aback more: that the county library still hadn’t returned the book, or that I’d received what amounts to a collection agency threat from my local library.

I called and found out the book was returned, and this notice must have gone out the same day. I was also informed that I now owed $.85 (that’s 85 cents) in overdue fines. I refrained from asking if they were going to send Guido over to break my kneecaps and hung up.

I can understand libraries wanting their books returned on time; I try to do so, and am happy to pay whatever fines I owe when I’m late. But this: a collection agency? On the second notice? Less than three weeks after it’s overdue? Over a $.85 fine?

I checked around and found this weblog post from another person facing the same ‘threat’ from her local library, for what sounds like a book she never even checked out.

This may make sense economically, and from a business perspective, but I feel oddly betrayed. The friendly neighborhood library I once knew from long ago is gone–replaced by this efficient machine more interested in economics than community ties. Oh, before you ask, the public library has a reserve fund of over twenty million dollars, a reserve that gains about a million a year. The people in St. Louis take good care of their library. Too bad the same can’t be said for how the library takes care of the people in St. Louis.

Needless to say, I will drop off payment for the fine next week, when I also turn in my card. As they’ve demonstrated, they don’t want people checking out their books: books are meant to be shelved.

Categories
Technology

Ajax Myth Busting

Recovered from the Wayback Machine.

Frontforge has a set of Ajax evaluation criteria that look, at first glance, quite good until you drill down into each item. Since the page is built in such a way that I can’t copy text with the right mouse button in order to quote it (and it breaks in Safari, to boot), you’ll have to read each section first before reading my notes.

Taking each in turn:

#1: back button, history, and bookmarks

If I create a slideshow using Ajax, according to this list I must make sure that each slide is reflected in the history and accessible via the browser’s back button, as well as being bookmarkable. However, one of the advantages of slideshows is that I can go through a set of photos and then hit the back button once to leave the page and return to where I started.

There is no contract with users that history, or the back button, or any such thing is ‘required’ in order for a web application to be successful. That’s all made up. What is required is that one understands how people might use the application, and then responds accordingly. So, for the slideshow, rather than trying to force the browser into managing an arbitrarily constructed ‘history’, what one needs to do is understand what the users might want.

In the case of the slideshow, being able to access a specific show ‘page’ using a URL that can either be bookmarked or, more likely, linked could be the critical item. In this case, creating a ‘permalink’ for each page, and then assigning parameters to the URL, such as the following, would enable a person to return to one specific page.

http://somecompany.com/?page=three

Then the JavaScript framework or whatever can parse the URL and load the proper page. As for having to maintain history, I would actually prefer to see history locked behind signed and sealed scripts and not easily accessible by JavaScript developers. This, then, would also include screwing around with the back button.
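
As a minimal sketch of what I mean (the “page” parameter name and the slide ids are my own inventions, not anything from the criteria page), the slideshow’s script can read the query string when the page loads and jump to the requested slide:

// A minimal sketch of permalink support for a slideshow. The "page"
// parameter and the "slide-" id prefix are assumptions for illustration.
function showSlide(name) {
  var divs = document.getElementsByTagName('div');
  for (var i = 0; i < divs.length; i++) {
    if (divs[i].id && divs[i].id.indexOf('slide-') == 0) {
      divs[i].style.display = (divs[i].id == 'slide-' + name) ? 'block' : 'none';
    }
  }
}

window.onload = function () {
  // Parse ?page=three out of the URL; default to the first slide.
  var match = /[?&]page=([^&]+)/.exec(window.location.search);
  showSlide(match ? decodeURIComponent(match[1]) : 'one');
};

No history manipulation required: the URL itself is the bookmark.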

#2: standard and custom controls

In this one, the author states that an Ajax framework should have a good set of built-in controls to start a project and then a means to easily extend these for custom behavior. This one I do agree with: a JavaScript library (any class library, really) should include one layer that defines a basic set of behaviors, which is then used by other libraries to build more sophisticated components.

Except that this isn’t what the author is saying. What he or she is saying is that a ‘basic’ behavior is a completely packaged piece of functionality, such as a drag-and-drop or a visual fade, while the more complex behaviors would be something such as a complete mouseover-driven slideshow or some such thing. Of course, when you see a complex behavior listed as something ‘basic’, it makes sense that customization becomes increasingly difficult. That’s more a matter of incorrect abstraction than of making use of something such as JavaScript’s prototype property.

In my opinion, simpler components and good documentation are worth any number of increasingly sophisticated objects.

#3: single-page interface

Being able to edit a page’s contents in-page, such as what’s implemented at Flickr, is what the author is talking about here. I gather there’s now a formal name for this: single-page interface. That’s like Jesse James Garrett’s use of Ajax to describe a set of already existing technologies–speak with enough authority, and anyone can name anything.

I hereby call this a weblog. Oh, wait a sec…

But the premise of the author’s assumption goes far beyond this. According to the writing, a page should be split into components, each of which is then loaded at runtime and formed into a cohesive whole through JavaScript. So a five-step form starts by loading the first step, and the additional steps are loaded while the person is working on the first.

My response is: why? Why do I need to use JavaScript to access components that we already know are going to be loaded? I can see using something such as the accordion effect (collapsed sections that hide specific pieces of the form until expanded, such as that being used with WordPress), but it makes no sense to load these individually, starting with the first part and then using Ajax calls to the server to load the rest.
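
For what it’s worth, an accordion of that sort needs no server round trips at all; a click handler that toggles a section’s display is enough. A sketch, with the heading tag and class name my own inventions:

// A bare-bones accordion sketch: any h3 with the class "toggle"
// shows or hides the element that immediately follows it.
window.onload = function () {
  var headings = document.getElementsByTagName('h3');
  for (var i = 0; i < headings.length; i++) {
    if (headings[i].className != 'toggle') continue;
    headings[i].onclick = function () {
      var section = this.nextSibling;
      // Skip any whitespace text nodes between the heading and the section.
      while (section && section.nodeType != 1) section = section.nextSibling;
      if (section) {
        section.style.display = (section.style.display == 'none') ? '' : 'none';
      }
    };
  }
};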

If what’s loaded is altered by what the person has entered, such as providing a list of cities when a state is picked, then it makes sense. But if it’s static content, or content known ahead of time, we should let the web server do its job, uncomplicated by client-side overriding of basic behavior.

If time is the issue (the author doesn’t really make a good argument for this, but let’s assume time is the issue), then perhaps if we didn’t build such over-engineered Ajax libraries, page loading times wouldn’t be such a problem.

Hiding and displaying components of the form and allowing in-page editing does make sense. But this loading by piecemeal using JavaScript? No.

#4: productivity and maintainability

This is the one that drove me to write this post. This was the ‘money’ item for me.

According to the author, I gather that it’s difficult to find qualified Ajax developers, and it’s an unwieldy burden to get existing developers (page or server-side) up to speed to be able to use all of this Ajaxian goodness. There’s even an OpenAjax movement afoot to “promote Ajax interoperability, standards and education in order to accelerate the global adoption of Ajax.” (And it doesn’t hurt one’s business to put one’s membership in this effort on the About page, either. Point of fact to the effort: Ajax does not ‘stand’ for anything–it’s not an acronym. It’s just a simple term that Garrett picked.)

Instead of forcing JavaScript on folks, a better approach (according to the author) is to use existing CSS and HTML elements. How? A couple of different ways.

One way is for the script to take all elements of a certain type and add functionality to each, such as adding validation to each form element (triggered by some behavior, such as loss of focus when the person keys away from the field). This isn’t a bad approach at all, though there is usually some JavaScript involved, as the person has to attach some information about how a field is to be validated. Still, it’s nice not to have to worry about attaching event handlers, capturing the events, processing the data, and so on.
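
A sketch of that one-pass wiring, with the validation rule kept deliberately trivial (everything here beyond the standard DOM calls is an assumption for illustration):

// Sketch: attach a simple "not empty" check to every text input,
// triggered when the person keys or clicks away from the field.
window.onload = function () {
  var inputs = document.getElementsByTagName('input');
  for (var i = 0; i < inputs.length; i++) {
    if (inputs[i].type != 'text') continue;
    inputs[i].onblur = function () {
      // A stand-in for whatever validation the application actually needs.
      this.style.border = (this.value == '') ? '2px solid red' : '';
    };
  }
};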

However, with ‘declarative’ Ajax, this additional information isn’t provided through JavaScript. Instead of using script to provide information to the program (no matter how simply this could be done), the Ajax framework developers have ‘extended’ HTML elements to include new attributes. With this, the poor Ajax-deprived page designer and developer need never get their fingers dirty by actually touching JavaScript.

Sounds good, except for one problem: these methods ‘break’ the page, leaving invalid HTML in their wake.

You don’t have to take my word for it. This page argues that the “right way to do Ajax is Declaratively”. The page validates, but then it’s not using any of the Ajax frameworks. One of the frameworks mentioned, Hijax, also validates, because it uses standard HTML elements (list items) and the class attribute (standard for all HTML elements) to drive its effects. The other libraries, such as hInclude and FormFaces, do not validate. Why not? Because they’re using non-standard attributes on the HTML elements.
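
The difference is easy to see in miniature. The class attribute can carry the same hint an invented attribute would, and the page still validates (the “required” class name below is my own; I’m not quoting any of these libraries):

// The markup stays valid because class is standard on every element:
//
//   <input type="text" name="email" class="required" />      valid
//   <input type="text" name="email" validate="notempty" />   invalid (made-up attribute)
//
// The script reads its hint from the class instead of an invented attribute.
window.onload = function () {
  var inputs = document.getElementsByTagName('input');
  for (var i = 0; i < inputs.length; i++) {
    if (inputs[i].className.indexOf('required') == -1) continue;
    inputs[i].onblur = function () {
      this.style.border = (this.value == '') ? '2px solid red' : '';
    };
  }
};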

We’ve spent over a decade finally getting the cruft out of pages. I’m not going to be sanguine about blithely putting more cruft back into the pages because we want a ‘cool’ effect. (For more on this, I recommend Roger Johansson’s Why Standards Still Matter.)

Leaving aside my concerns about how much we’re overloading the ‘class’ attribute, deliberately introducing invalid HTML is not a mark of progress. Setting aside, too, how this could impact user agents such as text-to-speech browsers, this approach seems to assume that everyone who wants to incorporate Ajax, outside of a select few intelligent enough to understand its complexities, must be protected from the ‘working’ bits. That’s arrogant in the extreme, because there’s nothing overly complex at all about Ajax. It’s actually based on some of the simplest development technologies there are. Well, other than when such are obfuscated in order to puff up the concept.

Moving on to the rest of the Ajax must-haves…

#5: client/server

Well, this one is like being agin sin. Client/server is good! OK, we got that one.

There’s little to quibble with on this item. The front-end work is separated from the back; cross-browser differences are hidden; communication with the server uses XML (though I think this one is negotiable). I am concerned with a growing interest in making all aspects of a web application into Ajax-initiated function calls, rather than using server-side templates and the like; other than that, I can agree with this point.

#6: XPath targeting

My god, how have we managed to get by until now, having to use JavaScript functions such as document.getElementById? If I’m reading this one correctly, it would seem that Ajax frameworks have to take the DOM and convert it into XPath notation before we can even begin to work with a web page.

There is something to be said for being able to access a page element using XPath notation, rather than having to walk the DOM’s childNodes, but the examples given don’t demonstrate the power of such an operation, and I’m not sure this is as essential as the author makes it out to be.
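
For the record, Mozilla-based browsers already expose XPath directly through the DOM Level 3 document.evaluate method (Internet Explorer does not), so this hardly requires a framework’s wholesale conversion of the DOM. A quick sketch of both routes, with the element ids my own:

// Sketch: find the third list item inside the list with id "menu",
// first via XPath where it's supported, then via plain DOM calls.
var item;
if (document.evaluate) {
  item = document.evaluate('//ul[@id="menu"]/li[3]', document, null,
      XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
} else {
  // The equivalent walk using the standard DOM: hardly a hardship.
  item = document.getElementById('menu').getElementsByTagName('li')[2];
}
if (item) item.style.fontWeight = 'bold';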

#7: comprehensive event model

Luckily, I found I can copy; I just have to use the Edit menu’s copy, rather than the right mouse button. The author writes:

Next-generation Ajax frameworks finally make it possible to put the presentation layer into the browser client where it belongs. This opens up a huge business opportunity to migrate strategic desktop applications to the web where they can deliver value to partners, customers, and remote employees. To do this, it must be possible to define mouse actions and key combinations that trigger events.

Next-generation Ajax frameworks may allow this, but I don’t see a mad dash to put desktop applications on the web. This isn’t Oz, and clicking our ruby slippers together, saying, “There’s no place like the web…there’s no place like the web,” isn’t going to make this happen.

I do agree with having a standard set of event handlers, and with making sure that each library chains its functionality onto an event so that it doesn’t bump another library’s (or the web page developer’s), but I don’t think this means we can then move Adobe Photoshop onto the web. I don’t think this is something that users are demanding.

As for making a set of ‘custom’ events, such as NextMonth for calendars, this totally abrogates much of the concept of abstraction by taking a general event (such as a mouse click) and blurring the lines between it and a business event (NextMonth). This not only adds to the complexity of the frameworks, it adds to the size of the code, as well as to the points of breakage any time a new browser is released (or a new library is included in the application).

What is really needed is a basic understanding of how event handlers are attached to objects, ensuring that no one library impacts adversely on another library or on the web developer’s efforts. That, and a simplified way of attaching events to objects, is nice. More than that is an example of a developer with too much time on their hands.
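
A sketch of that simplified, non-clobbering attachment, using the standard listener model (which chains handlers by design) with a fallback for the Internet Explorer model of the day:

// Sketch: attach a handler without bumping handlers that another
// library (or the page's developer) already registered on the object.
function listen(obj, type, handler) {
  if (obj.addEventListener) {
    obj.addEventListener(type, handler, false); // standard model: handlers accumulate
  } else if (obj.attachEvent) {
    obj.attachEvent('on' + type, handler);      // older Internet Explorer model
  }
}

// Two libraries can now share the same event without conflict.
listen(window, 'load', function () { alert('library one is ready'); });
listen(window, 'load', function () { alert('library two is ready'); });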

#8: state and observer pattern

It always amazes me to see even the simplest concepts obscured whenever the term ‘Ajax’ is involved.

This last item has some good points, but they’re heavily obscured by design pattern terminology such as “Observer pattern” and so on. Yes, I know that the use of such terms is to make communication easier, so that when a person writes “Observer pattern” we understand what they mean without them going into details. But this also means that the reader has to stop reading, go to the pattern page, read the pattern, gain understanding, return, and then try to decipher whether what the person is writing is what was meant by the pattern, and so on.

I understand what is meant by ‘observer pattern’, and I can agree with the idea: wouldn’t it be nice to attach a behavior (register) to a form button that allows it to enable itself when certain form fields are completed? The whole thing is based on storing information about the ‘state’ of the object, which, as the author of these Ajax framework must-haves notes, isn’t part of basic JavaScript behavior.

Except that it is. Within the same page, information about the state of an element is stored all the time, and I fail to see why there needs to be a formalized mechanism in place to facilitate this. As for storing state between pages, mechanisms are already in place: cookies, server-side storage, and so on.
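
A cookie, for instance, is a one-liner in each direction; no framework-level ‘state mechanism’ required (the names here are purely illustrative):

// Sketch: persist a bit of state between pages with a plain cookie.
function setState(name, value) {
  document.cookie = name + '=' + encodeURIComponent(value) + '; path=/';
}
function getState(name) {
  var match = new RegExp('(?:^|; )' + name + '=([^;]*)').exec(document.cookie);
  return match ? decodeURIComponent(match[1]) : null;
}

setState('step', '3');   // remember which step of the form we're on
getState('step');        // "3", on this page or the next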

As for observer/registration: is the idea that all blur events triggered in the page are funneled through an event handler for the form’s submit button, which checks to see if it can now enable itself? Or only blur events for the target fields? That’s doable, but it seems overly process-intensive. I would think a better approach would be to leave the form’s submit button enabled, and then, when the form is submitted, use Ajax (JavaScript) to test whether the required fields are filled in, directing the person’s attention to what’s missing. Event handling is then simplified, and the process is invoked only when the form is submitted, not when each field is filled in.
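
A sketch of that submit-time alternative (the form id and the “required” class are, again, my own inventions):

// Sketch: validate required fields once, at submit time, instead of
// funneling every blur event through an observer for the submit button.
window.onload = function () {
  var form = document.getElementById('signup'); // hypothetical form id
  if (!form) return;
  form.onsubmit = function () {
    var inputs = this.getElementsByTagName('input');
    for (var i = 0; i < inputs.length; i++) {
      if (inputs[i].className.indexOf('required') != -1 && inputs[i].value == '') {
        alert('Please fill in the ' + inputs[i].name + ' field.');
        inputs[i].focus();
        return false; // block submission until the field is filled in
      }
    }
    return true;
  };
};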

In other words, what may work well in an application built with VB.NET may not work well, or ever work well, within a browser, and what we should not do is coerce functionality from one development environment onto another.

That’s the key for many of these ‘essential’ items for an Ajax framework: they add unnecessarily to the complexity of the libraries, as well as to their size, the number of points where a break can occur, and the likelihood of failure when more than one library is used or a new browser version is released.

In web development, there’s one important rule: less is more.

Thanks to Ajaxian for the pointer to this list. In the interests of disclosure, note that I am writing a book, Adding Ajax, in which I demonstrate that any web application developer familiar with basic JavaScript can add Ajax benefits and effects to their existing web applications. As such, I’ve violated the premise behind #4 by assuming that just about anyone can become an ‘Ajax’ developer.