Shopping Carts

Recently someone (I’ll call him Joe) sent me an email asking about maintaining shopping cart items using client-side cookies.

A shopping cart is basically a program that maintains a list of items that a person has ordered, a list that can be viewed and reviewed and usually modified.

Joe had run into a problem: browsers limit both the size and number of cookies that can be set from a Web application, and this limited the number of items that could be added to his company’s online store’s shopping carts. He asked whether there was a client-side JavaScript technique he could use that would update a file on the server instead of on the client, so that customers could add an unlimited number of items to their carts.

Instead of trying to maintain the company’s shopping cart using cookies, I recommended that Joe check out his Web server’s capabilities, and see what type of server-side scripting and applications his server supported. Then he could use this technology to maintain the shopping cart items. I also recommended that he limit the use of the client-side cookie to setting some form of session ID so that the connection between the cart items and the shopper was maintained.
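If all the client-side cookie needs to hold is a session ID, that piece is tiny. Here is a minimal sketch (the function name is mine, and in practice the ID itself would be generated on the server); the cart contents then live in a server-side file or database keyed by that ID:

// Sketch: tie the browser to a server-side cart with one small cookie.
// The session ID should be generated by the server, not the client.
function set_session_id(session_id) {
   // add an "expires=" clause if the cart should outlive the browser session
   document.cookie = "session_id=" + escape(session_id);
}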

Shopping Cart Technology

Joe’s email did get me to thinking about how online stores use different techniques to maintain their shopping carts.

For instance, all of the stores I shop at, except one, use some form of a client-side cookie, but each store uses these cookies differently. Additionally, the stores use server-side techniques to support online shopping, though this support can differ considerably between the stores.

Client-side cookies were originally defined by Netscape for use in Navigator, though most browsers support cookies. Cookies are small bits of information stored at the client that can be maintained for the length of a session, a day, a week, or even longer.

The use of client-side cookies is rigidly constrained in order to prevent security violations. You can turn cookies off in your browser, but be aware that cookies do not violate the security of your machine, and that without them your online shopping will be severely curtailed.

This YASD article does a little technology snooping of four online shopping sites to find out how each uses a combination of server-side and client-side processing to maintain its carts.

Covered in this article are the shopping cart techniques used at the following online stores:

  • Amazon.com
  • Beyond.com
  • Catalogcity.com
  • Reel.com

Shop Til You Drop

To understand shopping cart maintenance, it’s important to understand customer shopping habits, so we’ll start this article by looking at some online shopping habits.

First, online shopping has grown enormously in the last few years. Once folks found out that it was safer to send your credit card number over the Net at a secure site than to give it over a wireless phone, much of the hesitation about online shopping vanished.

What are the types of things that people buy online? Books, CDs, and videos are popular, but so are kitchen utensils, computer hardware and software, photography equipment and film, food gifts, bathroom towels, and even irons.

People shop online because of necessity, convenience, and cost. We purchase books, CDs, and videos online because the online stores have a much larger selection than any local store could possibly have. We know that when we are looking for a book, even an out of print book, we will most likely be able to get the book from one of the online bookstores such as Amazon.

Some businesses we shop at, such as Dell, have no physical store location. This type of business serves its customers through mail or phone orders only. Many of us prefer to use online shopping for these types of stores rather than having to call someone up and go through a long list of items, or manually fill out an order form while hoping we put down the right item number. It is a whole lot simpler to look at an item and click on an image that says something like “Add to Shopping Cart”. An added benefit to online shopping is that we can review our order before it is sent, and can get a hard copy printout of the order for our records.

Normally, most of us only shop for a few items at a time, but the number of items can become large, especially with online grocery stores, a rising phenomenon. However, it isn’t unusual for us to add some items to our shopping cart one day, a couple more items another day, and so on, until we’re ready to actually place the order. At times we may even forget we have items in a shopping cart, until we add another item to the cart and the previous items show up.

We also change our mind at times, and may need to remove items from the shopping cart, or change the quantity of an item ordered. It’s also handy to have a running total for the order so we can make at least an attempt to stay within our budgets. If the shipping charges are also shown, that’s an added bonus.

Many of us may have more than one computer and may start a shopping session with a laptop and finish it at a desktop computer, though as you will see later in this article, this sometimes isn’t that easy. In addition, the use of both Netscape Navigator and Microsoft’s Internet Explorer on the same machine isn’t all that unusual for heavy Net users, and we may start a shopping cart with one browser and add to the cart from another browser.

Pulling requirements from these patterns of use, we come up with the following:

  • An item can be added to a shopping cart with the touch of a button
  • Shopping cart items need to persist for more than one shopping session
  • Some indication that there are items in the shopping cart should show, at least on the home page for the site
  • The store needs to provide a means to modify the shopping cart items, or to remove an item or all items
  • A running total needs to be maintained each time the shopping cart contents are reviewed
  • Showing standard shipping charges and other costs when the shopping cart is reviewed is an added bonus
  • The shopping cart needs to follow the client
  • Stores need to provide the ability to review an order before it is placed
  • Stores also need to provide the ability to print out the contents of the shopping cart
  • Shopping carts should support an indefinite number of items, or the number of items should be limited by company policy, not Web technology limitations.

A pretty good list of requirements. Now, how do each of the stores measure up?

To determine when cookies are used at each of the sites evaluated, I set my browsers to prompt me whenever an online store wants to set a cookie. Using this approach I can see what kind of cookies the store uses, and get an idea of each cookie’s purpose.

Amazon.com

Probably the undisputed king of online bookstores is Amazon.com. This company began as a pure Net-based business, and has shown the world that online commerce not only works, it works very well, thank you.

Amazon has one of the better store interfaces, and some of the best account and order maintenance setups, but does it meet all of our shopping cart requirements? Let’s check it out.

First, all items that Amazon sells can be added to the existing shopping cart with the touch of a button, even items that are on order but not yet in stock. In addition, the shopping cart contents will persist even if you leave the site and return at a later time. In fact, Amazon tells you that items will remain in the shopping cart for 90 days, a feature I found to be very nice.

Bonus technique: let people know how long items will remain in the shopping cart. The only surprises to pull on a customer are letting them know that an item is on sale, or that they are the millionth customer and have won something. Anything else will lose you business.

Amazon also provides a feature to save the item for purchasing at a later time. This removes the item from the cart, but still keeps the item on a list for later purchase.

The shopping cart can be reviewed at any time, and there is an icon on every page that allows you easy access to the shopping cart. You can also modify the cart contents by changing the quantity of an item you’re ordering, or removing an item altogether.

Amazon makes use of standard HTML technology, so the shopping cart review page should print out fairly well. Unfortunately, the shopping cart does not display shipping charges and does not display a running total for the items. However, Amazon does show a total, including shipping, that you can review before you place the order. This page can also be printed out.

So far so good. Amazon has met most of our requirements, but the real test of Amazon’s supremacy in shopping cart technology is whether the cart can follow the customer. Unfortunately, the company does not support this capability.

When you access Amazon from a browser at a specific machine for the first time, Amazon sets an ID that is used to track your shopping cart items. Access Amazon from the same browser and the same machine, and you will get the same shopping cart items. However, if you access Amazon from another machine or even another browser, you will not get access to these shopping cart items.

Is it important to maintain shopping cart persistence from browser to browser, machine to machine? You bet it is.

Like other folks involved with Web development and authoring, I use both Navigator and IE. In addition, some sites are geared more towards one of these browsers, so most folks who spend a lot of time on the Net have both browsers.

There are times when I am sure I have placed an item in the shopping cart, only to find that I did, but with a different browser or from a different machine. This happens more often than I would like, and is an annoyance every time.

Now the online stores have to ask themselves: are people like myself part of a customer base they want to attract? Think of this: who is more likely to be purchasing items online than folks who spend a large amount of their time online? And who is more likely to use more than one machine and more than one browser? Folks who spend a lot of time online.

To summarize, Amazon uses client-side cookies to establish a persistent ID between the customer and the shopping cart. The company also uses this ID to establish a connection from the customer to the customer’s account information. The shopping cart items, however, are maintained on the server, persist for 90 days, and there is no limit to the number of items that can be ordered at Amazon, at least as far as I could see. Where Amazon does not meet the requirements is by not providing a running total on the shopping cart review page, and by not providing a shopping cart that moves with the customer.

Based on the requirements met by Amazon, I give them a score of 8 out of 10 for their support of shopping cart techniques.

Beyond.com

Beyond.com sells computer software and hardware and is a Net-only based company.

Beyond.com maintains a client ID in client-side cookies, which is used to track shopping cart contents for the session only. Beyond.com does not persist the shopping cart contents outside of a specific browser session. Once you close the browser, the current shopping cart contents are gone.

In addition, it does look as if Beyond.com maintains the shopping cart totally within one client-side cookie, tagged with the name “shopcart”.
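I can only guess at the exact layout Beyond.com uses, but packing an entire cart into a single cookie might look something like the following sketch (the function name and field layout are invented for illustration):

// Sketch: store the whole cart in one client-side cookie named
// "shopcart". Each item is packed as id:qty:price, with items
// separated by "|"; the actual fields Beyond.com stores are a guess.
function save_cart(items) {
   var entries = new Array();
   for (var i = 0; i < items.length; i++)
      entries[i] = items[i].id + ":" + items[i].qty + ":" + items[i].price;
   // escape() encodes characters that aren't legal in a cookie value
   document.cookie = "shopcart=" + escape(entries.join("|"));
}

Reading the cart back is then a matter of retrieving the cookie and splitting the value on “|” and “:” again.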

By maintaining the shopping cart on the client, Beyond.com has chosen one of the simplest approaches to maintain a shopping cart, and simple can be good. There is little or no burden on the server other than accessing the original item that is added to the cart. There is also less maintenance to this type of system, primarily because the Web developers and administrators do not have to worry about issues of storage of information on the server, or cleaning up shopping carts that become orphaned somehow. Additionally, Beyond.com is taking advantage of a client-side storage technique that is safe and simple to use.

However, there is a limitation to this approach in that a cookie is limited to a size of 4 kilobytes. It may seem that 4K is more than large enough to support a cart, but when you store information for each item, such as cart item number, product name, version, store identification number, quantity, and price, you can reach the upper limit more quickly than you would think; if each entry takes roughly 80 to 100 bytes once encoded, a single 4K cookie tops out at around 40 to 50 items. Additionally, a limit is a limit, and you have to ask yourself if you really want to limit how many items a person can add to their shopping cart.

Most online stores could probably get away with shopping carts that limit the number of items. After all, it might be a bit unusual to purchase 40 or 50 items from a software company at any one time.

If a store’s customers tend to purchase only a few items at a time, then it might not be cost effective to provide shopping cart technology that provides for virtually unlimited items.

Beyond.com also provides a quick look at the shopping basket from every page of the site. This quick view provides the first few letters of the cart item, the quantity ordered, and a running total for the cart. As much as I appreciate having this information, I found that I would have preferred having just the quantity of items in the shopping cart and the running total: listing all of the items actually became a distraction when I had more than a few.

Beyond.com uses standard HTML and the shopping cart page did print out using the browser’s print capability. In addition, you can review the order, including the shipping charges, before the order is fully placed.

To summarize, I liked Beyond.com’s support for shopping cart status display on the other Web pages. I also liked Beyond.com’s running total. The biggest negative to Beyond.com’s shopping cart was the lack of persistence outside of the browser session. I may not order more than 5 or 10 items at a time, but it isn’t unusual for me to add a couple of items at one time, close down the browser and return at a later time to add more items and finalize the order. In addition, it isn’t impossible for people to accidentally close down their browsers, which means they lose all of the items from their cart and have to start over. Based on the lack of persistence, I would have to give Beyond.com a 6 in shopping cart technology.

Catalogcity.com

CatalogCity is an interesting online business that presents the contents of catalogs from several mail order firms, allowing you to shop for everything from a new jacket to kitchen knives. Once you have placed your order for all of the items you’re interested in, CatalogCity then submits the orders for the individual items to the specific catalog company.

Of all the online shops I have seen, CatalogCity is one of the most clever. It provides both goods and a service, but without the hassle of maintaining inventories and supporting warehouses. I am sure that CatalogCity charges the listed catalog companies a fee for its services, but it is most likely more profitable for these companies not to hassle with online ecommerce issues. Even those catalog companies that have their own sites gain access through CatalogCity to people who are looking to shop, but don’t necessarily know the catalog company’s name or Web site URL.

I do like to see effective and innovative uses of Web commerce. If I have a problem with the site, it is that not all of the catalog companies support online ordering through CatalogCity. For those that don’t, you can review the catalog and use the phone number provided to place your order, but it’s just not the same as one-button shopping.

CatalogCity uses cookies to set a customer ID the first time you access the site. However, after that, all information about the shopping cart is stored on the server. There is no indication on the pages that you have shopping cart items, but you can access the shopping cart from a well-placed icon on each site page.

The shopping cart page lists all of the items, provides the ability to change or remove an item, and provides a running total — sans shipping charges. It also provides a hypertext link from the item to the page where the item was originally listed, so you can review the item before making a final purchase.

The technology that CatalogCity uses for their shopping cart is fairly standard, so the cart page should print out easily. In addition, the company does provide the customer the ability to review the order, including shipping charges, before the order is placed.

The CatalogCity shopping cart is the most persistent of all of the shopping carts that I have seen. First, if you access the site but don’t set up an account, the cart will persist from browser session to browser session, but only with the same browser and machine. However, if you create an account with CatalogCity and sign in each time you access the site, the shopping cart will follow you from one browser to another, and from one machine to another. In fact, of all the sites I reviewed for this article, CatalogCity is the only one that provided this functionality.

To summarize the CatalogCity shopping cart technology, the company has provided the best support for shopping cart persistence of all the sites visited. In addition, the company provides easy access to the cart and a running total on the shopping cart page. CatalogCity also gives you a chance to review and modify your order, as well as review the shipping charges, before the order is placed. About the only negative aspect I found with this site’s shopping cart technology is that the site does not indicate on its first page that the shopping cart contains items. If CatalogCity had provided this, I would have given the site a score of 10, but I’ll have to give it a score of 9.

Reel.com

Reel.com is an online video store that sells new and used VHS and DVD videos. It has an excellent selection of movies and a nicely organized site.

Reel.com uses cookies to set a user ID when you first access the site. When you access a specific item, the site uses ASP pages, and ASP (Microsoft’s server-side technology) sets a cookie with a session ID when the first ASP page is accessed. After that, no other cookies are set. All shopping cart items are maintained on the server.

ASP, or Active Server Pages, was the technology that seemed to be most used at the different online stores. ASP technology originated with the release of Microsoft’s Internet Information Server (IIS), but has since been ported to other Web servers, and even to Unix, by a company called ChiliSoft.

ASP provides for both server-side scripting as well as server-side programming with ASP components. In addition, ASP provides full support for data access through Microsoft’s ActiveX Data Objects (ADO) technology.

One cookie that is set with ASP is the session ID. When you access an ASP site for the first time during a browser session, IIS tries to set a session ID cookie to maintain a connection between your browser and the Web server. Without this, it is very difficult to maintain information about the session, such as shopping cart contents, from Web page to Web page.
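As a rough illustration of the server side of this arrangement, an ASP page written in JScript might keep cart items in the Session object along these lines (the page logic, the “add” parameter, and the variable names are my own invention, not Reel.com’s actual code):

<%@ LANGUAGE="JScript" %>
<%
// Sketch: keep the cart in the ASP Session object. IIS sets the
// session ID cookie automatically the first time the session is used,
// so no explicit cookie code is needed here.
var cart = Session("cart");
if (cart == null)
   cart = "";                        // no items yet
if (Request.QueryString("add").Count > 0) {
   var item = String(Request.QueryString("add"));
   cart = (cart == "") ? item : cart + ";" + item;
   Session("cart") = cart;           // the items themselves stay on the server
}
Response.Write("Items in cart: " + cart);
%>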

Reel.com does not provide a running total for the purchases on the shopping cart page, and does not provide a visual indicator that items are in the shopping cart from any of the pages, except the shopping cart page. The company does provide easy text-based access to the shopping cart from each page and does allow you to change the quantity of an item ordered, as well as remove an item from the cart.

Reel.com provides shipping and full order information for the customer to review and modify before the order is placed, and the order page, as well as the shopping cart, can be printed out.

Reel.com does not provide persistence beyond the session. Once you close the browser, the shopping cart is gone.

To summarize, Reel.com did not score very high, meeting few of the requirements for a shopping cart. It didn’t provide a visual cue about shopping cart contents, at least on the first page of the site, nor did it provide a running total on the shopping cart page. The biggest negative, though, was that the site did not maintain the shopping cart persistently outside of the browser session. Reel.com did provide the ability to review and modify the order before the order was placed, but based on the requirements met, I would have to give Reel.com only a 4 for shopping cart technology.

Summary

Four different online store sites, each using different techniques to support the site’s shopping cart.

All of the sites used cookies to establish a connection between the browser session and the shopping cart. In addition, each site provided shopping cart pages that could be printed out, and provided the ability for the customer to review and modify, or cancel, the order before it was placed.

The highest scorer in my evaluation is CatalogCity, primarily because of their support for persistence across different browsers and machines. This was followed by Amazon, which provided for browser and machine specific persistence.

Both Reel.com and Beyond.com dropped drastically in points because of their lack of shopping cart persistence in any form. However, Beyond.com did provide feedback as to shopping cart contents, something that CatalogCity and Amazon did not. Beyond.com may want to consider dropping the line-item display of the shopping cart, as this can be distracting. They were also the only online store maintaining their shopping cart in client-side cookies. While this approach has the advantage of being the quickest technique for displaying the shopping cart contents when the customer wants to review them, and is the simplest technique to use, I still don’t see this approach as the way to go for online shopping.

If we could take CatalogCity’s persistence, add a running total with estimated shipping charges, and provide feedback on the cart contents on at least the home page of the site, we would have, in my opinion, the perfect shopping cart.

The next article in the Technology Snoop series will be on order processing and order status maintenance. In it you’ll see that Amazon shines. You’ll also read about how not to win customers, and how to lose the ones you have, when I add some new online stores to the review.

Adding Persistence to DHTML Effects

Originally published at Netscape Enterprise Developer, now archived at the Wayback Machine

Dynamic HTML (DHTML), implemented in Netscape Navigator 4.x and Microsoft Internet Explorer 4.x, gives the Web page visitor the ability to alter the position, format, or visibility of an HTML element. However, the effects created with DHTML are transitory: the next time the page is refreshed, the current DHTML state is not maintained, and the page opens showing the same content layout as when it was first loaded. Sometimes this isn’t a problem, but other times this is irritating to the reader. This article covers how to add persistence to a DHTML page. The examples should work with all forms of Netscape Navigator 4.x and Internet Explorer 4.x, but have only been tested with IE 4.x and Navigator 4.04 in Windows 95.

With DHTML, developers can provide functionality that hides or shows whichever layer the Web page reader wants to view. Developers can also add functionality that lets readers move content. The problem is that DHTML effects are not session persistent, which means they aren’t maintained between page reloads. Following a link to another site and returning to the page can trigger a reload, which wipes out the effect and annoys the user, especially if he or she has spent considerable effort achieving the current DHTML state.

For example, you can use DHTML to let your reader position your page contents for maximum visibility in her browser and get the page to look just right. If she decides to leave your site to check Yahoo’s news for a few minutes and then comes back, her settings will have been erased.

So, what’s the solution to these problems? It’s relatively simple — add document persistence using Netscape-style cookies.

Using cookies 
Despite the name, you won’t find Netscape-style cookies at your local bakery. Cookies are bits of information in the form of name-value pairs that are stored on the client’s machine for a set period of time or until the browser is closed. For security, cookies are created and accessed using a specific syntax, are limited to 300 cookies total within the cookie database at 4K per cookie, and are limited to 20 cookies per domain (the URL where the cookie was set).

Cookie syntax consists of a string with URL characters encoded, and may include an expiration date in the format:

Wdy, DD-Mon-YY HH:MM:SS GMT

Netscape has a complete writeup on cookies (see our Resources section at the end), including functions that can be copied and used to set and get cookies. I created modified versions of these functions to support my DHTML effects. As I don’t want to add to overly burdened cookie files, I don’t set the expiration date, which means the cookie does not get stored in the persistent cookie database or file. This also means that the DHTML effect only lasts for the browser session. However, this fits my needs of maintaining a persistent DHTML state in those cases where the reader moves to another site or hits the reload button.
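If you did want a value to outlive the browser session, a variant of the cookie-setting function (shown in the next section) could append an expires clause using the date format given above. This persistent version is a sketch only and isn’t used in this article’s sample pages:

// Sketch: a variant that persists the cookie past the browser session
// by adding an expiration date the given number of days from now.
function set_persistent_cookie(name, value, days) {
   var expires = new Date();
   expires.setTime(expires.getTime() + days * 24 * 60 * 60 * 1000);
   document.cookie = name + "=" + escape(value) +
      "; expires=" + expires.toGMTString();
}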

DHTML state cookie functions
I created a JavaScript source code file called cookie.js that has two functions. One function sets the cookie by assigning the value to the document.cookie object. More than one cookie can be set in this manner, as cookie storage is not destructive: setting another cookie does not overwrite existing cookies, it only adds to the existing cookie storage for the URL. The cookie setting function is shown in the following code block:

// Set cookie name/value
//
function set_cookie(name, value) {
   document.cookie = name + "=" + escape(value);
}

Next, the function to get the cookie accesses the document.cookie object and checks for a cookie with a given name. If found, the value associated with the cookie name is returned, otherwise an empty string is returned. The code for this function is:

// Get cookie value given the cookie name; returns "" if not found
//
function get_cookie(Name) {
  var search = Name + "=";
  var returnvalue = "";
  if (document.cookie.length > 0) {
    var offset = document.cookie.indexOf(search);
    if (offset != -1) { // if cookie exists
      offset += search.length;
      // offset is now the index of the beginning of the value
      var end = document.cookie.indexOf(";", offset);
      // end is the index of the end of the cookie value
      if (end == -1)
         end = document.cookie.length;
      returnvalue = unescape(document.cookie.substring(offset, end));
      }
   }
  return returnvalue;
}

That’s all it takes to set and get cookie values. The next section shows how to create two Web pages with simple, cross-browser DHTML effects. Then each page is modified to include the use of cookies to maintain DHTML state.

Creating the DHTML Web pages
To demonstrate how to add persistence to DHTML pages, I created two Web pages implementing simple DHTML effects. The first page layers content and then hides and shows the layers based on Web-page reader mouse clicks. The second page has a form with two fields, one for setting an element’s top position and one for setting an element’s left position. Changing the value in either field and clicking an associated button changes the position of a specific HTML element.

Adding DHTML persistence for layered content
For the first demonstration page, a style sheet was added to the top of the page that defines positioning for all DIV blocks, formatting for an H1 header nested within a DIV block, and three named style sheet classes. The complete style sheet is shown here:

<STYLE type="text/css">
        DIV { position:absolute; left: 50; top: 100; width: 600 }
        DIV H1 { font-size: 48pt; font-family: Arial }
        .layer1 { color: blue }
        .layer2 { color: red }
        .layer3 { color: green }
</STYLE>

Next, three DIV blocks are used to enclose three layers, each given the same position within the Web page. Each block also contains a header (H1), with each header given a different style-sheet style class. The HTML for these objects is:

<DIV id="layer1">
<H1 class="layer1">LAYER--BLUE</H1>
</DIV>
<DIV id="layer2" style="visibility:hidden">
<H1 class="layer2">LAYER--RED</H1>
</DIV>
<DIV id="layer3" style="visibility:hidden">
<H1 class="layer3">LAYER--GREEN</H1>
</DIV>

That’s it for the page contents. To animate the page, I created a JavaScript block that contains a global variable and two functions. The global variable maintains the number of the layer currently visible. The first function is called cycle_layer; this function determines which layer is to be hidden and which is shown next, and then calls a function that performs the DHTML effect:

<SCRIPT language="javascript1.2">
<!--

// current layer counter
current_layer = 1;

// assign document clicks to function pointer
document.onclick = cycle_layer;

// cycle through layers
function cycle_layer() {
   var next_layer;
   if (current_layer == 3)
        next_layer = 1;
   else
        next_layer = current_layer + 1;
   switch_layers(current_layer, next_layer);
   current_layer = next_layer;
}

The scripting block also assigns the function cycle_layer as the event handler for all mouse clicks that occur within the document page. By doing this, clicking anywhere in the document area that isn’t covered by other content results in a call to the function to change the layer.

The switch_layers function is the DHTML effects function, and it uses the most uncomplicated technique to handle cross-browser differences: it checks for browser type and then runs code specific to the browser. Other techniques can be used to handle cross-browser differences, but these are outside of the scope of this article. All that the function does is hide the current layer and show the next layer in the cycle, as shown here:

// hide old layer, show new
function switch_layers(oldlayer, newlayer) {
   if (navigator.appName == "Microsoft Internet Explorer") {
        document.all.item("layer" + oldlayer).style.visibility = "hidden";
        document.all.item("layer" + newlayer).style.visibility = "inherit";
        }
   else {
        document.layers["layer" + oldlayer].visibility = "hidden";
        document.layers["layer" + newlayer].visibility = "inherit";
        }
}

Try out the first sample page. Clicking on the document background, not the text, changes the layer. Try setting the layer to the green or red layer, accessing another site using the same browser window, and returning to the sample page. When you return, the page is reset to the beginning DHTML effect, showing the blue layer.

To correct the loss of the DHTML effect, we can add persistence to the page by storing which layer is showing when the page unloads. This information is captured in a function called when the onUnLoad event fires.

To add persistence, I added the cookie.js JavaScript source code file to the page, using an external source code reference:

<!-- add in cookie functions -->
<SCRIPT language="javascript" src="cookie.js">
</SCRIPT>

Next, I added the function to capture the DHTML state:

// onunload event handler, capture "state" of page (article)
function capture_state() {
   set_cookie("current_layer",current_layer);
}

When the page reloads, the onLoad event fires, and a function is called from this event to “redraw” the DHTML effect. The function is called start_page, and it pulls in the cookie indicating which layer should be shown:

function start_page() {
// get cookie, if any, and restore DHTML state
  current_layer = get_cookie("current_layer");
  if (current_layer == "")
        current_layer = 1;
  else
        switch_layers(1, current_layer);
}

Finally, the two event handlers, onUnLoad and onLoad, are added to the BODY HTML tag:

<BODY onload="start_page()" onunload="capture_state()">

You can try out the second sample page, which has DHTML persistence — again, change the layer, open some other site and then return to the sample page. This time, the DHTML effect persists during a page reload.

Sometimes an effect requires more than one cookie, as the next example demonstrates.

Adding DHTML persistence for positioned content
In the next example, I used a style sheet again to define CSS1 and CSS-P formatting for the DIV block that is going to be moved in the page:

<STYLE type="text/css">
        DIV { position:absolute; left: 50; top: 100; width: 600 }
        DIV H1 { font-size: 48pt; font-family: Arial }
        .layer1 { color: blue }
</STYLE>

Next, I created a form that has two text elements and associated buttons. One text-and-button pair is used to change the DIV block’s top position, and one pair is used to change its left position. The entire form is shown here:

<DIV id="first" style="position:absolute; left: 10; top: 10; z-index:2">
<form name="form1">
Set new Left Value: <input type="text" name="newx" value="50">
<input type="button" onclick="newleft(newx.value)" value="Set New
Left"><p>
Set new Top Value: <input type="text" name="newy" value="100">
<input type="button" onclick="newtop(newy.value)" value="Set New Top">
</form>
</DIV>

Next, the positioned DIV block is created:

<DIV id="layer1" style="z-index: 1">
<H1 class="layer1">LAYER--BLUE</H1>
</DIV>

Once the page contents have been created, you can add code to animate the DIV block based on whether the top or left position is changed. The function to change the top position is:

// change top value
function newtop(newvalue) {
   currenttop = parseInt(newvalue); // remember the value for capture_state
   if (navigator.appName == "Microsoft Internet Explorer")
        layer1.style.pixelTop = currenttop;
   else
        document.layer1.top = currenttop;
}

Again, the least complex technique to handle cross-browser differences is used, which is to check for the browser being used and run the appropriate code. The function to change the left position is identical to the top position function, except the CSS-P attribute “left” is changed rather than the CSS-P attribute “top”:

// change left value
function newleft(newvalue) {
   currentleft = parseInt(newvalue); // remember the value for capture_state
   if (navigator.appName == "Microsoft Internet Explorer")
        layer1.style.pixelLeft = currentleft;
   else
        document.layer1.left = currentleft;
}

That’s it for this page. You can try it out on the third sample page, where you can change the position of the DIV block by changing the left position, the top position, or both. Again, open another site in the browser and then return to the example page. Notice how the position of the DIV block does not persist between page reloads.

Adding persistence to this page is very similar to adding persistence to the layered example page, except two values are maintained: the top and left values. The start_page and capture_state functions for this DHTML effect are:

function start_page() {
// get cookie, if any, and restore DHTML state
  var tmpleft;
  tmpleft = get_cookie("currentleft");
  if (tmpleft != "") {
        currentleft = parseInt(tmpleft);
        currenttop = parseInt(get_cookie("currenttop"));
        newtop(currenttop);
        newleft(currentleft);
        if (navigator.appName == "Microsoft Internet Explorer") {
           document.forms[0].newx.value = currentleft;
           document.forms[0].newy.value = currenttop;
           }
        else {
           document.first.document.forms[0].newx.value = currentleft;
           document.first.document.forms[0].newy.value = currenttop;
           }
        }
  else {
        // no cookie yet; defaults match the style sheet settings
        currentleft = 50;
        currenttop = 100;
        }
}

function capture_state() {
        set_cookie("currentleft", currentleft);
        set_cookie("currenttop", currenttop);
}

To see how it works, try out the fourth sample page, move the DIV block, open another Web site in the browser and return to the example page. This time the DHTML effect does persist between page reloads.

Summing up
In a nutshell, the steps to add persistence to a page are:

  1. Create the DHTML page
  2. Determine the values that must be maintained to re-create an existing effect
  3. Add the Netscape-style cookie functions to the page
  4. Capture the onUnLoad event and call a function to capture the DHTML effect persistence values
  5. Capture the onLoad event and call a function to restore the DHTML effect from the persisted values

Using Netscape-style cookies to maintain DHTML effects won’t help with some DHTML persistence problems. For example, DHTML effects can be lost when a page is resized. In addition, if a page is resized between the time the DHTML persistence values are stored and the page is reloaded, the values may no longer work well with current page dimensions. However, for most effects and for most uses, the use of Netscape-style cookies is a terrific approach to DHTML persistence.

Using Old Page Layout Tricks with New Technology

Originally published at Netscape Enterprise Developer, now archived at the Wayback Machine

Prior to the release of Netscape Navigator 4.0 and Microsoft Internet Explorer 4.0, Web developers had to use a lot of tricks to control the layout and looks of a Web page. There was no easy way to control the positioning of elements on the page — the two photos that lined up perfectly over the text on your browser when you laid out your page could be completely askew when someone else downloaded them.

To counteract this non-standard display problem, Web authors tended to develop an arsenal of “tricks” that made layout easier. Among the more common tricks were the use of HTML tables to control page layout and a one-pixel transparent GIF to control the placement of content within a line or within a page.

With the newer 4.0 browsers, however, site developers have new, less roundabout techniques to control element placement and page layout. For instance, the Cascading Style Sheet Positioning (CSS-P) specification (now incorporated into the CSS2 working effort) and the original CSS1 specification add layout capabilities to HTML development. (See the page listed in our Resources section below for detailed Cascading Style Sheet information.) In addition, the rise of dynamic HTML technology means that authors not only have control of the static placement of HTML elements on a page, they can also move elements and hide or show them after a page is displayed.

Even with these new techniques, however, I find myself using old standbys — specifically, HTML tables and the one-pixel transparent GIF — but now I’m using them in combination with the new technologies to create interesting and useful effects. You too can combine the best of the old and new for more control over your page layout.

The one-pixel transparent GIF
Prior to the release of Navigator 4.0 and IE 4.0 and the development of the Cascading Style Sheet Positioning spec, we Web page authors were severely limited in how much we could control the placement of Web page elements. One element that provided some placement control was the image element, defined with the <IMG> tag. Using the image element, an author could specify a vspace or hspace (vertical or horizontal spacing) value, as well as a vertical or horizontal alignment (right or left, top or bottom).

In addition to the image element, some older browsers supported the use of the transparent GIF, in which a specific color you identify within the image displays as transparent when the image is loaded into a browser.

David Siegel of Creating Killer Web Sites fame combined the two ideas and came up with an image element that happened to be a small one-pixel transparent GIF. Since it’s tiny and transparent, it can be easily added to a layout without disrupting the underlying content; by using the image element’s spacing values with the one-pixel transparent GIF, you could add white space or position content in a specific location.
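In practice, the trick was as simple as stretching the image’s dimensions, something like the following (the GIF file name here is a stand-in):

<!-- stretch a 1x1 transparent GIF to indent this line by 30 pixels -->
<IMG src="dot_clear.gif" width="30" height="1" alt="">This line is indented.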

Beginning with Navigator 4.0 and followed by Internet Explorer 4.0, however, Web authors began using CSS-P for specific element positioning and CSS1 for adding margins or padding to a page (or even within an element). There was no longer a need for the little one-pixel transparent GIF. But an odd thing happened when I was about ready to delete my trusty little GIF file: I actually found a new use for it.

I wanted to create a page for my Dynamic Earth Web site that showed several different minerals and included a moveable “frame” that positioned itself over each image based on some event. When the frame was over the image, the image was activated — if the user clicked on it, information about the mineral would appear. Of course I knew I could use CSS1 to create a frame around content, then use CSS-P to move the frame, but I wanted the interior of the frame to be transparent.

The solution I came up with was to create the frame using a one-pixel transparent GIF for the interior, setting its width and height to be the exact size of the frame I wanted. Using CSS-P, I could then move this GIF around from image to image. I also created two separate layers for each image: one that had the original image, which didn’t do anything when the reader clicked it, and another that had a framed, transparent GIF, which showed the mineral information layer when clicked. Figure 1 shows a snapshot of this page:

Figure 1
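In markup, the framed layer might be sketched as follows; the element ID, file names, and the show_info function are invented for illustration and are not the actual Dynamic Earth code:

<!-- a transparent GIF stretched to the frame's interior size; the CSS1
     border supplies the visible frame, and CSS-P positions the DIV -->
<DIV id="frame" style="position:absolute; left:50; top:100;
     border: 2px solid red; z-index:2">
<A href="javascript:show_info()"><IMG src="dot_clear.gif"
   width="120" height="120" border="0" alt="mineral information"></A>
</DIV>

Moving the frame from mineral to mineral is then just a matter of changing the DIV’s left and top values from script, using the same cross-browser approach shown in the DHTML article above.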

The one-pixel transparent GIF turned out to be much more than a simple placement and whitespace tool; it’s a handy tool that can be stretched and shrunk as needed and reused as often as necessary.

Positioning a menu for both old browsers and new
I soon found other uses for my newly rediscovered little GIF file, including a solution for a new Web page authoring problem I encountered.

I used to have a menu along the top of my pages that contained four separate images and text-based links just below the images. I used an HTML table to control the placement of images and text, ensuring that the images were placed directly next to each other and were aligned along the top of the page. One thing I never liked about this approach was that I preferred that menu text be layered over the images rather than displayed below them.

With CSS-P, I could position the images at the top of the page and use absolute positioning to place the menu text over the images. The problem I encountered, though, is that the page didn’t look very good when viewed with a pre-4.0 version of Navigator or IE.

Instead, I used an HTML table along the top of the page, including the four images and their associated text in the table cells. However, I included these images and text in blocks delimited by <DIV> tags (these allow formatting to span multiple elements) and then used CSS-P to position the images and text so they overlapped. With this approach, older browsers see the page as they always have, as shown in Figure 2, but in the newer browsers the page has the new overlapped look, as shown in Figure 3.

Figure 2

Figure 3
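One menu cell from this arrangement might look roughly like the following (the image name, link, and offsets are invented); 4.0 browsers overlap the two positioned blocks, while older browsers ignore the STYLE attributes and simply show the text below the image:

<TD>
<DIV style="position:absolute; left: 10; top: 5">
<IMG src="menu1.gif" width="120" height="60">
</DIV>
<DIV style="position:absolute; left: 20; top: 40; z-index: 2">
<A href="page1.htm">Menu One</A>
</DIV>
</TD>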

There was one last problem with this approach, though. If you apply absolute positioning to content within a table element, the positioned contents do not influence the table row’s height or the table cell’s width. This means that any other content on the page gets placed after the table, but overlaps the images and text because the table is not sized correctly.

I found that a simple solution for this was to use my trusty one-pixel transparent GIF as the first table cell and set it to the height I wanted the table to occupy. This sized the entire row to the height I needed to cover the positioned content — once again, a successful blend of the old and new.
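The fix might be sketched like this (the height value is invented); the stretched GIF in the first cell forces the row to be tall enough to cover the absolutely positioned menu content:

<TABLE>
<TR>
<!-- the transparent GIF sizes the row, since absolutely positioned
     content in the other cells doesn't affect the row height -->
<TD><IMG src="dot_clear.gif" width="1" height="90" alt=""></TD>
<TD>
<DIV style="position:absolute; left: 60; top: 5">
<IMG src="menu1.gif" width="120" height="60">
</DIV>
<DIV style="position:absolute; left: 70; top: 40; z-index: 2">
<A href="page1.htm">Menu One</A>
</DIV>
</TD>
</TR>
</TABLE>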

XML Expectations

Originally published at Netscape Enterprise Developer, now archived at Wayback Machine

Extensible Markup Language is a language that defines other languages. It also has the potential to give structure and meaning to the information contained in HTML documents or any other data form, making such information as searchable and structured as the information locked into a database. Such capabilities mean that XML can turn our current view of data upside down: instead of a static, impenetrable lump of information, a file that uses XML suddenly has a logical structure that can be manipulated, queried, and changed without delving down into the data itself. The potential of such a meta-language is huge, if it’s implemented as an open standard. Right now, XML remains such a standard, and if it continues to evolve along open lines, it could drastically improve Web-based development.

XML is similar to SQL in a number of ways; SQL is also an example of a multi-purpose language used to define data structures and query those same data structures without concern as to how the information is displayed or used. The only guarantees are that the information is defined in structures, the structures follow certain rules, and the information contained within the structures can be accessed automatically or manually. Both SQL and XML define structures for information in the form of elements, element attributes, and element content. The main difference is that instead of defining data stored in a physical storage medium usually accessible only by a database engine, XML describes data that is stored in and accessed from within documents.

XML’s parent: SGML
XML is a subset of SGML (Standard Generalized Markup Language), a generalized markup language that was adopted as an ISO standard in the 1980s. Rather than specifying a language’s elements directly, SGML is used to define the rules that constrain the elements of a specific language.

SGML grew out of the need to define a document’s structure and to define rules used to determine whether the document is valid and well formed. The document’s structure is defined through the use of markup tags, which delimit the elements, and Document Type Definition (DTD) files that define each element’s structure and content, providing a sort of grammar for the document.

For example, using SGML, a customer element within a document could have the following structure:

<CUSTOMER name="Shelley Powers" id="CUST011A1">
<PO id="PO23349008">
<POITEM id="POI1">
<ITEM id="14453">
Item ID: 14453
Item Desc: some description
</ITEM>
</POITEM>
</PO>
</CUSTOMER>

To validate the markups used to define the structure of the document, an associated DTD would have to be created, with statements similar to the following:

<!ELEMENT customer - - (PO)+>
<!ATTLIST customer
    name CDATA
    id CDATA>

This extremely simplified and abbreviated DTD uses an Extended Backus-Naur Form (EBNF) syntactic notation to create the grammar.

Using a standardized meta-language to define entities within a document allows SGML parsers to pull out the individual entities (such as the customer entity just described) and any associated attributes and content. An application can then use that information for a number of purposes, including the following possibilities:

  • To define information in a database-neutral format for transport between unlike databases.
  • To provide a search engine that allows a person to query on the entity type as well as the data.
  • For report generation, or even an online hypertext order processing form that allows the reader to drill down within the document to find the desired information.
  • To define a standard language for a specific industry or science, such as the petroleum industry or chemistry, which includes special notational conventions.

The concept of SGML is very attractive: Define a language that in turn defines a document structure used for a specific group of documents and which can be extended without impacting the underlying language generation mechanism. Unfortunately, the downside to SGML is that it is far from trivial to define the DTD for a language. SGML is a complex standard that is difficult to implement.

HTML, a derivative of SGML
SGML did, however, provide the roots of the first Web document specification, HTML. HTML was derived from SGML, except that it predefined a group of elements that controlled the delivery of a Web page’s content. In addition, the original HTML elements were expanded to include suggested presentation elements that controlled the appearance of the Web page. SGML does not control the presentation of elements, only the element structure and semantics.

The following code defines an HTML unnumbered list element, which is defined by the DTD associated with the HTML 4.0 specification as having a start and end tag and containing at least one list item:

<!ELEMENT UL - - (LI)+>

According to the EBNF associated with SGML, this DTD states that UL is an element, the double dashes assert that the element requires a start and end tag, and the element consists of at least one, and possibly more than one, list item (LI). When a user agent such as a browser parses an HTML element, it knows to look for both beginning and ending UL tags and at least one LI element contained within those tags.

Associated with the DTD for HTML 4.0 is an implied visual presentation of an unnumbered list, which is that each list item has a specified list graphic, each list item is on a separate line, and each list item lines up beneath the previous list item. However, not all user agents (such as browsers) are visual, so presentation can only be a suggestion, not a mandate.

After the first releases of the HTML specification, new elements crept into the language to provide control over page presentation. One such element is the FONT element, which controls the size, color, type, and font family of any text the element contains. The problem with using a specific tag like FONT, however, is that non-standardized tags can lead to different Web page presentations depending on the user agents.

To help differentiate an element’s structure and its presentation in HTML, the W3C issued a recommendation for CSS1, or Cascading Style Sheet Level 1, a specification that provides presentation information for HTML elements.

The real advantage of HTML was that it was relatively easy to code and display, even in different browsers. The ease of HTML was directly responsible for the massive growth of the Web. If Web document access had begun with XML, chances are you wouldn’t be reading this article right now and Web access would probably be limited to the scientific community. We initially needed a simple mechanism to create Web documents, and HTML was it. The very lack of flexibility was the language’s strength.

Now that the Web and Internet-based technologies have matured to a certain extent, enterprise developers are increasingly demanding a way to build flexibility into documents like Web pages in order to increase their effectiveness and ease of access.

Enter XML
XML arose from a need to create more generalized markup languages without having to follow the large and complex SGML standard. The XML standard still demands that a markup language be defined as well-formed, but it makes the validation step optional, which means that an associated DTD is not required (though one can be included). Additionally, XML uses only a subset of the rules for SGML, letting developers understand the principles and implementation of the technology more quickly.

Like SGML, XML is a meta-language that provides rules to define a set of tags that can be used within a document. These tags are then used to delimit an XML entity, its attributes, and its contents, and to define the elements’ syntax. These tags are read by an XML processor, which in turn provides an application with access to the entities. The application can then perform one or more actions on the XML entities.

XML processors can either be validating, which means that they make use of an associated DTD in order to ensure valid structures, or non-validating. Regardless of whether or not they are validated, XML documents can be considered to be well-formed as long as they match the XML syntax overall and as long as each entity within the document meets the syntax for a well-formed XML entity.

The main requirements for a well-formed XML document include the following:

  • The document may begin with a valid XML declarative statement, or prolog.
  • There is one element that acts as the root element and which acts as parent to all other elements.
  • Elements are either not empty or, if they are empty, they have a “hint” encoded within the element that defines this information to the XML parser.
  • Non-empty elements must have start and ending tags.
  • All elements except the root element are contained within some element, referred to as the element’s parent; all contained elements are referred to as the parent element’s children.
  • Elements can contain character data, other elements, CData sections, processing instructions or comments.
  • Each parsed element within the document is well-formed.
  • Character data that may be processed as XML is enclosed within CData sections.
  • Documents can include comments, white space, and processing instructions.

Consider that a valid and well-formed XML document consists of the following EBNF format non-terminating symbols (non-terminating meaning that the symbols are themselves expanded elsewhere):

document::= prolog element Misc*

A complying document could be as simple as:

<?XML VERSION="1.0" ENCODING="UTF-8"?>
<ARTICLE name="XML" author="Shelley Powers"/>

This document consists of the prolog section, which includes the XML declaration (“<?XML”) with the version number of the XML definition as well as the encoding declaration. It also contains one element, ARTICLE, which has two attributes, name and author. Since the element is an empty element, it ends with a forward slash to signal to the processor that the element contains no other content. This is necessary for a non-DTD (non-validating) document; otherwise the XML processor would not know whether to look ahead in the parsing for required element content. This is one of the key features of XML: forward processing information is embedded directly within the document, negating the necessity of creating an associated DTD.

The example just provided is a well-formed document, but not a valid one, since no DTD is provided for validation. The example also demonstrates the simplicity of XML. An even simpler version of the document would be:

<ARTICLE name="XML" author="Shelley Powers"/>

To make the document a valid one, I could have added a DTD for the ARTICLE element directly into the document, or linked to a DTD external file:

<?XML VERSION="1.0" ENCODING="UTF-8"?>
<!DOCTYPE article SYSTEM "article.dtd">

<ARTICLE name="XML" author="Shelley Powers"/>

XML in action
Though the standard is relatively new, there are already several XML parsers that validate whether an XML document and its associated DTD fit the rules for a valid XML document. In addition, these same parsers can return the elements within a document exposed in their tree-like form, a form that can be used by applications.

XML is being used in the real world already. For example, Microsoft has defined an XML application it terms “Channel Definition Format,” or CDF. CDF files contain entities that describe the contents of an active channel. Following the accepted technique for XML, CDF files do not contain a reference to a DTD file; instead they use clues embedded within the tags and tag definitions to provide forward-looking information for the XML parser.

CDF’s purpose is to provide a document that defines the use of push technology at a specific Web site, including which pages are to be displayed as channels, what icons to display, what the update schedules are, etc. With this information, the XML processor provides the key elements that a channels-based application can use to control channel access on the Web site.

The following code shows the CDF file I have defined for use at my personal Web site. The root element for the file is the CHANNEL element. It is the parent element for several other elements, such as an ICON element, an ITEM element, and an ABSTRACT element. Each of the elements within the document may or may not have attributes, and a child element may in turn be the parent for another element:

<?XML VERSION="1.0" ENCODING="UTF-8"?>
<CHANNEL HREF="http://www.yasd.com/plus/index.htm" 
        BASE="http://www.yasd.com/plus/">
    <TITLE>YASD+</TITLE>
    <ABSTRACT>YASD+ pages, using the newest technologies</ABSTRACT>
    <LOGO HREF="http://www.yasd.com/mm/wide_logo.gif" STYLE="IMAGE-WIDE"/>
    <LOGO HREF="http://www.yasd.com/mm/logo.gif" STYLE="IMAGE"/>
    <LOGO HREF="http://www.yasd.com/mm/icon.gif" STYLE="ICON"/>
    <SCHEDULE>
        <INTERVALTIME DAY="1"/>
        <EARLIESTTIME HOUR="0"/>
        <LATESTTIME HOUR="12"/>
    </SCHEDULE>
    <ITEM HREF="http://www.yasd.com/samples/bytes/daily.htm">
        <LOGO HREF="http://www.yasd.com/mm/icon.gif" STYLE="ICON"/>
        <ABSTRACT>YASD Code Byte</ABSTRACT>
    </ITEM>
    <ITEM HREF="http://www.yasd.com/samples/bytes/cheap.htm">
        <LOGO HREF="http://www.yasd.com/mm/icon.gif" STYLE="ICON"/>
        <ABSTRACT>Cheap Page Tricks</ABSTRACT>
    </ITEM>
</CHANNEL>

Notice that the first line contains the XML declaration element, a version number, and an encoding declaration. The main entity within the document is the CHANNEL entity, enclosing other elements such as TITLE, ITEM, ABSTRACT, and LOGO. Each of these elements falls within the allowable XML definition for elements:

element ::= EmptyElemTag | STag content ETag 
EmptyElemTag ::= '<' Name (S Attribute)* S? '/>'
STag ::= '<' Name (S Attribute)* S? '>'
ETag::= '</' Name S? '>'
content ::= (element | CharData | Reference | CDSect | PI | Comment )*

Without continuing to resolve the non-terminal references, what the syntax just shown states is that each element is either an empty element, in which case it ends with a slash/angle-bracket combination (‘/>’), or it has start and end tags which enclose content. A “well-formed” constraint is placed on the start and end tags in that the Name used in both must be the same. The enclosed content can include other elements, comments, processing instructions, or other well-formed XML entities. Both empty and non-empty elements can have zero or more attributes, as the following demonstrates:

<CHANNEL HREF="http://www.yasd.com/plus/index.htm" 
        BASE="http://www.yasd.com/plus/">
...
</CHANNEL>

or

<INTERVALTIME DAY="1"/>

Internet Explorer 4.0 has an associated XML parser that pulls the element information out of the document. IE 4.0 uses this parsed element information to create the channel for the Web site, including the two sub-channel items, as shown below:

This site has a main Web page channel, denoted by the top-level graphic, and two sub-channels, with the second sub-channel loaded into the browser.

Accessing the CDF file directly with IE 4.0 opens a dialog box asking the individual how they would like to subscribe to the site’s channel, and allowing the reader to determine how and when the channel contents are downloaded to their client machine.

Other uses of XML
In addition to CDF, Microsoft and Marimba, Inc. have also proposed an XML-based technology called the Open Software Description (OSD) format, which can be used to control software downloads and installations over a corporate network. A major IS expense for larger corporations, especially those that are geographically distributed, is installing and maintaining software upgrades on employees’ desktops. One small upgrade to a popular piece of software can take days of planning and weeks of actual implementation (i.e., walking around to each person’s desk and installing the upgrade). During the upgrade rollout, employees will have different versions of the same software, which can create problems. With OSD, software upgrades can be handled automatically using push technology, reducing both IS staff hours and logistical problems.
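
As a rough sketch of what an OSD document might look like (the element names follow the OSD submission as best I recall, and the package name and URL are invented; treat the details as illustrative):

<?XML VERSION="1.0" ENCODING="UTF-8"?>
<SOFTPKG NAME="com.example.editor" VERSION="2,0,0,0">
    <TITLE>Example Editor 2.0</TITLE>
    <ABSTRACT>Maintenance upgrade, pushed to every desktop</ABSTRACT>
    <IMPLEMENTATION>
        <OS VALUE="WinNT"/>
        <CODEBASE HREF="http://apps.example.com/editor20.cab"/>
    </IMPLEMENTATION>
</SOFTPKG>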

SGML and XML have both been used to create a Chemical Markup Language (CML) for the chemistry community. With the CML vocabulary, molecular structures can be defined within a document and the information can be either posted or transmitted. XML processors can pull out the CML elements and pass these to applications that perform actions like preparing a print-out of the information, either textually or graphically, or creating an online three-dimensional model of the information using VRML or some other 3D technology.

Netscape, Apple, and others have proposed a Meta Content Framework (MCF) created in XML that can expose a Web site structure for navigation or online exploration. MCF can be used to do such things as generating a three-dimensional site map which can be used for Web site publication and administration. The technology is currently used by Apple’s ProjectX/HotSauce browser, and “Xspace”-compatible content can also be viewed using a plug-in available from Apple.

XML can also be used to define a relational database meta-language, which can then be used to describe documents containing relational database information. These same documents can be easily generated from the relational database dictionaries, which are repositories of information about the information stored in the database. The extensible markup language can then be used to create context-centered documents like “all information pertaining to any purchases, week of January 16 through January 23,” rather than using the context-neutral database format. In addition, supporting information that is not part of the data in the database, like images or reference material, can be pulled into the document.
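
As a rough sketch, such a generated, context-centered document might look like the following (all element and attribute names are invented for illustration):

<?XML VERSION="1.0" ENCODING="UTF-8"?>
<PURCHASES WEEK_BEGIN="January 16" WEEK_END="January 23">
    <PURCHASE_ORDER ORDER_ID="32245">
        <CUSTOMER CUSTOMER_ID="C1001">
            <CUSTOMER_NAME>...</CUSTOMER_NAME>
            <CUSTOMER_ADDRESS>...</CUSTOMER_ADDRESS>
        </CUSTOMER>
        <ITEM SKU="...">...</ITEM>
    </PURCHASE_ORDER>
</PURCHASES>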

An XML processor can process this context-based data document and use the information therein to present reports, perform online research and queries, or even to create interactive three-dimensional models of the data. Instead of issuing a SQL statement such as:

select customer_name, customer_address, city, state, zip_code
from customer, purchase_order
where purchase_order.order_id = 32245
and customer.customer_id = purchase_order.customer_id;

I could enter a three-dimensional VRML world at a purchase_order portal and scan a virtual filing cabinet for my purchase order number. Once I find it, I can open the door into another room with doors labeled “Purchase Order items” and “Customer” and open the Customer door into another room containing the information I am looking for. Best of all, the documents containing the context-based data could be generated automatically, processed automatically, and presented automatically. This means a change in the database table could be handled automatically.

Besides three-dimensional database applications, defining data in an XML document could be used as a method to convert database data in one format, such as relational data, into another format, such as object-based database records. The resources section at the end of this article has a reference to a preliminary XML representation of a relational database.

In addition, with XML processors (or XML parsers, if you prefer), the most difficult aspect of XML has already been implemented: pulling the entities out of the document.

Returning to the CDF example, not only can the XML document be used by Internet Explorer 4.0 to provide information about the structure of a Web site’s channels, I can also access the XML entities using JavaScript, C++, or Java and use the information for other purposes. For example, the following JavaScript functions open a CDF file, pull out information about the elements contained within the CDF file, and print out this information in a newly opened window.

<script language="jscript">
<!--
var doc = new ActiveXObject("msxml");
var wndw = null;

// display elements in CDF file
// file reference must be fully resolved Internet reference
function DisplayElements(cdffile)
{
// Display this with an appropriate message in a popup window
wndw = window.open("","CDFFile",
"resizable,scrollbars=yes");
wndw.document.open();
// open the body that the closing tag below matches
wndw.document.write("<body>");
doc.URL = cdffile;

// begin displaying elements at root
displayElement(doc.root);

wndw.document.write("</body>");
wndw.document.close();

}

// display element tagname, if any
// and information about element such as any attributes (even 
// if undefined for element) and text and element type
function displayElement(elem) {
if (elem == null) return;
wndw.document.writeln("<p>");
if (elem.type == 0)
    wndw.document.writeln("Document contains element with tagname: " +
                          elem.tagName);
else
    wndw.document.writeln("Document contains element with no tagname");
wndw.document.writeln("<br>Element is of type: " + 
                                GetType(elem.type) +"<br>");
wndw.document.writeln("Element text: " 
                                + elem.text + "<br>");
wndw.document.writeln("Element href: " 
                                + elem.getAttribute("href") + "<br>");
wndw.document.writeln("Element base: " 
                                + elem.getAttribute("base") + "<br>");
wndw.document.writeln("Element style: " 
                                + elem.getAttribute("style") + "<br>");
wndw.document.writeln("Element day: " 
                                + elem.getAttribute("day") + "<br>");
wndw.document.writeln("Element hour: " 
                                + elem.getAttribute("hour") + "<br>");
wndw.document.writeln("Element minute: " 
                                + elem.getAttribute("min") + "<br>");

// check to see if element has children
var elem_children = elem.children;
if (elem_children != null)
   for (var i = 0; i < elem_children.length; i++) {
      var element_child = elem_children.item(i);
      displayElement(element_child);
   }

}

// element type
function GetType(type) { 
if (type == 0) 
        return "ELEMENT"; 
if (type == 1) 
        return "TEXT"; 
if (type == 2) 
        return "COMMENT"; 
if (type == 3) 
        return "DOCUMENT"; 
if (type == 4) 
        return "DTD"; 
else 
        return "OTHER";
}

//-->
</script>

See the Resources section for a pointer to an XML demonstration.

Creating an XML document
A key to the true usefulness of XML is that once an XML parser has been created to process an XML document, you can use it to parse out entity information from any document containing any well-formed XML content.

In the last section, I used Internet Explorer’s ability to parse XML entities, attributes, and content to create a Web page that listed the entities, their attributes, and some content. An interesting example, but not really useful. But what if I were to define my own XML document, including my own XML entities and attributes, and then use IE’s built-in XML parser to create my own graphic menu Web page application? This is fairly simple and only took a couple of hours of playing around to accomplish.

First, I defined my own CDF file and created my own entities, as shown here:

<?XML VERSION="1.0" ENCODING="UTF-8"?>

<DOCUMENT>
    <TITLE>YASD+</TITLE>
    <STYLESHEET HREF="http://www.yasd.com/css/daily.css" />
    <ITEM HREF="http://www.yasd.com/plus/plus.htm">
        <IMAGE HREF="http://www.yasd.com/plus/logo.jpg">
        <ALT>YASD+ Main Page</ALT>
        </IMAGE>
    </ITEM>
    <ITEM HREF="http://www.yasd.com/samples/bytes/daily.htm">
        <IMAGE HREF="http://www.yasd.com/plus/logo.jpg">
        <ALT>YASD Code Byte</ALT>
        </IMAGE>
    </ITEM>
    <ITEM HREF="http://www.yasd.com/samples/bytes/cheap.htm">
        <IMAGE HREF="http://www.yasd.com/plus/logo.jpg">
        <ALT>YASD Cheap Page Tricks</ALT>
        </IMAGE>
    </ITEM>
</DOCUMENT>

I redefined what ITEM is, created a new root element called “DOCUMENT,” and added new IMAGE, STYLESHEET, and ALT elements. I followed the XML convention for well-formed entities — opening this document for parsing within IE 4.0 generates no errors.

I then created an application, consisting of two frames, that uses the images associated with the items to create a graphical menu bar in the top frame of the window, and sets the link associated with each image to open in the bottom frame. The window originally opens with a form to access the CDF file and process its contents; this form is then overwritten with the processing results. The code for the form and for processing its contents is as follows:

 
<script language="jscript">
<!--
var doc = new ActiveXObject("msxml");
var wndw = null;

var title = "";
var stylesheet = "";
var items = new Array();
var itemimages = new Array();
var itemalts = new Array();
var ct = -1;

function createWindow(cdffile)
{
doc.URL = cdffile;

// find main document and any associated item documents
findElements(doc.root);

// if associated documents
if (ct >= 0) {
  var strng = "<HTML><HEAD><TITLE>" + title + 
        "</TITLE><LINK REL=STYLESHEET TYPE='text/css'" +  
        " HREF='" + stylesheet + "'></HEAD><BODY>";
  for (var i = 0; i <= ct; i++) 
     strng+="<a href='" + items[i] + 
                "' target='Body'><IMG src='" + itemimages[i] + "' ALT='" + 
                itemalts[i] + "' border=0>" + 
                "</a>"; 
  strng+="</BODY></HTML>";
  document.open();
  document.writeln(strng);
  document.close();
  }
}

// walk the element tree, recording the title, stylesheet,
// item links, images, and alternative text
function findElements(elem) {
if (elem == null) return;
if (elem.type == 0) {
    if (elem.tagName == "TITLE")
        title = elem.text;
    if (elem.tagName == "STYLESHEET")
        stylesheet = elem.getAttribute("href");
    if (elem.tagName == "ITEM") {
        ct++;
          items[ct] = elem.getAttribute("href");
        }
    if (elem.tagName == "ALT") 
        itemalts[ct] = elem.text;
    if (elem.tagName == "IMAGE")
        itemimages[ct] = elem.getAttribute("href");
    }
        
// check to see if element has children
var elem_children = elem.children;
if (elem_children != null)
   for (var i = 0; i < elem_children.length; i++) {
      var element_child = elem_children.item(i);
      findElements(element_child);
   }
}
//-->
</script>

I could have defined any elements within the XML document as long as I used well-formed XML entities, and I could process the results in virtually any way I desired just by using simple scripting techniques.

Linking and style information
In addition to the XML specification, other efforts are currently underway to add supporting specifications. The first is XML part 2, which includes linking. Another is XSL, the Extensible Style Language, which defines an XML stylesheet.

Linking has been extended considerably with XML. You can specify an attribute that determines how a resource is displayed, specify whether the resource is displayed automatically, and even specify multiple layers of linkage. Of particular interest is the capability to define a group of links, associating documents together in such a way that the person following the links does not have to hunt around for related documents. If you have ever jumped to a Web site page by following a link from another site, you know how frustrating it can be to try to establish the context of the link in order to find related documents.
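
As a rough sketch of a simple link (the FOOTNOTE element and URL are my own inventions, and the attribute names follow the draft linking proposal as best I recall; treat them as illustrative):

<FOOTNOTE xml:link="simple"
          href="http://www.yasd.com/notes/note1.xml"
          show="embed"
          actuate="auto">Background on this point</FOOTNOTE>

Here show suggests how the linked resource is displayed (embedded in place), and actuate suggests that the resource is displayed automatically rather than on request.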

XSL would be specified using XML and would provide a way to define presentation elements, such as those used currently in HTML. For example, HTML includes the Emphasis element, delimited with <EM> </EM> tags, the Strong element, delimited with <STRONG> </STRONG>, and others. With XSL, you could create styles to provide recommendations for how an XML entity is rendered.
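
As a rough sketch in the construction-rule style of the XSL proposal (the details here are from memory and should be treated as illustrative), a rule rendering my ARTICLE element in bold might look like:

<xsl>
    <rule>
        <target-element type="ARTICLE"/>
        <DIV font-weight="bold">
            <children/>
        </DIV>
    </rule>
</xsl>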

The downside to XML
While XML’s implementation-neutral technique allows parsed information to be used for multiple purposes in multiple applications, it is this same flexibility that may cause problems.

Returning one last time to my CDF example, I created a simple JavaScript application that opens the main channel page and all the associated pages into a frames-based Web page. The main page opens into the top-most frame, and each individual CDF ITEM element opens into one of the smaller frames located along the bottom of the document.

This isn’t a problem for my own CDF file, which is relatively simple. Applying the same application to another CDFfile, however — one I neither created nor control — creates a Web page that probably does not meet the expectations of the page’s designer. The following screen shot shows the result of using the frames-generation application on the IDG.net channel:

To create this page, I used a publicly accessible file, IDG.net’s CDF file, and exposed the XML elements to create a presentation neither Microsoft nor IDG.net intended. Even with the new effort on XSL, currently only a W3C proposal, there is no guarantee that the information exposed with XML will be used for anything approaching the intended purpose of the XML document’s original creator.

Another potential problem area for XML is illustrated by the CDF specification. CDF’s potential is great; you could use it to build an XML-based document that could be used by different push technology vendors with relatively comparable results. But what happens if a vendor supports channels but doesn’t want to use CDF? Do we end up with different “flavors” of channels? Does the W3C then create a different standards specification for channels, another for chemistry, another for math, another for finance, and so on, in order to ensure that only one specification for each “topic” or “business” is created? Or can we design tools for translating between each of the XML document definitions?

In conclusion
Even with these issues at stake, XML is a terrific addition to Web and other application development. One of the most difficult aspects in application programming is extracting the structure as well as the contents of documents. XML has made this process a whole lot easier.

During the recent XML/SGML conference in Washington DC (December 10-12), XML became a proposed recommendation of the W3C, the last remaining step before becoming a full recommendation. It may be only a matter of time before XML is just as common as SQL is today.

Netscape Navigator’s JavaScript 1.1 vs Microsoft Internet Explorer’s JScript

Originally published at Digital Cats, now archived at the Wayback Machine

Prior to Netscape implementing JavaScript in Netscape Navigator, web developers had few tools to create interactive content for their web pages. Now this scripting language gives developers the ability to do things such as check form contents, communicate with the user based on their actions, and modify the web page dynamically without the web page being re-loaded and without the use of Java, plug-ins or ActiveX controls.

Unfortunately, JavaScript was not usable by any other browser until Microsoft released Internet Explorer (IE) 3.0. With this release web developers could deliver interactive content that would at least work with the two most widely used browsers. Or could they?

On the surface, the JavaScript supported by both companies is identical. They both provide the same conditional control statements, have defined objects such as window or document, and can be used directly in HTML documents. They both support events based on user actions and support functions in a similar manner. However, this article will demonstrate that though the languages may look the same on the surface, there are differences that can trip the unwary developer.

JavaScript Objects

To understand the differences between the two implementations of JavaScript you must examine the objects both support. As an example, with Navigator 3.0 Netscape provides a new Image object and an array (named images) that contains the images in the currently loaded document. This array allows the developer to change the images of a document without re-loading the document and without using Java or some other technique. With this capability the developer can write the following JavaScript code section without error:


sSelected = "http://www.some.com/some.gif"
document.images[iIndex].src = sSelected

This code will change the src property of the image contained in the array at the index given in the variable iIndex. If this variable contained the value 2, the third image loaded in the document (the array indices begin at 0) would be changed to the image located at the given URL.

Running the same example with Internet Explorer will result in an alert message stating that “‘images’ is not an object”.

Other examples of objects that are defined for Netscape Navigator but not for Microsoft Internet Explorer are:

  • The Area object, an array of the links for an image map, which allows the developer to capture certain events for the image map and use them to provide additional information to the user. With this, the developer can capture a mouseOver event to write out information about the link in the status bar.
  • The FileUpload object, which provides a text-like control and a button marked “Browse” that allows a reader to enter a file name. The JavaScript can then access the name of the file.
  • The Function object, which allows the developer to define and assign a function to a variable, which can then be assigned to an event.
  • The mimeTypes array of supported MIME types.
  • The option object, an array of the options implemented for a SELECT, which allows the developer to change the text of an option at runtime (a minimal sketch follows this list).
  • The applet object, an array of the applets in the document (read-only).
  • The plugin object, an array called embeds that contains the plug-ins included in the document (read-only).
  • The plugins array of the plug-ins currently installed in the client browser.
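
As a minimal sketch of the option capability noted above (the form name "form1" and the SELECT name "choices" are hypothetical):

<SCRIPT LANGUAGE="javascript1.1">
<!-- hide script from old browsers
// Navigator-only: change the text of the first option in a
// SELECT named "choices" within a form named "form1"
function renameFirstOption() {
        document.form1.choices.options[0].text = "A new choice"
}
// end hiding from old browsers -->
</SCRIPT>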

At this time there are no JavaScript or JScript objects defined for Internet Explorer that are not also defined for Netscape. However, as JScript is Microsoft’s implementation of the scripting language for IE, and Microsoft has defined its own IE scripting model, this could change in the future.

JavaScript Object Behavior and Ownership

Internet Explorer may not have additional objects, but it has defined a different hierarchy and ownership for some of the objects that are used by both it and Navigator. All objects are contained within the window object in the IE scripting model, but not all objects are owned by window in the Navigator model, which is documented in Netscape’s JavaScript Authoring Guide. The navigator object, which stores information about the browser currently being used, is an example of an object owned by window in IE but not in Navigator.

This does not usually present an incompatibility problem between the two browsers, as the developer will rarely preface the object with the term “window”, as the following code demonstrates:


<FORM NAME="form1">
<input type=button value='press' onClick="alert(navigator.appName)">
</FORM>

The above code will work with both browsers.

The differences in ownership can become a problem when an object is owned at different levels in the two models. An example is the history object, which is owned by the window object in IE, but by the document object in Navigator. When used in the current window and document, the object will work the same in both browsers, as the following code demonstrates:


<FORM NAME="form1">
<input type=button value='press' onClick="history.back()">
</FORM>

The reason the same code can work for both is that window is assumed in both IE and Navigator, and document is assumed in Navigator, at least in this instance, because history is also an object in its own right. When a document is opened as part of a frame, however, the differences become noticeable. The following code will work when the document is opened as a frame in IE, but will not work in Navigator:


<body>
<script>
function clicked() {
history.back()
}
</script>
<FORM NAME="form1">
<input type=button value='press' onClick="clicked()">
</FORM>
</body>

Clicking on the button in the code above will work in IE: the previous document in the history list will be opened. The same code will not work in Navigator, where clicking the button results in neither a change of document nor an error. Prefacing the history object with parent will enable this code to work with both browsers.
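
A minimal sketch of that fix:

<body>
<script>
function clicked() {
// parent resolves the history object at the same level in both browsers
parent.history.back()
}
</script>
<FORM NAME="form1">
<input type=button value='press' onClick="clicked()">
</FORM>
</body>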

JavaScript Properties

Even when IE and Navigator share a common object and common object ownership, they can differ on the properties for that object. An example is the document object. The properties for both implementations are the same except for a URL property in the Navigator implementation and a location property in the IE implementation. However, if you examine both properties, they are identical! Both contain the URL of the document. Both are read-only. The following code will work with Navigator, but results in an empty alert message box in IE:


<FORM NAME="form1">
<input type=button value='press' onClick="alert(document.URL)">
</FORM>

According to Microsoft documentation, the equivalent for IE would be to use document.location.href. However, though this does not result in an error, it also results in an empty alert box. The following code achieves the desired results and, happily, works in both browsers:


<FORM NAME="form1">
<input type=button value='press' onClick="alert(location.href)">
</FORM>

The above example demonstrates another area of caution when using JavaScript: the language is unstable in both environments and is changing continuously. Don’t assume something will work because the documentation states it will, and don’t assume it will work the same on all operating systems. Both browsers can behave differently on different operating systems, sometimes because of operating system differences and sometimes because of bugs that were missed during testing on that specific OS.

JavaScript Methods and Events

A web developer can find ways of working around differences in objects and properties, but working around differences in methods may not be so easy. When we develop, we expect a certain behavior to result when we call a specific function and pass it certain parameters. With Microsoft Internet Explorer and Netscape Navigator, the best-case scenario is that we can use a pre-defined object method with both browsers and get the same result. The worst-case scenario is that the method works with both browsers, but the results differ.

An example of the best-case scenario is using JavaScript to validate form field contents that have changed, or using those contents to calculate a value used elsewhere. Calling a JavaScript function from the onChange event to process the changed contents, as demonstrated below, will work with both browsers:


<!-- Form fields -->
<FORM NAME="Item">
<p>Item Quantity: <INPUT TYPE="text" Name="qty">
Item Cost: <INPUT TYPE="text" Name="cost"
                onChange="NewCost()">
Total Cost: <INPUT TYPE="text" Name="total" Value=0>
…
<!-- Function -->
<SCRIPT LANGUAGE="javascript">
<!-- hide script from old browsers

// NewCost will
// calculate cost of qty and item
function NewCost() {
        var iCost = parseFloat(document.Item.cost.value)
        var iQty = parseInt(document.Item.qty.value)

        var iTotal = iCost * iQty
        document.Item.total.value = iTotal
        }

// end hiding from old browsers -->
</SCRIPT>

The above will work as expected with both IE and Navigator. When the user enters a quantity and a cost, the onChange event will fire for the cost field and a JavaScript function called NewCost() will be called. This function will call two built-in JavaScript functions, parseFloat() and parseInt(), to access and convert the form field values. These will then be used to compute a total which is placed in the total field.

So far so good. Another JavaScript function in the web page will be processed when the user presses the submit button. This pre-defined button style will normally submit the form. The developer can capture the submission and perform validation on the fields. Coding this for Navigator would look like the following:


<FORM NAME="Item"
        ACTION="some.cgi" onSubmit="return SendOrder()">
…
<!-- Function -->
// submit order
function SendOrder() {

        // validate data
        if (document.Item.Name.value == "") {
                alert("You must enter your name")
                return false
                }
        return true
        }

Capturing the onSubmit event will allow the developer to call a function to process the form fields. If they choose, they can perform validation in this function. If the validation fails, say the user did not provide a name, the function notifies the reader and returns false, preventing the form from being submitted. If the user did provide a name, the function would return true, and the form would be submitted.

Following the documentation that can be found at the Microsoft site, the developer would expect something like this to work for IE as well as Navigator. It does, to a point.

With IE the onSubmit event is captured and the SendOrder() function is called. If the user did not enter a name value, an alert would occur. The behavior is the same for both browsers at this point. However, if the user does provide a value, Navigator would then submit the form and a follow-up form would be displayed. This does NOT occur with IE IF you are testing the page locally, probably due to a bug missed during the testing. It does work if you run the web page documents through a web server.

However, without knowing that the difference between the two results was a matter of document location rather than document coding, the web page developer could have spent considerable time trying to get the same behavior from both browsers.

Aside from the differences already noted, the browsers may process code in a functionally identical manner and yet perform quite differently. This can be demonstrated with another popular use of JavaScript: opening a secondary window for some purpose and maintaining communication between the two windows. This is widely used by Netscape for their tutorials.

Both browsers support a property for the window object called opener. This property can be used to contain a reference to the window that opened the secondary window.

The following code demonstrates using JavaScript to open a secondary window and to set the opener property to the original window:


<HTML>
<HEAD><TITLE> Original Window </TITLE>
<SCRIPT LANGUAGE="JavaScript">
<!-- hide script from old browsers
// OpenSecond will open a secondary window
//
function OpenSecond() {
        // open window
        newWindow=window.open("second.htm","",
                "toolbar=no,directories=no,width=500,height=300")
        // modify the new windows opener value to
        // point to this window
        if (newWindow != null && newWindow.opener == null)
                newWindow.opener=window

        }
// end hiding from old browsers -->
</SCRIPT>
</HEAD>
<BODY>
<H1> Open a new Window and get information</H1>

<FORM NAME="CallWindow">
<p>Click to open Window:
<p>
<INPUT TYPE="button" NAME="OpenWin" VALUE="Click Here"
        onClick="OpenSecond()">
<p>
</FORM>
</BODY>
</HTML>

The above HTML and JavaScript creates a simple document with one button. Pressing the button opens a new window (creates a new instance of the browser) that does not have a toolbar or directories and is set to a certain width and height. The HTML source document that is opened into this new window is “second.htm”.

The HTML and JavaScript in the secondary window is shown below:


<HTML>
<HEAD><TITLE> Information</TITLE>

<SCRIPT LANGUAGE="JavaScript">
<!-- hide script from old browsers
// SetInformation
// function will get input values
// and set to calling window
function SetInformation() {
        opener.document.open()
        opener.document.write("<BODY>")
        opener.document.write("<H1> Return from Secondary</h1>")
        opener.document.write("</BODY>")
        opener.document.close()
        opener.document.bgColor="#ff0000"
        window.close()
        }
// end hiding from old browsers -->
</SCRIPT>
</HEAD>
<BODY>
<p><form>
Click on the button:<p>
<INPUT type="button" Value="Click on Me"
     onClick="SetInformation()">
</FORM>
</BODY>
</HTML>

This code will display a button labeled “Click on Me”. Pressing this button will result in the document page for the original window being modified to display the words “Return from Secondary”. The JavaScript will also change the background color of the original document and then will close the secondary window. When run in Navigator, the sample behaves as expected. Not so, however, with IE.

First, when you open the secondary window in IE you will notice that the document in the original window seems to blank out for a moment. In addition, the secondary window opens at the top left-hand corner of the desktop with IE, but opens directly over the original window with Navigator, no matter where that original window is.

When you press the button that modifies the original document, IE does modify the document with the new header, as Navigator does, and the background on the original document will change briefly. However, when the secondary window is closed the background color of the original document returns to the original color!

What is worse is that running this sample a few times with IE will cause the browser to crash! How soon it crashes suggests that resources are not being freed, or are being freed incorrectly. Whatever the cause, this behavior should make a developer cautious about opening a secondary window within IE.

Solutions to working with both Internet Explorer and Navigator

JavaScript is a useful tool for creating interactive content without using Java or some other technique. As we have seen, problems arise out of the incompatibility between Navigator and IE. What are ways to avoid these problems?

  • Code for one browser only. This is not really a viable solution. While Netscape Navigator is the most popular browser in use today, Microsoft Internet Explorer is gaining quite a following. Using Navigator-only JavaScript will limit your web page’s audience.
  • Code only to the least common denominator. By limiting the JavaScript to what works in both Navigator and IE, developers will have access to most of the functionality they need. However, with this approach developers will need to be aware of the gotchas discussed in this article. In other words, the developer has to test with both browsers; the tests will have to run through a web server as well as locally; the tests should be repeated several times to ensure that no adverse effects show up over time; and the code should be tested in all possible environments.
  • Code for each browser within the same page. This is actually my favorite approach, though it requires more effort from the developer. Both browsers will process JavaScript labeled as <SCRIPT LANGUAGE="javascript">; however, only Navigator will process script labeled as <SCRIPT LANGUAGE="javascript1.1">. With this approach, the developer can place JavaScript common to both browsers in a SCRIPT tag labeled with "javascript" and Navigator-specific code in a SCRIPT tag labeled with "javascript1.1" (a minimal sketch follows this list). To add IE-specific code, the developer would probably want to use VBScript code or use the tag JScript whenever applicable. At this time, JScript is implemented for scripting with events only, not as a language specification for the SCRIPT tag.
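
A minimal sketch of the approach:

<SCRIPT LANGUAGE="javascript">
<!-- hide script from old browsers
// common code: processed by both Navigator and IE
var sBrowser = navigator.appName
// end hiding from old browsers -->
</SCRIPT>

<SCRIPT LANGUAGE="javascript1.1">
<!-- hide script from old browsers
// Navigator-only code: IE 3.0 skips this entire block because
// it does not recognize the "javascript1.1" language name
document.writeln("This document contains " +
        document.images.length + " images")
// end hiding from old browsers -->
</SCRIPT>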

Summary

JavaScript is a very effective tool, and a relatively simple method for adding interactive content to a web page. With version 3.0 of Internet Explorer, using JavaScript is viable for both of the most popular browsers in use. However, developers need to be aware of the differences between Netscape Navigator and Microsoft Internet Explorer and should test thoroughly with both before posting a page to their site.