Categories
People Places

Climbing Mt. Everest

I decided to spend some time looking at the history of climbing Mt. Everest because of a note I found at the PBS Nova Web site. It seems that a new expedition is heading to Mt. Everest, but the goal of this expedition is rather different from that of other expeditions.

You see, we know that Sir Edmund Hillary was the first man to reach the top of Mt. Everest in 1953…or was he? There is conjecture that he might not have been first, that early Mt. Everest pioneers George Mallory and Andrew Irvine may have beaten him to it. However, as these two climbers failed to return, all that we know of the success or failure of their bid to reach the summit is as clouded as the summit was on the day they were last seen.

However, there is a chance, a small chance, that the knowledge of Mallory’s and Irvine’s quest did not die with them. Mallory carried a camera with him, a camera that has never been found.

Just think of it! What if someone could find this camera and could develop the film, even after all these years, and the film showed that the summit was first reached not in 1953, but almost thirty years earlier, in 1924? What an incredible discovery. Regardless of whether this quest succeeds, the attempt to find the truth is a fitting tribute to these earlier climbers, who gave their lives for a quest of their own.

You can read more about the Mallory and Irvine research expedition at MountainZone’s Mallory & Irvine Research Expedition page. The expedition left in March, and daily updates are posted to the site from expedition members Dave Hahn and Eric Simonson.

Nova is also following this expedition with their own Web site titled Lost on Everest. Nova’s coverage begins April 27th, and also includes an awesome 360 degree image of what the view is like from the summit. If you don’t go for any other reason, you have to go to the site for this. It will take your breath away.

Accuracy is the key

At the same time that the expedition to find George Mallory’s camera is underway (on the north face of Mt. Everest, by the way), another expedition is on a quest of its own, though a completely different one. This is the Everest Millennium Expedition, and its purpose is to measure Mt. Everest.

At this time, the most accurate assessment we have of the height of Mt. Everest is 29,028 feet, or about five and a half miles up. The Millennium Expedition plans to use the latest technology, the Global Positioning System (GPS), to get the most accurate measurement yet.

Follow along with the expedition from MountainZone’s Everest South Side Expedition Web site. This also includes frequent updates from expedition members, as well as photos and other multimedia.

The GPS equipment is being operated for Brad Washburn, former Director of the Boston Museum of Science and well-known mountain photographer. The Museum of Science has an excellent exhibit of Everest photos and memorabilia, as well as a scale model of Everest.

National Geographic is also covering this expedition, and you can view its site at Everest: Measure of a Mountain. Do watch the opening intro; it is worth it.

How’s the Weather up there?

Two expeditions seeking knowledge are joined by a third trying to answer the age-old question: How’s the weather up there? The Weather Channel follows two MIT graduate students, Matthew Lau and Chris Metcalfe, as they join veteran researcher David Mencin to study weather on Mt. Everest, as well as place advanced telemetry at the peak for research.

The hope is that advances in the technology used for these instruments will allow them to function for a full year, not only serving as an invaluable resource for learning about weather patterns at the world’s highest peak, but also providing information that should help make climbing expeditions safer.

This expedition is covered at the Weather Channel’s Everest Web page. You can learn more about GPS at The NAVSTAR GPS homepage, or the University NAVSTAR Consortium.

 

Summary

The power of the Web is that folks like me, who will never climb something like Everest, can experience the next best thing by hearing from those who do and seeing the images they send back. My appreciation to them for giving me a glimpse of their quests.

Categories
Technology

Shopping Carts

Recently, someone, I’ll call him Joe, sent me an email and asked a question about maintaining shopping cart items using client-side cookies.

A shopping cart is basically a program that maintains a list of items that a person has ordered, a list that can be viewed and reviewed and usually modified.

Joe had run into a problem: browsers limit both the size and number of cookies that can be set from a Web application, and this limited the number of items that could be added to his company’s online store’s shopping carts. He asked whether there was a client-side JavaScript technique he could use that would update a file on the server instead of on the client, so that customers could add an unlimited number of items to their carts.

Instead of trying to maintain the company’s shopping cart using cookies, I recommended that Joe check out his Web server’s capabilities, and see what type of server-side scripting and applications his server supported. Then he could use this technology to maintain the shopping cart items. I also recommended that he limit the use of the client-side cookie to setting some form of session ID so that the connection between the cart items and the shopper was maintained.
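To make the recommendation concrete, here is a minimal sketch of the client-side half of that approach. It isn’t code from Joe’s application; the cookie name cartid, the 90-day lifetime, and the ID format are just assumptions for illustration. The server-side script, whatever technology it uses, would use this ID as the key under which the cart items are stored.

function getCartId() {
  // look for an existing session ID cookie (hypothetical name "cartid")
  var name = "cartid=";
  var cookies = document.cookie.split("; ");
  for (var i = 0; i < cookies.length; i++) {
    if (cookies[i].indexOf(name) == 0)
      return cookies[i].substring(name.length);
  }

  // none found: generate a simple (not cryptographically strong) ID
  var id = new Date().getTime() + "-" + Math.round(Math.random() * 100000);

  // give it a long lifetime so the cart can outlive the browser session
  var expires = new Date();
  expires.setTime(expires.getTime() + 90 * 24 * 60 * 60 * 1000);
  document.cookie = name + id + "; expires=" + expires.toGMTString() + "; path=/";
  return id;
}

With something like this in place, only the small ID travels in the cookie; the cart itself lives on the server, so the browser’s cookie limits never come into play.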

Shopping Cart Technology

Joe’s email did get me to thinking about how online stores use different techniques to maintain their shopping carts.

For instance, all of the stores I shop at, except one, use some form of a client-side cookie, but each store uses these cookies differently. Additionally, the stores use server-side techniques to support online shopping, though this support can differ considerably between the stores.

Client-side cookies were originally defined by Netscape for use in Navigator, though most browsers now support them. Cookies are small bits of information stored at the client that can be maintained for the length of a session, a day, a week, or even longer.

The use of client-side cookies is rigidly constrained in order to prevent security violations. You can turn off cookies in your browser, but be aware that cookies do not violate the security of your machine, and without them your online shopping will be severely curtailed.

This YASD article does a little technology snooping at four online shopping sites, looking at how each site uses a combination of server-side and client-side processing to maintain its carts.

Covered in this article are the shopping cart techniques used at the following online stores:

  • Amazon.com
  • Beyond.com
  • Catalogcity.com
  • Reel.com

Shop Til You Drop

To understand shopping cart maintenance, it’s important to understand customer shopping habits. We’ll start this article by looking at some online shopping habits.

First, online shopping has grown enormously in the last few years. Once folks found out that it was safer to send a credit card number over the Net at a secure site than to give it over a wireless phone, much of the hesitation about online shopping vanished.

What are the types of things that people buy online? Books, CDs, and Videos are popular, but so are kitchen utensils, computer hardware and software, photography equipment and film, food gifts, bathroom towels, and even irons.

People shop online because of necessity, convenience, and cost. We purchase books, CDs, and videos online because the online stores have a much larger selection than any local store could possibly have. We know that when we are looking for a book, even an out of print book, we will most likely be able to get the book from one of the online bookstores such as Amazon.

Some businesses we shop at, such as Dell, have no physical store location. This type of business provides service to its customers through mail or phone order only. Many of us prefer to use online shopping for these types of stores rather than having to call someone up and go through a long list of items, or manually fill out an order form, all the while hoping we put down the right item number. It is a whole lot simpler to look at an item and click on an image that says something like “Add to Shopping Cart”. An added benefit to online shopping is that we can review our order before it is sent, and can get a hard copy printout of the order for our records.

Normally, most of us only shop for a few items at a time, but the number of items can become large, especially with online grocery stores — a rising phenomenon. However, it isn’t unusual for us to add some items to our shopping cart one day, a couple more items another day, and so on, until we’re ready to actually place the order. At times we may even forget we have items in a shopping cart, until we add another item to the cart and the previous items show up.

We also change our mind at times, and may need to remove items from the shopping cart, or change the quantity of an item ordered. It’s also handy to have a running total for the order so we can make at least an attempt to stay within our budgets. If the shipping charges are also shown, that’s an added bonus.

Many of us may have more than one computer and may start a shopping session with a laptop and finish it at a desktop computer, though as you will see later in this article, this sometimes isn’t that easy. In addition, the use of both Netscape Navigator and Microsoft’s Internet Explorer on the same machine isn’t all that unusual for heavy Net users, and we may start a shopping cart with one browser and add to the cart from another browser.

Pulling requirements from these patterns of use, we come up with the following:

  • An item can be added to a shopping cart with the touch of a button
  • Shopping cart items need to persist for more than one shopping session
  • Some indication that there are items in the shopping cart should show, at least on the home page for the site
  • The store needs to provide a means to modify the shopping cart items, or to remove an item or all items
  • A running total needs to be maintained each time the shopping cart contents are reviewed
  • Showing standard shipping charges and other costs when the shopping cart is reviewed is an added bonus
  • The shopping cart needs to follow the client
  • Stores need to provide the ability to review an order before it is placed
  • Stores also need to provide the ability to print out the contents of the shopping cart
  • Shopping carts should support an indefinite number of items, or the number of items should be limited by company policy, not Web technology limitations.

A pretty good list of requirements. Now, how does each of the stores measure up?

To determine when cookies are used at each of the sites evaluated, I set my browsers to prompt me whenever an online store wants to set a cookie. Using this approach I can see what kind of cookies the store uses, and get an idea of each cookie’s purpose.

Amazon.com

Probably the undisputed king of online bookstores is Amazon.com. This company began as a pure Net-based business, and has shown the world that online commerce not only works, it works very well, thank you.

Amazon has one of the better store interfaces, and some of the best account and order maintenance setups, but does it meet all of our shopping cart requirements? Let’s check it out.

First, all items that Amazon sells can be added to the existing shopping cart with the touch of a button, even items that are on order but not yet in stock. In addition, the shopping cart contents will persist even if you leave the site and return at a later time. In fact, Amazon tells you that items will remain in the shopping cart for 90 days (if I read this correctly), a feature I found to be very nice.

Bonus technique: Let people know how long the shopping cart items will remain in the cart. The only surprise to pull on a customer is to let them know an item is on sale, or that they are the millionth customer and have won something. Anything else will lose you business.

Amazon also provides a feature to save the item for purchasing at a later time. This removes the item from the cart, but still keeps the item on a list for later purchase.

The shopping cart can be reviewed at any time, and there is an icon on every page that gives you easy access to the shopping cart. You can also modify the cart contents by changing the quantity of an item you’re ordering, or removing an item altogether.

Amazon makes use of standard HTML technology, so the shopping cart review page should print out fairly well. Unfortunately, the shopping cart does not display shipping charges and does not display a running total for the items. However, Amazon does show a total, including shipping, that you can review before you place the order. This page can also be printed out.

So far so good. Amazon has met most of our requirements, but the real test of Amazon’s supremacy in shopping cart technology is whether the cart can follow the customer. Unfortunately, the company does not support this capability.

When you access Amazon from a browser at a specific machine for the first time, Amazon sets an ID that is used to track your shopping cart items. Access Amazon from the same browser and the same machine, and you will get the same shopping cart items. However, if you access Amazon from another machine or even another browser, you will not get access to these shopping cart items.

Is it important to maintain shopping cart persistence from browser to browser, machine to machine? You bet it is.

I, like other folks involved with Web development and authoring, use both Navigator and IE. In addition, there are some sites geared more towards one of these browsers, so most folks who spend a lot of time on the Net have both browsers.

There are times when I am sure I have placed an item in the shopping cart, only to find out that I did, but with a different browser or from a different machine. This happens more often than I would like, and is an annoyance every time.

Now the online stores have to ask themselves the question: Are people like myself part of a customer base they want to attract? Think of this: Who is more likely to be purchasing items online than folks who spend a large amount of their time online? And who is likely to use more than one machine and more than one browser? Folks who spend a lot of time online.

To summarize, Amazon uses client-side cookies to establish a persistent ID between the customer and the shopping cart. The company also uses this ID to establish a connection from the customer to the customer’s account information. The shopping cart items, however, are maintained on the server, persist for 90 days, and there is no limit to the number of items that can be ordered at Amazon, at least as far as I could see. Where Amazon does not meet the requirements is by not providing a running total on the shopping cart review page, and by not providing a shopping cart that moves with the customer.

Based on the requirements met by Amazon, I give them a score of 8 out of 10 for their support of shopping cart techniques.

Beyond.com

Beyond.com sells computer software and hardware and is a Net-only based company.

Beyond.com maintains a client ID in client-side cookies, which is used to track shopping cart contents for the session only. Beyond.com does not persist the shopping cart contents outside of a specific browser session. Once you close the browser, the current shopping cart contents are gone.

In addition, it does look as if Beyond.com maintains the shopping cart totally within one client-side cookie, tagged with the name “shopcart”.

By maintaining the shopping cart on the client, Beyond.com has chosen one of the simplest approaches to maintain a shopping cart, and simple can be good. There is little or no burden on the server other than accessing the original item that is added to the cart. There is also less maintenance to this type of system, primarily because the Web developers and administrators do not have to worry about issues of storage of information on the server, or cleaning up shopping carts that become orphaned somehow. Additionally, Beyond.com is taking advantage of a client-side storage technique that is safe and simple to use.

However, there is a limitation with this approach in that the cookie is limited to a size of 4 kilobytes. It may seem that 4K is more than large enough to support a cart, but when you store information for each item such as item number, product name, version, price, store identification number, and quantity, you can reach the upper limit more quickly than you would think. Additionally, a limit is a limit, and you have to ask yourself if you really want to limit how many items a person can add to their shopping cart.
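To see how quickly that ceiling arrives, here’s a back-of-the-envelope sketch. Beyond.com’s actual record format isn’t visible from the outside, so the delimited layout and the overhead figures below are assumptions; the arithmetic is the point.

// one hypothetical cart record: item number, product name, version,
// price, store ID, and quantity, URL-encoded for storage in a cookie
var record = "10472|WidgetWriter%20Professional%20Edition%202.0|2.0|49.95|STORE01|1";

// room left in a single 4K cookie after the cookie name and expiry overhead
var roomLeft = 4096 - "shopcart=".length - 60;

// approximate number of items before the cookie is full
alert("Roughly " + Math.floor(roomLeft / (record.length + 1)) + " items fit");

With records of this size, the cart tops out at somewhere under 60 items, and that assumes the site needs no other cookies for the domain.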

Most online stores could probably get away with shopping carts that limit the number of items. After all, it might be a bit unusual to purchase 40 or 50 items from a software company at any one time.

If a store’s customers tend to purchase only a few items at a time, then it might not be cost effective to provide shopping cart technology that provides for virtually unlimited items.

Beyond.com also provides a quick look at the shopping basket from every page of the site. This quick view provides the first few letters of the cart item, the quantity ordered, and a running total for the cart. As much as I appreciate having this information, I found that I would have preferred having just the quantity of items in the shopping cart and the running total: listing all of the items actually became a distraction when I had more than a few.

Beyond.com uses standard HTML and the shopping cart page did print out using the browser’s print capability. In addition, you can review the order, including the shipping charges, before the order is fully placed.

To summarize, I liked Beyond.com’s support for shopping cart status display on the other Web pages. I also liked Beyond.com’s running total. The biggest negative to Beyond.com’s shopping cart was the lack of persistence outside of the browser session. I may not order more than 5 or 10 items at a time, but it isn’t unusual for me to add a couple of items at one time, close down the browser and return at a later time to add more items and finalize the order. In addition, it isn’t impossible for people to accidentally close down their browsers, which means they lose all of the items from their cart and have to start over. Based on the lack of persistence, I would have to give Beyond.com a 6 in shopping cart technology.

Catalogcity.com

CatalogCity is an interesting online business that presents the contents of catalogs from several mail order firms, allowing you to shop for everything from a new jacket to kitchen knives. Once you have placed your order for all of the items you’re interested in, CatalogCity then submits the orders for the individual items to the specific catalog company.

Of all the online shops I have seen, CatalogCity is one of the most clever. It provides both goods and a service, but without the hassle of maintaining inventories and supporting warehouses. I am sure that CatalogCity charges the listed catalog companies a fee for its services, but it is most likely more profitable for these companies not to have to deal with online ecommerce issues themselves. Even those catalog companies that have their own sites gain access, through CatalogCity, to people who are looking to shop but don’t necessarily know the catalog company’s name or Web site URL.

I do like to see effective and innovative uses of Web commerce. If I have a problem with the site, it is that not all of the catalog companies support online ordering through CatalogCity. For those, you can review the catalog and use the phone number provided to place your order, but it’s just not the same as one-button shopping.

CatalogCity uses cookies to set a customer ID the first time you access the site. However, after that, all information about the shopping cart is stored on the server. There is no indication on the pages that you have shopping cart items, but you can access the shopping cart from a well-placed icon on each site page.

The shopping cart page lists all of the items, provides the ability to change or remove an item, and provides a running total — sans shipping charges. It also provides a hypertext link from the item to the page where the item was originally listed, so you can review the item before making a final purchase.

The technology that CatalogCity uses for their shopping cart is fairly standard, so the cart page should print out easily. In addition, the company does provide the customer the ability to review the order, including shipping charges, before the order is placed.

The CatalogCity shopping cart is the most persistent of all of the shopping carts that I have seen. First, if you access the site but don’t set up an account, the cart will persist from browser session to browser session, but only with the same browser and machine. However, if you create an account with CatalogCity and sign in each time you access the site, the shopping cart will follow you from one browser to another, and from one machine to another. In fact, of all the sites I reviewed for this article, CatalogCity is the only one that provided this functionality.

To summarize the CatalogCity shopping cart technology, the company has provided the best support for shopping cart persistence of all the sites visited. In addition, the company provides easy access to the cart, and provides a running total on the shopping cart page. CatalogCity also gives you a chance to review and modify your order, as well as review the shipping charges, before the order is placed. About the only negative aspect I found with this site’s shopping cart technology is that the site does not indicate on its first page that the shopping cart has items in it. If CatalogCity had provided this, I would have given the site a score of 10, but I’ll have to give it a score of 9.

Reel.com

Reel.com is an online video store that sells new, and used, VHS and DVD videos. It has an excellent selection of movies and a nicely organized site.

Reel.com uses cookies to set a user ID when you first access the site. When you access a specific item, the site uses ASP pages, and ASP (Microsoft’s server-side technology) sets a cookie with a session ID when the first ASP page is accessed. After that, no other cookies are set. All shopping cart items are maintained on the server.

ASP, or Active Server Pages, was the technology that seemed to be used most at the different online stores. ASP technology originated with the release of Microsoft’s Internet Information Server (IIS), but has since been ported to other Web servers, and even to Unix, by a company called ChiliSoft.

ASP provides for both server-side scripting and server-side programming with ASP components. In addition, ASP provides full support for data access through Microsoft’s ActiveX Data Objects (ADO) technology.

One cookie that is set with ASP is the session ID. When you access an ASP site for the first time during a browser session, the server tries to set a session ID cookie to maintain a connection between your browser and the Web server. Without this, it is very difficult to maintain information about the session, such as shopping cart contents, from Web page to Web page.

Reel.com does not provide a running total for the purchases on the shopping cart page, and does not provide a visual indicator that items are in the shopping cart from any of the pages, except the shopping cart page. The company does provide easy text-based access to the shopping cart from each page and does allow you to change the quantity of an item ordered, as well as remove an item from the cart.

Reel.com provides shipping and full order information for the customer to review and modify before the order is placed, and the order page, as well as the shopping cart, can be printed out.

Reel.com does not provide persistence beyond the session. Once you close the browser, the shopping cart is gone.

To summarize, Reel.com did not score very high, meeting few of the requirements for a shopping cart. It didn’t provide a visual cue about shopping cart contents, at least on the first page of the site, nor did it provide a running total on the shopping cart page. The biggest negative, though, was that the site did not maintain the shopping cart persistently outside of the browser session. Reel.com did provide the ability to review and modify the order before it was placed, but based on the requirements met, I would have to give Reel.com only a 4 for shopping cart technology.

Summary

Four different online store sites, each using different techniques to support the site’s shopping cart.

All of the sites used cookies to establish a connection between the browser session and the shopping cart. In addition, each site provided shopping cart pages that could be printed out, and provided the ability for the customer to review and modify, or cancel, the order before it was placed.

The highest scorer in my evaluation is CatalogCity, primarily because of their support for persistence across different browsers and machines. This was followed by Amazon, which provided for browser and machine specific persistence.

Both Reel.com and Beyond.com dropped drastically in points because of their lack of shopping cart persistence in any form. However, Beyond.com did provide feedback about shopping cart contents, something that CatalogCity and Amazon did not. Beyond.com may want to consider dropping its line-item display of the shopping cart, as this can be distracting. Beyond.com was also the only online store maintaining its shopping cart in client-side cookies. While this approach has the advantage of being the quickest and simplest way to display the shopping cart contents when the customer wants to review them, I still don’t see it as the way to go for online shopping.

If we could take CatalogCity’s persistence and add to it a running total with estimated shipping charges, plus feedback on the cart contents on at least the home page of the site, we would have, in my opinion, the perfect shopping cart.

The next article in the Technology Snoop series will be on order processing and order status maintenance. In it you’ll see that Amazon shines. You’ll also read about how not to win customers, and how to lose the ones you have, when I add some new online stores to the review.

Categories
Weather

The Ice Storm

Recovered from the Wayback Machine.

In 1997 I woke up with a new decision: I wanted to move to Vermont. I talked this over with my now ex-husband and we chatted and thought about it and decided that yes, we would move to Vermont from our home in Portland, Oregon; and move we did, in May of 1997 to a small farm located in Grand Isle, in the middle of Lake Champlain.

Photo of ice storm, courtesy of NOAA

Our first winter was pure Currier and Ives, from soft snow covering the yard, to spotting our first cardinal, to watching a red fox hunting in the back yard. Snow fell around Halloween and stayed from that point on, but it wasn’t a problem getting around. Until January, that is.

It started as the oddest sort of rain, and we walked out into the yard to see it up close. We’d seen freezing rain in Oregon, but not like this, and not so heavy. A short time outside left us glazed, covered in a thin layer of transparent ice, like jelly beans after getting their confectioner’s coating.

We, very carefully, made our way back inside, and spent the evening looking at the ice building up on the deck, the power lines, and the trees dotting our property. Reflected in the outside light, the ice was beautiful, but we also noticed that the power lines were hanging much lower than normal, and the tree branches seemed to be dipping too low to the ground. However, there’s not much you can do about an ice storm, so we went to bed.

In the middle of the night a large crash woke us. We ran to the bedroom window, opened it, and looked outside to see what was happening. Another crashing sound came from a stand of trees that stood between us and one of our neighbors. Another crash sounded, and another, and another, and so on into the night, as tree branches, weighed down by the ice, broke and fell to the ground.

Nature had created an invisible monster that was destroying the trees around our house, and this monster worked tirelessly through the night. The next morning dawned cold and clear, and most of the trees around us were ruined. Overhead a group of National Guard helicopters flew by, the only thing moving anywhere.

You would have thought we had been at war. Instead, this was the aftermath of the Great Ice storm of 1998.


What causes Freezing Rain and Ice Storms

Particles of moisture are present in all clouds, but they don’t result in any form of precipitation until the particles become too large and heavy, the state of equilibrium they existed in is upset, and rain (or some other form of precipitation) is the result.

Freezing rain is a specific form of precipitation, as are rain, drizzle, sleet, and hail. It is the result of water particles falling as snow, then encountering a layer of warm air where the particles melt into rain. However, before the rain hits the ground, it passes through a layer of cold air just above the surface. The rain is cooled, forming supercooled drops: drops that are cooled to below freezing but are not frozen.

When these supercooled drops hit the ground, they spread out and freeze instantly, forming a thin film of ice.

There are actually two forms of this supercooled precipitation: freezing rain results from droplets 0.5 millimeters or larger in size; freezing drizzle forms from droplets smaller than 0.5 millimeters. Both types, however, can be deadly.

Even when very thin, freezing rain, also known as silver thaw or glaze, is dangerous; you’ve experienced this yourself if you’ve ever tried to walk on a surface covered with the ice, or worse, tried to drive on it. But the real danger with freezing rain is the build-up of ice on trees and power lines. The weight of this ice is enormous, and can cause tree limbs to break, fell large trees, and cause power lines to sag and snap. A severe ice storm can be as deadly as a hurricane, though usually not as devastating to property.

Why was the Ice Storm of 1998 unique?

The Northeast does get hit with freezing rain in the winter, but the storm that occurred in January 1998 was unique. Looking at the storm from the perspective of its impact in Canada, the precipitation totaled 100 mm (about 4 inches) in Montreal. The last big ice storm in Canada occurred in 1961, and that one dropped only about 30-40 mm of ice. This is a significant difference. Think of 4 inches of ice coating a tree branch or street or power line, as compared to ice an inch or so thick.

Add to this the extreme cold in the northern reaches of North America in January, and the loss of power. If you don’t have a wood stove, you can freeze to death; several people did die of hypothermia, on our island as well as elsewhere in the US and Canada. Others died from falling limbs and poles, or from carbon monoxide poisoning from using kerosene stoves in areas not adequately ventilated. More than 25 deaths were blamed on this storm.

All Storms End

The National Guard, power crews, and other public workers were outstanding, seemingly working for days without taking a break to re-establish power lines, check to make sure people had firewood, or evacuate those folks in need of help. We even had crews come in from Hawaii.

The job of restoring power was enormous, with millions without power, then hundreds of thousands, then thousands, and finally, weeks later, down to the last few hundred.

As power was restored, we were asked to put signs in our yards if we saw that a neighbor had power and we didn’t, so that crews could tell where work still needed to be done.

During this time, we picked up broken branches and piled them by the side of the road for hauling — lots and lots and lots of broken branches, and some pretty big piles. Luckily, the damaged trees could wait until spring to be pruned, with splintered ends sawed neatly off and treated to prevent pests from further damaging the trees.

We, my husband and I and our cat, Zoe, were also lucky, in that we were without power for only three days. Additionally, we had a wood stove that could heat about half the state, and which could also be used for cooking. I even had my laptop and could check my email once a day (we did have phone service). This and our books and the solar powered radio (if you don’t own one, buy one) got us by.

Others did not fare as well as we did, and went for weeks without power, living in shelters, or trying to make do in their own homes.

My husband and I thought that this event would finally break through the reserve of our neighbors, as we stopped by several of their homes, offering to pick up supplies for them when we made a run into town as soon as the roads were passable. And the folks did seem grateful. But as soon as the ice cleared, they went back to being islanders, and we went back to being outsiders.

Seems only certain types of ice thaw.

Categories
Web Writing

Dynamic Web Publishing Unleashed: Chapter 37 – the Future of Web Publishing

Recovered from the Wayback Machine.

IN THIS CHAPTER

  • The Chaos of Change
  • The Current Technology Platform
  • A Summary Review of the Technologies Covered in the Book–Where Will They Be in a Year?
  • Client Scripting and CSS1 Positioning

With an industry that seems to go through a new revolutionary change at least four times a year, it’s hard to predict where Web publishing will be in four months, much less the next couple of years. However, taking a look at existing technologies and their state and at some new technologies can give us a peek through the door to the future, even though the crack we are peeking through might be pretty small.

First, many of the technologies covered in this book existed two years ago and will continue to be around two years from now. That means Java, which was introduced in 1995, HTML, the scripting techniques, and the basic Web page, which will continue to consist mainly of text and an occasional image. It’s hard to say whether some new and incredibly different Web page development technique or tool will appear in the next two years, but regardless, people will also continue to use what is available now. In fact, the technologies introduced this year, such as Dynamic HTML and CSS1, will become more familiar in 1998, and will only begin to become mainstream technology in mid- to late 1998.

Web servers don’t seem to increase in technical capability exponentially the way Web page technology does. The real key to server technology is fast, reliable, and secure Web content access. The servers will become faster, hopefully more reliable, and the security should grow to meet the increasing demands placed on these servers to support commercial transactions. Additionally, there are new methods–particularly in the realm of commerce–that will work with existing server technology. Those are discussed in this chapter.

There are new technologies that have barely begun to be explored this year. Channels and push technology started out with a bang and nearly ended with a whimper. Web consumers just didn’t buy into the new technology. However, with the built-in channel capability Netscape Navigator and Microsoft’s Internet Explorer have now, and with simpler channel development processes, channels can be considered down, but not out.

The real key to the future rests with standards as much as with implementation. The Document Object Model (DOM) working group of the W3C should present a first working draft of DOM by the end of 1997. DOM covers which HTML elements are exposed and, to an extent, in what way these elements are exposed, what the standard properties and events are, and how these elements relate to each other. If HTML doesn’t meet your needs, just wait–XML describes how to define your own markup elements, creating elements such as <BUTCHER> or <BAKER> or, yes, even <CANDLESTICK_MAKER>. This chapter closes with a review of the new technologies currently under review and design.

The Chaos of Change

Sometimes you might feel you have to spend 24 hours a day just to keep up with the technology being released. Needless to say, this is both frustrating and discouraging.

Web development does seem, most of the time, as if it undergoes a revolution in technology every three months, even though any one specific aspect of the technology usually changes only about once per year. However, preceding the release of the changed technology is a period when the technology is being reviewed, or the product is being beta-tested, or some other form of pre-release activity is occurring. Then the standard or technology or product is released, and there is a period of comparing it with its older version, or with other products. Then you have to spend some time learning how the new technology works, how to migrate older pages or applications, or checking to see whether existing pages or applications break with the new release. Finally, just when you think you are starting to become comfortable with the new or modified technology, the company or organization announces the next release of whatever the product or standard is.

Consider also that Web development is made up of several different technologies, including browsers, standards, embedded object technology, and server technology. Putting all of these aspects in one category–“Web Development”–and considering the multiple-phase delivery of most Web technology, provides what seems to be continuous change.

As an example, in 1997 it probably seemed as if a new browser were being released every quarter. What actually happened is that there were minor bug fix releases of Netscape Navigator 3.x and Internet Explorer 3.x at the beginning of the year, and Netscape also released several different beta versions of Navigator 4.0 before it actually released Navigator 4.0. Since the release, there have been several enhancement and bug fix releases of Navigator 4.0.

Microsoft also released two major betas of Internet Explorer 4.0 and released the final version about the time this book went to editing. There will likely be enhancement and bug fix releases for IE 4.0 before the year is out.

Add the international releases to these others, and you have a browser release on average about every three weeks, not every three months.

Also consider that browser manufacturers themselves are at the mercy of the release of new standards or new versions of existing standards. The year 1997 saw the beginning of the DOM effort, a new version of the HTML specification, HTML 4.0, the rise in interest in XML, the passage of the ECMA standard for scripting, ECMAScript, and the recommendation of CSS1 for Web page presentation. And these are only some of the standards that impact browsers.

So, how do the browser manufacturers cope with the changing standards? The same way you can cope with all of the other changing technology: First, you define your Web development and Web client platforms. You determine what technologies make up each, including versions, concentrate on these technologies, complete the effort you planned for the defined platform, and then, and only then, begin to plan your next Web platforms.

The Current Technology Platform

For many companies and individual users, the current technology platform consists of Netscape Navigator 3.x or Internet Explorer 3.x for the browser, and Apache 1.2, Netscape Enterprise Server 2.0 or 3.0, O’Reilly’s WebSite Pro, or Microsoft’s Internet Information Server 2.0 or 3.0 for the server.

Most Web pages contain a combination of text and images, and most of the images are static. Many sites use some form of scripting for page interaction, most likely a form of JavaScript. HTML tables are used to handle the layout of HTML elements, as shown in Figure 37.1.

As you can see from the figure, you can actually create a fairly organized page using HTML tables. The page also uses the font element to color the sidebar text white; the color attributes of the table header and contents are used to set the header to red and the contents to yellow.

Animation in a page occurs through the use of Java applets, animated GIFs, Netscape style plug-ins, or ActiveX controls.

The version of Java used with most applets is based on the first release of the JDK–JDK 1.0.

Server-side applications are used to create dynamic Web pages, to present database information, or to process information returned from the Web page reader.

A Summary Review of the Technologies Covered in the Book–Where Will They Be in a Year?

A good rule of thumb when working with content for multiple versions of a tool is to support the currently released product in addition to one previous release. Based on this, you can count on supporting pages that work with Netscape Navigator 3.x and 4.x, and Internet Explorer 3.x and 4.x. As both browser companies begin the rounds of creating version 5.0 of their respective products, the business world will be cautiously upgrading pages to work with the new 4.0 technologies, particularly CSS1, HTML 4.0, and Dynamic HTML. By the time they have made the move to 4.0 technology, the 5.0 release of the browsers should be close to hitting the street.

The browser companies themselves probably follow a similar reasoning in that they support a specific number of versions of a standard, such as HTML, before they begin to drop deprecated content from earlier releases of HTML.

Standards organizations rarely release more than one recommended version of a standard each year. Sometimes they might go longer than a year before a new release, rarely less than a year.

Based on this, the technology you read about in this book should be viable for two years after publication of the book, which takes the Web into the year 2000.

The following sections look at each of the discussed technologies, with an eye on where each is likely to be on the eve of 2000.

HTML 4.0, CSS1

To start with the basics, the foundation of Web publishing is HTML, and this technology was explored in the first part of the book. The first part of the book also looked at Cascading Style Sheets (CSS1) and Dynamic HTML. Dynamic HTML’s future is covered in the next section.

As this book goes to press, HTML 4.0 is the version of HTML currently under draft review. This version provides for increased form and table support, deprecates several existing elements, and adds a few new element types and several new attributes, such as intrinsic events.

HTML 4.0 should become a recommended specification at the end of 1997. Currently, Microsoft has incorporated the HTML 4.0 draft specifications into the first release of IE 4.0, and Netscape has promised to adhere to the standard after it becomes a recommendation. Based on this, any changes to the HTML 4.0 draft will probably result in a minor revision release for IE 4.0. However, the HTML 4.0 changes for Netscape Navigator will probably be extensive enough for the company to incorporate these changes in the next release of Navigator, version 5.0. Following the Navigator 4.0 release schedule, you should begin to see the early beta releases of Navigator 5.0 in the spring of 1998.

No new activity is occurring with the CSS1 standard at this time, at least not in its impact on Web page presentation. Additional style sheet specifications are underway for speech synthesizers (ACSS, or Aural Cascading Style Sheets), and a printing extension for CSS is in the works, allowing for accurate page printing. This is in addition to CSS-P, or Cascading Style Sheet Positioning.

At this time, tools that process, generate, or incorporate CSS1 in some form include HoTMetaL Pro 4.0 from SoftQuad (http://www.softquad.com), Microsoft’s FrontPage 98, Xanthus Internet Writer (http://www.xanthus.se/), Symposia Doc+ 3.0 from GRIF (http://www.grif.fr/prod/symposia/docplus.html), PageSpinner for the Mac (http://www.algonet.se/~optima/pagespinner.html), and others.

In the next year or two, Web pages will begin to incorporate CSS1 and HTML 4.0, though Netscape Navigator 3.x has a base of users wide enough to prevent most companies from using only CSS1 and HTML 4.0 to create Web pages. However, both of the major browser vendors have promised support for these standards, and many Web page generation tools will integrate them into their new versions. As these new tools are appearing as beta releases now, they should all be released as products in 1998. By 1999, most companies that want to control the presentation of their Web pages should be using at least some form of CSS1, and beginning the process of removing deprecated HTML elements from their pages.

You can keep up with the standards for HTML at http://www.w3.org/TR/WD-html40/. The standard for CSS1 can be found at http://www.w3.org/Style/.

Dynamic HTML and DOM

In 1997, with the release of version 4 of their browsers, both Netscape and Microsoft provided support for the first time for Dynamic HTML. Dynamic HTML is the dynamic modification and positioning of HTML elements after a Web page loads.

Dynamic HTML is a great concept and badly needed for Web development. With this new technology you can layer HTML elements, hide them, change their colors, their sizes, even change the elements’ contents. That’s the good news. The bad news is that Netscape and Microsoft have implemented different versions of Dynamic HTML–differences that are a little awkward to work with at best, and conflicting at worst.

Neither Netscape nor Microsoft has implemented a broken version of Dynamic HTML. When Netscape published Navigator 3.0 and exposed HTML images to scripting access, there was a great deal of discussion about Microsoft’s “broken” implementation of JavaScript 1.1, the version of JavaScript also published with Navigator 3.0. However, Internet Explorer 3.x was not broken; it simply did not implement the same scripting object model as Navigator 3.x. Now, with IE 4.0 and Navigator 4.x, the scripting object models are even more disparate, making it difficult to create Dynamic HTML effects that work equally well with both browsers.

The solution to this problem could be found with the Document Object Model standardization effort currently underway with the W3C.

According to the W3C, the DOM defines an interface that exposes content, structure, and document style to processing, regardless of either the language used or the platform on which the DOM application resides. The functionality of Internet Explorer 3.0 and Netscape Navigator 3.0 is defined by the W3C to be level “zero” of the standard. You can assume this means the functionality that both of these browsers support, which would not include images.

At this time, the DOM working group has produced a requirements document, which includes items such as those in the following list:

  • All document content, elements, and element attributes are programmatically accessible and can be manipulated. This means that you can use script to alter the color of header text, or dynamically alter the margins of the document.
  • All document content can be queried, with built-in functions such as get first or get next.
  • Elements can be removed or added dynamically.
  • All elements can generate events, and user interactions can be trapped and handled within the event model.
  • Style sheets can be dynamically added or removed from a page, and style sheet rules can be added, deleted, or modified.

This list is just a sampling of the requirements for the DOM specification, but it is enough to see that when the DOM specification becomes a recommendation, the days of the static and unchanging Web page will soon be over.
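To get a feel for what this kind of programmatic access looks like in practice, here is a small sketch written against Internet Explorer 4.0’s document.all collection. This is not the DOM itself, which has no published draft yet; it is simply the closest thing currently shipping, and it illustrates the first requirement in the list. The element ID pageHeader is hypothetical.

// A sketch, using IE 4.0's object model rather than the (unfinished) DOM,
// of script altering header color and document margins on the fly.
function restyleDocument() {
  var header = document.all["pageHeader"];   // look up an element by ID
  header.style.color = "red";                // alter the header text color
  document.body.style.marginLeft = "1in";    // alter the document's left margin
}

Whatever the final DOM looks like, the promise of the requirements list is that this sort of manipulation will work the same way in every conforming browser.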

To see more about DOM, check out the DOM working page at http://www.w3.org/MarkUp/DOM/.

Client Scripting and CSS1 Positioning

Excluding the objects exposed to scripting access as members of each browser’s scripting object model, there aren’t that many differences between Netscape’s implementation of JavaScript and Microsoft’s implementation of JavaScript.

Scripting will continue to be used in the years to come; hopefully, the language will not become so complicated that the ease of use of scripting languages begins to diminish.

Again, the major impact on scripting occurs with the elements that become exposed by the DOM effort. However, this is not a guarantee that the same script block written for Netscape’s Navigator will work with Microsoft’s Internet Explorer.

Consider each browser’s implementation of dynamic CSS1 positioning. First, both companies support CSS1 positioning, a draft recommendation actually created by both companies. This standard provides style sheet attributes that control an element’s width, height, z-order (the element’s position in the stack if elements are layered), and the location of the element’s left and top sides. The standard also provides attributes to control the visibility of the object and the clipping area.

Figure 37.2 shows how well CSS1 positioning works by showing a Web page using this technology, opened in both IE 4.0 and Navigator 4.0. Note how the text aligns directly on top of the image (yes, the text and images are separate elements), and that the images are aligned in a vertical line along the left side of the page, without using an HTML table for layout control.

The example in Figure 37.2 is discussed in Chapter 8, “Advanced Layout and Positioning with Style Sheets,” and is located in the file images3.htm at this book’s Companion Web Site.

Figure 37.2. Using CSS1 positioning to control the layout of text and images.

While statically positioning elements using CSS1 positioning works equally well with both browsers, dynamic positioning does not. Both browsers can create the same effects, but they use different techniques, as the sketch below shows. Considering that standards usually define an effect or behavior but don’t necessarily define a specific technique, you probably won’t be seeing consistent scripting of HTML elements in the next couple of years.
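As a quick illustration, here is a minimal sketch of moving a positioned element in both object models. It assumes an element (a layer, in Navigator’s terms) with the hypothetical name menu that has already been positioned with CSS-P.

// Moving the same CSS-P positioned element requires two code paths.
function moveMenu(x, y) {
  if (document.layers) {                       // Navigator 4.x object model
    document.layers["menu"].left = x;
    document.layers["menu"].top = y;
  }
  else if (document.all) {                     // Internet Explorer 4.x object model
    document.all["menu"].style.pixelLeft = x;
    document.all["menu"].style.pixelTop = y;
  }
}

The effect is identical in both browsers; only the path to the element and the property names differ, which is exactly the sort of divergence a finished DOM should remove.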

Java

As this is being written, Sun is on the verge of releasing JDK 1.2, Netscape just created a minor release to cover most of the classes released with the JDK 1.1, and Microsoft also supports JDK 1.1 in its release of IE 4.0.

The use of JavaBeans–Java components that can be packaged, distributed, and used and reused in applications–is among the technologies supported with JDK 1.1. It’s a very good idea and one that has already achieved popularity among Java developers.

However, not all is positive in Java’s future, particularly when used with browsers. The browser companies are usually one version release behind the current Java class releases. That is not a problem. What is a problem is a situation that may make creating cross-browser applets virtually impossible.

The difficulties with the original release of Java had to do with the Abstract Windowing Toolkit, or AWT, classes. For the most part, interface development in Java was difficult and lacked sophistication. To resolve this, both Microsoft and Netscape began work on interface classes, called the Application Foundation Classes (AFC) by Microsoft and the Internet Foundation Classes (IFC) by Netscape.

Netscape joined Sun and combined Sun’s current efforts with its own IFC library to create the basis for the Java Foundation Classes (JFC), due to be released with JDK 1.2. However, Microsoft had also spent considerable time on its own framework classes. At this time, the end result is Netscape and Sun supporting one set of classes and Microsoft supporting another.

To add to the problem, Sun also submitted Java to ISO (the International Organization for Standardization) to become a standardized language. Sun also asked to be designated a Publicly Available Submitter (PAS), the group responsible for developing and maintaining the specification. At this time, the ISO working group, JTC 1, has voted against the Sun recommendation, with comments. Sun’s response, in effect, is that it will pull the language from ISO and treat it as a de facto standard, meaning that the company will retain control.

This is not a situation that is guaranteed to increase business confidence in the language. Add this to the difficulty of creating applets using any kind of framework, having the applet work with both IE and Navigator, and the increased sophistication of Dynamic HTML, and you may be in for a future decline of Java use for applets.

ActiveX

The new and exciting technology addition to ActiveX is DirectAnimation, DirectX technology extended for use with Java applets, controls, or scripting.

Being able to create ActiveX controls fairly easily using a variety of tools should lead to an increased popularity of these controls with companies whose intranets use Internet Explorer. The downside with the technology is that it is proprietary.

However, Microsoft also released several new filters that were originally ActiveX controls but are now built in as style attributes. These filters can change the transparency of a Web page element, make a line of text wavy, or add pinlights to a page. This technology is so much fun and so simple to use that there may well be demand to add these capabilities to the DOM once it is released.

With this technology you can create rollover effects for menu images without the extra download of the rollover effect image.
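As a rough sketch of what this looks like (IE 4.x only, since the filters are proprietary), the following fades a menu image on mouseover using the alpha filter instead of swapping in a second image. The image ID menuHome and the event wiring are hypothetical.

// Rollover effect without a second image, using IE 4.x visual filters.
// Wired into the page with something like:
//   <IMG ID="menuHome" SRC="home.gif"
//        onmouseover="dimImage()" onmouseout="restoreImage()">
function dimImage() {
  document.all["menuHome"].style.filter = "alpha(opacity=50)";  // fade the image
}
function restoreImage() {
  document.all["menuHome"].style.filter = "";                   // back to normal
}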

CGI and Server-Side Applications

Server-side technology is already at a point where most companies’ programming needs are met. CGI is still an effective server application technique and still works with most Web servers. If your company uses Internet Information Server, Active Server Pages is still a viable application option, just as LiveWire is for Netscape’s Enterprise Server.

One change you may see more of is the use of CORBA/COM technology and distributed processing, with Web servers acting as one hub within a distributed network. Browsers might become “interface containers” rather than Web page processing tools. With the increased sophistication of Dynamic HTML, it won’t be long before you might be creating a Web page as the front end for a company application, in addition to using tools such as Visual Basic or PowerBuilder.

VRML

VRML is a wonderful idea that’s still looking for that killer application to take it out of the artist’s realm and plunk it directly into business.

Consider VRML’s concept, which is that you send a simple text file to a VRML-capable reader, which then renders the text contents into incredible, interactive 3D “worlds.” This is Internet technology at its best, as you have seen already with HTML, and will probably see with XML.

With VRML 2.0, the Living Worlds specification, and the capability to integrate Web page scripting with VRML worlds, you are going to see more of this technology in use, for store catalogs, Web site maps, educational tools, and yes, even to play games and have a little fun.

XML and Channels

Neither XML nor channel technology, which are related, has found a niche yet, but with the release of CDF technology from Microsoft and Netcaster from Netscape, this should change.

The concept of push technology started with a bang at PointCast’s release, and almost disappeared without even a whimper–a case of too much hype and not enough efficient technology. In addition, the channel content just wasn’t there.

The entry of Netscape and Microsoft into the channel technology can only boost the use of this technology. Already, there is an increased number of companies providing channels. Add in those companies that are using the Marimba Castanet technology, and you should see an increased number of channels from diverse Web sites in the next year.

XML is the Extensible Markup Language standard, which basically adds the concept of extending Web page markup to include new elements–elements related to the company’s business, based on some topic, or even packaged for reuse.

Microsoft and Marimba have proposed the use of CDF (Channel Definition Format), which is based on XML, for use with channel technology. Apple has used a precursor of XML to create automatically generated 3D Web site maps that your reader can traverse to determine which page to access.

You can read more about XML at the W3C site at http://www.w3.org/XML/. You can read more about Microsoft’s and Netscape’s implementations at their respective sites, or in Chapter 34, “XML.”

Summary

Where will you be in the future of Web development? Not all that far from where you are now. New technologies seem as if they are popping up all the time–but they need time to get absorbed into the mainstream of Web sites.

One technique for beginning to incorporate the new technologies is to create a special set of pages for your Web site–not your main pages, but pages that demonstrate a product or idea using the new technologies. The set of pages shown in Figures 37.3, 37.4, and 37.5 is viewable only in Netscape Navigator 4.0 and Internet Explorer 4.0; these are additional pages on the site, while the site itself uses mostly mainstream technology: some CSS1, scripting, a little CSS1 positioning, HTML tables, and plain text.

The Web pages use Dynamic HTML to hide and display content, as well as to hide and display the menu. They also use CSS1 positioning to lay out the HTML elements.

The technologies discussed in this book are effective today, and will be effective into the year 2000. Best of all, they won’t break on New Year’s Day, 2000.

Categories
JavaScript

Adding Persistence to DHTML Effects

Originally published at Netscape Enterprise Developer, now archived at the Wayback Machine

Dynamic HTML (DHTML), implemented in Netscape Navigator 4.x and Microsoft Internet Explorer 4.x, gives the Web page visitor the ability to alter the position, format, or visibility of an HTML element. However, the effects created with DHTML are transitory: the next time the page is refreshed, the current DHTML state is not maintained, and the page opens with the same content layout it had when first loaded. Any changes to the content based on DHTML are lost. Sometimes this isn’t a problem, but other times it is irritating to the reader. This article covers how to add persistence to a DHTML page. The examples should work with all forms of Netscape Navigator 4.x and Internet Explorer 4.x, but have only been tested with IE 4.x and Netscape Navigator 4.04 in Windows 95.

With DHTML, developers can provide functionality that hides or shows whichever layer the Web page reader wants to view. Developers can also add functionality that lets readers move content. The problem is that DHTML effects are not session persistent, which means they aren’t maintained between page reloads. Following a link to another site and returning to the page can trigger a reload, which wipes out the effect and annoys the user, especially if he or she has spent considerable effort achieving the current DHTML state.

For example, you can use DHTML to let your reader position your page contents for maximum visibility in her browser and get the page looking just right. If she decides to leave your site to check Yahoo’s news for a few minutes and then comes back, her settings will have been erased.

So, what’s the solution to these problems? It’s relatively simple — add document persistence using Netscape-style cookies.

Using cookies 
Despite the name, you won’t find Netscape-style cookies at your local bakery. Cookies are bits of information, in the form of name-value pairs, that are stored on the client’s machine for a set period of time or until the browser is closed. For security, cookies are created and accessed using a specific syntax; the browser also limits storage to 300 cookies total within the cookie database, at 4K per cookie, and no more than 20 cookies per domain (the domain where the cookie was set).

Cookie syntax consists of a string with URL characters encoded, and may include an expiration date in the format:

Wdy, DD-Mon-YY HH:MM:SS GMT
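
As an illustration only (this snippet is not part of the article’s code; the cookie name and one-day lifetime are arbitrary), a persistent cookie with an encoded value and an expiration date might be set like this, using toGMTString to produce a date in an accepted form:

// illustration: set a cookie that persists for roughly one day
var expires = new Date();
expires.setTime(expires.getTime() + 24 * 60 * 60 * 1000);
document.cookie = "current_layer=" + escape("2") +
   "; expires=" + expires.toGMTString();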

Netscape has a complete writeup on cookies (see our Resource section at the end), including functions that can be copied and used to set and get cookies. I created modified versions of these functions to support my DHTML effects. As I don’t want to add to overly burdened cookie files, I don’t set the expiration date, which means the cookie does not get stored in the persistent cookie database or file. This also means that the DHTML effect only lasts for the browser session. However, this fits my needs of maintaining a persistent DHTML state in those cases where the reader moves to another site or hits the reload button.

DHTML state cookie functions
I created a JavaScript source code file called cookies.js that has two functions. One function sets the cookie by assigning the value to the document.cookie object. More than one cookie can be set in this manner, as cookie storage is not destructive — setting another cookie does not overwrite existing cookies, it only adds to the existing cookie storage for the URL. The cookie setting function is shown in the following code block:

// Set cookie name/value
//
function set_cookie(name, value) {
   document.cookie = name + "=" + escape(value);
}

Next, the function to get the cookie accesses the document.cookie object and checks for a cookie with a given name. If found, the value associated with the cookie name is returned, otherwise an empty string is returned. The code for this function is:

// Get cookie given name
//
function get_cookie(Name) {
  var search = Name + "=";
  var returnvalue = "";
  var offset, end;
  if (document.cookie.length > 0) {
    offset = document.cookie.indexOf(search);
    if (offset != -1) { // if cookie exists
      // set index of beginning of value
      offset += search.length;
      // set index of end of cookie value
      end = document.cookie.indexOf(";", offset);
      if (end == -1)
        end = document.cookie.length;
      returnvalue = unescape(document.cookie.substring(offset, end));
      }
    }
  return returnvalue;
}
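
As a quick check (not part of the original functions), the two can be exercised together; note that get_cookie always returns a string, which matters if the value is later used in arithmetic:

// quick usage check: cookie values come back as strings
set_cookie("current_layer", 2);
var saved = get_cookie("current_layer");   // "2" as a string, or "" if not set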

That’s it to set and get cookie values. The next section shows how to create two Web pages with simple, cross-browser DHTML effects. Then each page is modified to include the use of cookies to maintain DHTML state.

Creating the DHTML Web pages
To demonstrate how to add persistence to DHTML pages, I created two Web pages implementing simple DHTML effects. The first page layers content and then hides and shows the layers based on Web-page reader mouse clicks. The second page has a form with two fields, one for setting an element’s top position and one for setting an element’s left position. Changing the value in either field and clicking an associated button changes the position of a specific HTML element.

Adding DHTML persistence for layered content
For the first demonstration page, a style sheet was added to the top of the page that defines positioning for all DIV blocks, formatting for an H1 header nested within a DIV block, and three named style sheet classes. The complete style sheet is shown here:

<STYLE type="text/css">
        DIV { position:absolute; left: 50; top: 100; width: 600 }
        DIV H1 { font-size: 48pt; font-family: Arial }
        .layer1 { color: blue }
        .layer2 { color: red }
        .layer3 { color: green }
</STYLE>

Next, three DIV blocks are used to enclose three layers, each given the same position within the Web page. Each block also contains a header (H1), with each header given a different style-sheet style class. The HTML for these objects is:

<DIV id="layer1">
<H1 class="layer1">LAYER--BLUE</H1>
</DIV>
<DIV id="layer2" style="visibility:hidden">
<H1 class="layer2">LAYER--RED</H1>
</DIV>
<DIV id="layer3" style="visibility:hidden">
<H1 class="layer3">LAYER--GREEN</H1>
</DIV>

That’s it for the page contents. To animate the page, I created a JavaScript block that contains a global variable and two functions. The global variable maintains the number of the layer currently visible. The first function is called cycle_layer; this function determines which layer is to be hidden and which is shown next, and then calls a function that performs the DHTML effect:

<SCRIPT language="javascript1.2">
<!--

// current layer counter
current_layer = 1;

// assign document clicks to function pointer
document.onclick = cycle_layer;

// cycle through layers
function cycle_layer() {
   var next_layer;
   if (current_layer == 3)
        next_layer = 1;
   else
        next_layer = current_layer + 1;
   switch_layers(current_layer, next_layer);
   current_layer = next_layer;
}

The scripting block also assigns the function cycle_layer as the event-handler function for all mouse clicks that occur within the document page. By doing this, clicking anywhere in the document area that doesn’t contain other content results in a call to the function, changing the layer.

The switch_layers function is the DHTML effects function, and it uses the most uncomplicated technique to handle cross-browser differences: it checks for browser type and then runs code specific to the browser. Other techniques can be used to handle cross-browser differences, but these are outside of the scope of this article. All that the function does is hide the current layer and show the next layer in the cycle, as shown here:

// hide old layer, show new
function switch_layers(oldlayer, newlayer) {
   if (navigator.appName == "Microsoft Internet Explorer") {
        document.all.item("layer" + oldlayer).style.visibility = "hidden";
        document.all.item("layer" + newlayer).style.visibility = "inherit";
        }
   else {
        document.layers["layer" + oldlayer].visibility = "hidden";
        document.layers["layer" + newlayer].visibility = "inherit";
        }
}

Try out the first sample page. Clicking on the document background, not the text, changes the layer. Try setting the layer to the green or red layer, accessing another site using the same browser window, and returning to the sample page. When you return, the page is reset to the beginning DHTML effect, showing the blue layer.

To correct the loss of the DHTML effect, we can add persistence to the page by storing which layer is showing when the page unloads. This information is captured in a function called when the onUnLoad event fires.

To add persistence, I added the cookies.js JavaScript source code file to the page, using an external source code reference:

<!-- add in cookie functions -->
<SCRIPT language="javascript" src="cookie.js">
</SCRIPT>

Next, I added the function to capture the DHTML state:

// onunload event handler, capture "state" of page (article)
function capture_state() {
   set_cookie("current_layer",current_layer);
}

When the page reloads, the onLoad event is fired, and a function is called from this event to “redraw” the DHTML effect. The function is called start_page, and it pulls in the cookie containing which layer should be shown:

function start_page() {
// get cookie, if any, and restore DHTML state
  var saved_layer = get_cookie("current_layer");
  if (saved_layer == "")
        current_layer = 1;
  else {
        // cookie values are strings, so convert back to a number
        current_layer = parseInt(saved_layer);
        switch_layers(1, current_layer);
        }
}

Finally, the two event handlers, onUnLoad and onLoad, are added to the BODY HTML tag:

<BODY onload="start_page()" onunload="capture_state()">

You can try out the second sample page, which has DHTML persistence — again, change the layer, open some other site and then return to the sample page. This time, the DHTML effect persists during a page reload.

Sometimes an effect requires more than one cookie, as the next example demonstrates.

Adding DHTML Persistence for Positioned Content
In the next example, I used a style sheet again to define CSS1 and CSS-P formatting for the DIV block that is going to be moved in the page:

<STYLE type="text/css">
        DIV { position:absolute; left: 50; top: 100; width: 600 }
        DIV H1 { font-size: 48pt; font-family: Arial }
        .layer1 { color: blue }
</STYLE>

Next, I created a form that has two text elements and associated buttons. One text and button element pair is used to change the DIV block’s top position, one pair is used to change the DIV block’s left position. The entire form is shown here:

<DIV id="first" style="position:absolute; left: 10; top: 10; z-index:2">
<form name="form1">
Set new Left Value: <input type="text" name="newx" value="50">
<input type="button" onclick="newleft(newx.value)" value="Set New
Left"><p>
Set new Top Value: <input type="text" name="newy" value="100">
<input type="button" onclick="newtop(newy.value)" value="Set New Top">
</form>
</DIV>

Next, the positioned DIV block is created:

<DIV id="layer1" style="z-index: 1">
<H1 class="layer1">LAYER--BLUE</H1>
</DIV>

Once the page contents have been created, you can add code to animate the DIV block based on whether the top or left position is changed, along with a pair of global variables that track the block’s current position for use when the state is saved. The code to change the top position is:

// track the block's current position for capture_state,
// initialized to the style sheet defaults
currentleft = 50;
currenttop = 100;

// change top value
function newtop(newvalue) {
   currenttop = parseInt(newvalue);
   if (navigator.appName == "Microsoft Internet Explorer")
        layer1.style.pixelTop = currenttop;
   else
        document.layer1.top = currenttop;
}

Again, the least complex technique to handle cross-browser differences is used, which is to check for the browser being used and run the appropriate code. The function to change the left position is identical to the top position function, except the CSS-P attribute “left” is changed rather than the CSS-P attribute “top”:

// change left value
function newleft(newvalue) {
   currentleft = parseInt(newvalue);
   if (navigator.appName == "Microsoft Internet Explorer")
        layer1.style.pixelLeft = currentleft;
   else
        document.layer1.left = currentleft;
}

That’s it for this page. You can try it out on the third sample page, where you can change the position of the DIV block by changing the left position, the top position, or both. Again, open another site in the browser and then return to the example page. Notice how the position of the DIV block does not persist between page reloads.

Adding persistence to this page is very similar to adding persistence to the layered example page, except that two values are maintained: the top and the left. The start_page and capture_state functions for this DHTML effect are:

function start_page() {
// get cookie, if any, and restore DHTML state
  var tmpleft;
  tmpleft = get_cookie("currentleft");
  if (tmpleft != "") {
        currentleft = parseInt(tmpleft);
        currenttop = parseInt(get_cookie("currenttop"));
        newtop(currenttop);
        newleft(currentleft);
        if (navigator.appName == "Microsoft Internet Explorer") {
           document.forms[0].newx.value=currentleft;
           document.forms[0].newy.value=currenttop;
           }
        else {
           document.first.document.forms[0].newx.value = currentleft;
           document.first.document.forms[0].newy.value = currenttop;
           }
        }
}

function capture_state() {
        set_cookie("currentleft", currentleft);
        set_cookie("currenttop", currenttop);
}

To see how it works, try out the fourth sample page: move the DIV block, open another Web site in the browser, and return to the example page. This time the DHTML effect does persist between page reloads.

Summing up
In a nutshell, the steps to add persistence to a page are as follows (a minimal skeleton pulling them together appears after the list):

  1. Create the DHTML page
  2. Determine the values that must be maintained to re-create an existing effect
  3. Add the Netscape-style cookie functions to the page
  4. Capture the onUnLoad event and call a function that saves the DHTML effect persistence values
  5. Capture the onLoad event and call a function that restores the DHTML effect from the persisted values
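
The following skeleton pulls these steps together. It is a sketch only, assuming the cookies.js functions shown earlier; the restore_effect function and the effect_state value are hypothetical placeholders for whatever your own effect needs to save and restore:

<HTML>
<HEAD>
<!-- step 3: add in the cookie functions -->
<SCRIPT language="javascript" src="cookies.js"></SCRIPT>
<SCRIPT language="javascript1.2">
<!--
// step 5: restore the effect from any saved value (runs onLoad)
function start_page() {
   var saved = get_cookie("effect_state");
   if (saved != "")
        restore_effect(saved);   // hypothetical: re-applies your DHTML effect
}

// step 4: save the value(s) needed to re-create the effect (runs onUnLoad)
function capture_state() {
   set_cookie("effect_state", current_effect_value);  // hypothetical global
}
// -->
</SCRIPT>
</HEAD>
<BODY onload="start_page()" onunload="capture_state()">
<!-- step 1: the DHTML page contents go here -->
</BODY>
</HTML>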

Using Netscape-style cookies to maintain DHTML effects won’t help with some DHTML persistence problems. For example, DHTML effects can be lost when a page is resized. In addition, if a page is resized between the time the DHTML persistence values are stored and the page is reloaded, the values may no longer work well with current page dimensions. However, for most effects and for most uses, the use of Netscape-style cookies is a terrific approach to DHTML persistence.