Categories
Just Shelley

Every Person’s Math

Ask folks what class they feared the most in high school and college, and I bet you’ll find that “math”, generally, or “calculus”, specifically, is the answer you’ll get more often than any other. Yet math is really nothing more than a) the ability to apply specific equations and get consistent results, and b) the ability to apply those results to better understand the world around us. So, I think it’s time to take a look through the Internet and see what we can learn…about math.

Basic Math: If a train left New York and another left Boston…

Most of us know basic math. It’s the math we use when we shop: we pick up half a dozen eggs, we buy 4 steaks, we supply cash for totals and get change back. It’s also the math we use at home: we measure out 1 cup of flour, we shape dough into a circle to make a pie, we time how long the pie has baked, and we cut a board in such a way that it fits into a slot on the floor. How about at work, do we use this math at work? You bet: we ask for two packets of sugar for our coffee, the delivery person drops off a gross of pens, we send mail using two-day express delivery and know that the mail will be delivered in two days or less.

Basic math is that math that surrounds us and that we use in our everyday world. It is the math that allows us to time events by understanding units of measurement for time, such as hours, minutes, and seconds. It is the same math that then gives us the tools to measure these units and express the result as elapsed time: he ran the marathon in 6 hours, 23 minutes, 3 seconds.

Additionally, basic math is that math we use when quantifying objects, such as 2 apples, 3 people, 4 cats. It is also the math we use with currency and with temperature — though units of measurement can differ here — and with our payroll stubs and income tax.

Basic math consists of addition, subtraction, multiplication, and division, and we can’t forget the most infamous of them all: fractions. It is arithmetic.

Basic math is the math we learn first, and the one that requires us to learn the most and take the largest leap of faith. After all, in algebra we may understand that 2x – y = 3 is a solvable equation, but it is really based on our belief that the number “2” does represent two objects; that two numbers can be multiplied and the result will always be the same; that you can add two numbers and consistently get a third; and that you can then subtract one of the original numbers from the new total, and derive the other original number.

3 + 4 = 7

What will you get if you take 4 away from 7?

Look at that! Your first number quiz.


So, did you get the correct answer? If you’re not sure, you might want to ask Dr. Math to help you find the answer. How about a different way of learning math? You might want to check out The Clock (Modular) Arithmetic Page for a little learning about math, in the round. Want to have a little fun with math? Then check out the Math Forum Elementary Problem of the week — see if you can keep up with the kids.

Of course, once we learned basic math, it was time to get into other types of math such as algebra, covered next.

Algebra and the ultimate question: Why?

So what is algebra and why do we need to learn it? Well, something like arithmetic is good when dealing with math of known quantities and objects such as adding two apples together, or measuring a cup of flour. But what if you need to solve an equation involving the addition of 2 quantities of an object, and you only have one of the quantities and the result?

Remember our little math game in the last section:

3 + 4 = 7

What will you get if you take 4 away from 7?


Well, let’s rephrase this question and formalize it into an equation. Instead of saying “if you take 4 away from 7”, say “if you take a number away from 7, you’ll get 4”, and rephrase it once more as “if you add 4 to a number, you’ll get 7”. Calling that unknown number x and drawing this as an equation, you get:

x + 4 = 7

Algebra is involved with solving the equation for the unknown variable, in this case, x, using a set of rules and procedures to accomplish the task.

For our equation, we first need to isolate the variable, or the unknown value. We can do this by using basic math to eliminate the known value from both sides of the equation:

x + 4 - 4 = 7 - 4

x = 3

Isolating the unknown is the same as solving for the unknown.

See, you just did algebra! That wasn’t so bad, was it?
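By the way, if it helps to see that same isolation procedure mechanically, here is a small sketch in JavaScript (my own illustration, not from any of the sites mentioned here) that solves any equation of the form x + a = b:

  // Solve x + a = b for x by subtracting a from both sides:
  // x = b - a
  function solveForX(a, b) {
    return b - a;
  }

  solveForX(4, 7);  // our equation x + 4 = 7, giving x = 3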

To summarize, algebra is the ability to solve equations containing one or more unknown variables. The solution is found by applying known procedures such as isolating the unknown variable and combining like terms. Algebra then uses these same rules for more complex equations such as finding ratios, multiplying fractions, graphing results on a coordinate plane, and exponents. Before you click away again, let’s look at each of these and see that there is nothing scary or weird with any of them.

First of all, you use ratios anytime you figure out your odds of winning the lottery (1 in a kagillion), or anytime you read about something such as the ratio of women to total respondents in a survey: for instance, the ratio was 3:5, or 3 out of every 5 respondents were women. If we look at this as an equation, we would have:

x + 3 = 5

x + 3 - 3 = 5 - 3

x = 2

there are 2 men for every 5 respondents

How about graphing? Well, I used to love to graph. I loved the graph sheets, I loved getting my ruler and my pencil and drawing out a nice clean line. Didn’t have a clue why I was doing it, but it sure was fun.

You know graphing: on a number line, graph all numbers less than 8. You end up with an open circle at 8 and a heavy line with an arrow running off to the left, covering every number below 8.

Now, what is there about this that isn’t fun?

Of course, once we mastered graphing on a number line, the next step is to try graphing within a coordinate system. This is a graph where the X values are plotted along a horizontal line and the Y values are plotted along a vertical line. The Y-axis intersects the X-axis at the point where X is zero, and the X-axis intersects the Y-axis at Y’s 0 point. Individual points on the graph are then plotted where the X value and the Y value intersect. So, if you have an X value of 3 and a Y value of 3, your point will sit in the upper right of the system. If you have many points, such as those drawn for an equation using different values of X or Y, you can connect the points and you actually have a line. From this line you can determine not only what value the equation gives for a particular X or Y, but what all the values of X or Y will be. Why? It’s in the graph!

So, we know that basic arithmetic isn’t scary, and algebra can be fun. Are you ready to try something a little stronger? Say, geometry?

If you want to know about algebra, have I got some sites for you. First up is Math for Morons Like Us. Don’t let the name chase you away; this really is an impressive site providing an overview of pre-algebra, algebra, geometry, and calculus. Math for Morons was created for the ThinkQuest program. ThinkQuest is a competition held every year where students, or adults who are teachers or studying to be teachers, can create Web sites, all based on knowledge and education. There are some pretty impressive Web sites from this project. For instance, another Web site is Volcanoes Online, created by students from all over the world.

Now, doesn’t all this sound like fun? Well, to make it even more fun, James Brennan from Boise State University has created an interactive Java applet called the Graph Applet. Try it out.

Geometry

Well, you’re probably pretty comfortable with addition and subtraction and even equations, about now. Time to up the ante and take a look at geometry.

First of all, to ease your anxiety, and to keep you from clicking out of the page, geometry is not only fun, it is really based on the same mathematical foundation you worked with in the basic math and algebra sections. Now, those sections weren’t so bad, and this one doesn’t need to be either.

So, what is geometry? Well, it has to do with shapes. All kinds of shapes, from lines to circles to triangles to spheres to what have you. Geometry gives you the tools to do such things as find the volume of a sphere or to find the circumference of a circle.

You don’t think you need this kind of stuff? Well, sure you do.

For instance, my husband and I walk around a water reservoir behind our place that has a diameter of about .4 miles. We were curious about the actual distance we traveled, so we dusted off our geometry and found the formula for the circumference of a circle given the circle’s radius:

   C = 2(PI)r

Well, the diameter of a circle is twice the radius, so the radius of the reservoir would be .2 miles. Plugging this in for r, and remembering that the value of PI is 3.14159 (five decimal places is more than enough, we ain’t building a rocket here), we would have:

  C = 2(PI)r
  C = 2(PI)(.2)

  C = 2 x 3.14159 x .2
  C = 1.25663

Hey, 1.25 miles! A nice little jaunt.
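If you’d rather let the computer do the arithmetic, here’s the same calculation as a small JavaScript sketch (my own, and note that JavaScript’s built-in Math.PI carries more decimal places than we need):

  // Circumference of a circle: C = 2(PI)r.
  // The diameter is twice the radius, so r is the diameter divided by 2.
  function circumference(diameter) {
    var radius = diameter / 2;
    return 2 * Math.PI * radius;
  }

  circumference(0.4);  // the reservoir: about 1.26 miles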

Geometry is very big in the computer animation business. Did you like A Bug’s Life or Antz? Well, geometry is a basic tool used in creating these types of animations. Geometry also forms the basis for work accomplished with VRML — Virtual Reality Modeling Language.

If you like Geometry, then you might want to look more closely at trigonometry, covered next.

Where to begin when it comes to learning about geometry? You can go back to Math for Morons Like Us, which has excellent coverage of geometry in addition to algebra. You can also go to the Geometry Home Page, which has some very nice tutorials. There’s also the Geometry Center, with documents, multimedia, and software about geometry. This site led me to another site, called Science U, which has its own Geometry Center. Science U has several interactive demos and games related to geometry and astronomy. The site also has an online store with some unusual items for sale. There aren’t many places where you can create your own fractal design and then have it made into a T-shirt.

Wait, there are even more sites. I mentioned the use of geometry with computer animation and VRML, so it’s only fair to mention some sites for these topics. First of all, the grandmother of VRML sites is The VRML Repository. Two other essential links are the VRML Consortium and The VRML Specification. And you can’t mention VRML without reference to the SGI VRML page.

For computer animation, try out The Shape Modeling and Computer Graphics page, from the University of Aizu in Japan. Webreference, a favorite of mine, has a nice site called the 3D Animation Workshop. And the king of computer animation is, of course, Pixar.

Oh, and don’t forget the Antz and A Bug’s Life official Web pages.

Trigonometry

Okay, you had some fun looking at all the pretty computer generated animations and graphics. Let’s get back to the real reason you’re here: to learn more about math. Right?

First, trigonometry, or “trig” as it is affectionately known, is based on angles. It is this that distinguishes trig from the rest of geometry.

Why learn more about trig? Well, if you are interested in astronomy, you should be aware that it is trig, and the trigonometric tables, that provided the basis for early star charting. Engineering is dependent on trigonometry. When you see surveyors along the road at construction sites, what do you think they are using to plan the work? Why, trigonometry, of course.

Consider a building. Can you measure how tall it is? You could climb to the top of the building and drop a line of rope down from the roof until it touches the ground, and then you could measure the rope. However, this doesn’t sound like a very efficient method, and what if you are trying to measure a mountain peak, or a balloon in the air?

A better approach would be to use our friend, the right triangle, and the trigonometric functions.

First, a right triangle is one which has one 90° angle. One of the two remaining angles, the one at the base along the horizontal axis, is written as θ, and is called theta. The sides of the triangle opposite and adjacent to θ are known as, respectively, the opposite and adjacent sides. The side opposite the right angle is known as the hypotenuse.

The trigonometric functions, based on this triangle, are:

  • sin θ = opposite / hypotenuse
  • cos θ = adjacent / hypotenuse
  • tan θ = opposite / adjacent
  • csc θ = hypotenuse / opposite
  • sec θ = hypotenuse / adjacent
  • cot θ = adjacent / opposite

Now, considering the right triangle and the trigonometric functions, how can we measure that building? Well, you start with a protractor, a small plastic semi-circular or circular disk that allows you to measure angles. You walk 100 feet from the building and then use the protractor to measure the angle from where you stand to the top of the building. Let’s say this angle is 60°.

At this point you have some known values. You know that θ is 60°, and you know that the adjacent side is 100 feet. Now, to get the value for the opposite side, we’ll use the trigonometric formula to compute the tan, or tangent, of the angle:

tan θ = opposite / adjacent

tan 60° = opposite / 100 feet

tan 60° = 1.73, so:

1.73 x 100 feet = (opposite / 100 feet) x 100 feet

opposite = 173 feet (approximately)

There you go, you found the height of the building all by yourself, with a cheap plastic tool and no long rope. Pretty darn good — and all thanks to trig.
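If you’d like to check the arithmetic without a trig table, here is a little JavaScript sketch of the same calculation (my own illustration; note that JavaScript’s Math.tan() expects radians, so the degrees have to be converted first):

  // Height from the angle of elevation and the distance to the base:
  // opposite = tan(theta) x adjacent.
  function heightFromAngle(angleDegrees, distanceFeet) {
    var radians = angleDegrees * Math.PI / 180;  // convert degrees to radians
    return Math.tan(radians) * distanceFeet;
  }

  heightFromAngle(60, 100);  // about 173 feet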

Well, now that you have found that trig is fun, time for pulling in the big guns. Time for calculus.

I just can’t believe how many Web sites there are on math, including trig. First of all, check out Free-ed Net, specifically the section on Trigonometry. Free-ed Net is a very hot Web site focusing on free educational resources on the Net, and in the trig section they list some nice trig resources. First among them is S.O.S. Mathematics, which provides an overview of trig along with a table of trigonometric identities. Then there is the Math Abundance Trigonometry Introduction, which is very extensive. Very.

Sorry, I’m back. I was sidetracked by Free-ed Net’s Astronomy section. Where was I? Oh, yes, trig resources. A great trig resource page is at Study Web’s Math page. I can guarantee that if you go through all the resources they list, you will be a math wiz. Angles are your friends.

Do you want to order a protractor of your very own? Then check out k-12source.com which has most school supplies for sale. Check out the engineering and drafting supplies.

Oh, and if you want to know how to measure the height of a rocket, check out the University of Nebraska page on measuring a rocket’s height, from the N.E.R.D.S (Nebraska Educators Really Doing Science) project.

Bring on the tanks: Calculus

Well, you’ve made it this far so you deserve a real treat: Calculus!

What is calculus about? Well, first of all it takes what you know from the other math types, and goes a bit farther, or nearer as the case may be. The Excite online encyclopedia, InfoPlease, has the following definition for calculus:

"branch of mathematics that studies continuously changing quantities. 
The calculus is characterized by the use of infinite processes, involving passage to a limit: the notion of tending toward, or approaching, an ultimate value. The English physicist Isaac Newton and the German mathematician G. W. Leibniz, working independently, developed the calculus during the 17th cent. The calculus and its basic tools of differentiation and integration serve as the foundation for the larger branch of mathematics known as analysis. The methods of calculus are essential to modern physics and to most other branches of modern science and engineering."

Calculus isn’t just one subject, it’s many. There is differential calculus and integral calculus; there are statistics and probability; and so on. However, it is also about the world around us. It is not an exercise in seeing how many equations one can stuff into a sophomore’s brain before it explodes.

For instance, could you see needing to know the volume of a sphere? Sure you could. How does one measure the volume of a sphere?

Well, going with the empirical method, you could fill the sphere with water and measure how many cups of water fit into the sphere. But this technique is kind of wet, possibly messy, perhaps not very scientific, or even accurate. Wouldn’t you really rather use a formula?

Borrowing from integral calculus, the formula for calculating the volume of a sphere is:

V = 4(PI)r³/3

So, given the sphere’s radius, you can now find its volume. You can find its surface area, too, with the following formula:

S = 4(PI)r²
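And if you want to play with these two formulas, here is a quick JavaScript sketch of both (my own illustration):

  // Volume of a sphere: V = 4(PI)r^3/3
  function sphereVolume(r) {
    return 4 * Math.PI * Math.pow(r, 3) / 3;
  }

  // Surface area of a sphere: S = 4(PI)r^2
  function sphereSurfaceArea(r) {
    return 4 * Math.PI * Math.pow(r, 2);
  }

  sphereVolume(1);       // about 4.19
  sphereSurfaceArea(1);  // about 12.57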

I won’t lie to you and say that all calculus is this easy. I still think parts of calculus are a joke perpetrated by math majors on the rest of us (“let’s string them along…see when they break”), but calculus can be met face to face at the least, and even mastered (gasp) at the most.

Now, I think that’s enough for me to say on calculus. I’ve forgotten way too much on this subject and if I say anything more, I’ll embarrass myself. Time to follow this article’s links … and learn a little math.

Well, I have to go back to Math for Morons Like Us for their coverage of pre-calc and calculus. Boy, I wish they would change the name. But they aren’t alone with the names, as another good site on calculus is Help with Calculus for Idiots (like me).

The best reference page with listings on calculus is StudyWeb’s Calculus page. Another great resource on Calculus is Calculus Net.

An example of calculus applied to mechanics is nicely illustrated at Calculus and its Applications to Mechanics. A fun site is a page full of Calculus Java applets where you can change values and observe results for calculus equations.

You can calculate the volume of most shapes with the ABE Volume Calculator page. You can find the calculations used at Calculations for Volume.

Flame on.

Categories
People Places

Climbing Mt. Everest

I decided to spend some time looking at the history of climbing Mt. Everest because of a note I found at the PBS Nova Web site. It seems that a new expedition is heading to Mt. Everest, but the goal of this expedition is rather different from that of others.

You see, we know that Sir Edmund Hillary was the first man to reach the top of Mt. Everest in 1953…or was he? There is conjecture that he might not have been first, that early Mt. Everest pioneers George Mallory and Andrew Irvine may have been first. However, as these two climbers failed to return, all that we know of the success or failure of their bid to reach the summit is as clouded as the summit was on the day they were last seen.

However, there is a chance, a small chance, that the knowledge of Mallory’s and Irvine’s quest did not die with them. Mallory carried a camera with him, a camera that has never been found.

Just think of it! What if someone could find this camera and could develop the film, even after all these years, and the film shows that instead of a person first reaching the summit of Mt. Everest in 1953, another person reached it almost thirty years earlier, in 1924. What an incredible discovery. Regardless of the success of this quest, it is a fitting tribute to these earlier climbers who gave their lives for their own quest: to attempt to find the truth.

You can read more about the Mallory and Irvine research expedition at MountainZone’s Mallory & Irvine Research Expedition page. The expedition left in March, and daily updates are posted to this site from the expedition members Dave Hahn and Eric Simonson.

Nova is also following this expedition with their own Web site titled Lost on Everest. Nova’s coverage begins April 27th, and also includes an awesome 360 degree image of what the view is like from the summit. If you don’t go for any other reason, you have to go to the site for this. It will take your breath away.

Accuracy is the key

At the same time that the expedition to find the camera of George Mallory is underway (on the North face of Mt. Everest, by the way), another expedition is also on a quest, though a completely different one. This quest is the Everest Millennium Expedition, and its purpose is to measure Mt. Everest.

At this time, the most accurate assessment we have of the height of Mt. Everest is 29,028 feet, or about five miles up. The Millennium Expedition plans on using the latest technology, the Global Positioning System (GPS), to get the most accurate measurement of all.

Follow along with the expedition from the MountainZone’s Everest South Side Expedition Web site. This also includes frequent updates with expedition members, as well as photos and other multimedia.

The GPS equipment is being run for Brad Washburn, former Director of the Boston Museum of Science and well-known mountain photographer. The Museum of Science has an excellent exhibit of Everest photos and memorabilia, as well as a scale model of Everest.

National Geographic is also covering this expedition, and you can view their site on it at Everest: Measure of a Mountain. Do watch the opening intro, it is worth it.

How’s the Weather up there?

Two expeditions seeking knowledge are joined by a third trying to answer the age-old question: How’s the weather up there? The Weather Channel follows two MIT graduate students, Matthew Lau and Chris Metcalfe, as they join veteran researcher David Mencin to study weather on Mt. Everest, as well as place advanced telemetry at the peak for research.

The hope is that advances in the technology used for these instruments will allow them to function for a full year, not only serving as an invaluable resource for learning about the weather patterns at the world’s highest peak, but also providing information for climbers, in hopes of safer expeditions.

This expedition is covered at the Weather Channel’s Everest Web page. You can learn more about GPS at The NAVSTAR GPS homepage, or the University NAVSTAR Consortium.


Summary

The power of the Web is that folks like me, who will never climb something like Everest, can experience the next best thing by hearing from those who do and seeing the images they send back. My appreciation to them for giving me a glimpse of their quests.

Categories
Technology

Shopping Carts

Recently, someone, I’ll call him Joe, sent me an email and asked a question about maintaining shopping cart items using client-side cookies.

A shopping cart is basically a program that maintains a list of items that a person has ordered, a list that can be viewed and reviewed and usually modified.

Joe had run into a problem in that browsers limit both the size and quantity of cookies that can be set from a Web application, and this limited the number of items that could be added to his company’s online store’s shopping carts. He asked whether there was a client-side Javascript technique he could use that would update a file on the server instead of on the client so that customers could add an unlimited number of items to their carts.

Instead of trying to maintain the company’s shopping cart using cookies, I recommended that Joe check out his Web server’s capabilities, and see what type of server-side scripting and applications his server supported. Then he could use this technology to maintain the shopping cart items. I also recommended that he limit the use of the client-side cookie to setting some form of session ID so that the connection between the cart items and the shopper was maintained.
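To give a flavor of the session ID approach, here is a rough client-side JavaScript sketch. The cookie name, the ID format, and the 90-day lifetime are all my own inventions for illustration; the point is simply that the only thing stored on the client is a small ID, while the cart items themselves live on the server, keyed by that ID:

  // Return the shopper's session ID, creating and storing one if needed.
  function getSessionId() {
    var match = document.cookie.match(/(?:^|;\s*)sessionId=([^;]+)/);
    if (match) {
      return match[1];
    }
    var id = "s" + Math.floor(Math.random() * 1000000000);  // made-up ID scheme
    var expires = new Date();
    expires.setDate(expires.getDate() + 90);  // arbitrary 90-day lifetime
    document.cookie = "sessionId=" + id +
        "; expires=" + expires.toUTCString() + "; path=/";
    return id;
  }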

Shopping Cart Technology

Joe’s email did get me to thinking about how online stores use different techniques to maintain their shopping carts.

For instance, all of the stores I shop at, except one, use some form of a client-side cookie, but each store uses these cookies differently. Additionally, the stores use server-side techniques to support online shopping, though this support can differ considerably between the stores.

Client-side cookies were originally defined by Netscape for use in Navigator, though most browsers support cookies. Cookies are small bits of information stored at the client that can be maintained for the length of a session, a day, a week, or even longer.

The use of client-side cookies is rigidly constrained in order to prevent security violations. You can turn off cookies in your browser, but be aware that cookies do not violate the security of your machine, and without them your online shopping will be severely curtailed.

This YASD article does a little technology snooping at four online shopping sites, looking at how each site uses a combination of server-side and client-side processing to maintain its carts.

Covered in this article are the shopping cart techniques used at the following online stores:

  • Amazon.com
  • Beyond.com
  • Catalogcity.com
  • Reel.com

Shop Til You Drop

To understand shopping cart maintenance, it’s important to understand customer shopping habits. We’ll start this article by looking at some online shopping habits.

First, online shopping has grown enormously in the last few years. Once folks found out that it was safer to send a credit card number over the Net at a secure site than to give it over a wireless phone, much of the hesitation about online shopping vanished.

What are the types of things that people buy online? Books, CDs, and videos are popular, but so are kitchen utensils, computer hardware and software, photography equipment and film, food gifts, bathroom towels, and even irons.

People shop online because of necessity, convenience, and cost. We purchase books, CDs, and videos online because the online stores have a much larger selection than any local store could possibly have. We know that when we are looking for a book, even an out of print book, we will most likely be able to get the book from one of the online bookstores such as Amazon.

Some businesses we shop at, such as Dell, have no physical store location. This type of business provides service for its customers through mail or phone orders only. Many of us prefer to use online shopping for these types of stores rather than having to call someone up and go through a long list of items, or manually fill out an order form, all the while hoping we put down the right item number. It is a whole lot simpler to look at an item and click on an image that says something like “Add to Shopping Cart”. An added benefit to online shopping is that we can review our order before it is sent, and can get a hard copy printout of the order for our records.

Normally, most of us only shop for a few items at a time, but the number of items can become large, especially with online grocery stores, a rising phenomenon. However, it isn’t unusual for us to add some items to our shopping cart one day, a couple more items another day, and so on, until we’re ready to actually place the order. At times we may even forget we have items in a shopping cart, until we add another item to the cart and the previous items show up.

We also change our mind at times, and may need to remove items from the shopping cart, or change the quantity of an item ordered. It’s also handy to have a running total for the order so we can make at least an attempt to stay within our budgets. If the shipping charges are also shown, that’s an added bonus.

Many of us may have more than one computer and may start a shopping session with a laptop and finish it at a desktop computer, though as you will see later in this article, this sometimes isn’t that easy. In addition, the use of both Netscape Navigator and Microsoft’s Internet Explorer on the same machine isn’t all that unusual for heavy Net users, and we may start a shopping cart with one browser and add to the cart from another browser.

Pulling requirements from these patterns of use, we come up with the following:

  • An item can be added to a shopping cart with the touch of a button
  • Shopping cart items need to persist for more than one shopping session
  • Some indication that there are items in the shopping cart should show, at least on the home page for the site
  • The store needs to provide a means to modify the shopping cart items, or to remove an item or all items
  • A running total needs to be maintained each time the shopping cart contents are reviewed
  • Showing standard shipping charges and other costs when the shopping cart is reviewed is an added bonus
  • The shopping cart needs to follow the client
  • Stores need to provide a means to review an order before it is placed
  • Stores also need to provide the ability to print out the contents of the shopping cart
  • Shopping carts should support an indefinite number of items, or the number of items should be limited by company policy, not Web technology limitations.

A pretty good list of requirements. Now, how does each of the stores measure up?

To determine when cookies are used at each of the sites evaluated, I set my browsers to prompt me when the online store wants to set a cookie. Using this approach I can see what kind of cookies the store uses, and get an idea of the cookie purpose.

Amazon.com

Probably the undisputed king of online bookstores is Amazon.com. This company began as a pure Net-based business, and has shown the world that online commerce not only works, it works very well, thank you.

Amazon has one of the better store interfaces, and some of the best account and order maintenance setups, but does it meet all of our shopping cart requirements? Let’s check it out.

First, all items that Amazon sells can be added to the existing shopping cart with the touch of a button, even items that are on order but not yet in stock. In addition, the shopping cart contents will persist even if you leave the site and return at a later time. In fact, Amazon tells you that the item will remain in the shopping cart for 90 days, if I read this correctly, a feature I found to be very nice.

Bonus technique: Let people know how long the shopping cart items will remain in the cart. The only surprise to pull on a customer is to let them know an item is on sale, or that they are the millionth customer and have won something. Anything else will lose you business.

Amazon also provides a feature to save the item for purchasing at a later time. This removes the item from the cart, but still keeps the item on a list for later purchase.

The shopping cart can be reviewed at any time, and there is an icon on every page that allows you easy access to the shopping cart. You can also modify the cart contents by changing the quantity of an item you’re ordering, or removing an item altogether.

Amazon makes use of standard HTML technology, so the shopping cart review page should print out fairly well. Unfortunately, the shopping cart does not display shipping charges and does not display a running total for the items. However, Amazon does show a total, including shipping, that you can review before you place the order. This page can also be printed out.

So far so good. Amazon has met most of our requirements, but the real test of Amazon’s supremacy in shopping cart technology is whether the cart can follow the customer. Unfortunately, the company does not support this capability.

When you access Amazon from a browser at a specific machine for the first time, Amazon sets an ID that is used to track your shopping cart items. Access Amazon from the same browser and the same machine, and you will get the same shopping cart items. However, if you access Amazon from another machine or even another browser, you will not get access to these shopping cart items.

Is it important to maintain shopping cart persistence from browser to browser, machine to machine? You bet it is.

I, like other folks involved with Web development and authoring, use both Navigator and IE. In addition, there are some sites geared more towards one of these browsers, so most folks who spend a lot of time on the Net have both browsers.

There are times when I am sure I have placed an item in the shopping cart, only to find out I did, but in a different browser or on a different machine. This happens more often than I would like, and is an annoyance every time.

Now the online stores have to ask themselves the question: Are people like myself part of a customer base they want to attract? Think of this: Who is more likely to be purchasing items online than folks who spend a large amount of their time online? And who is likely to use more than one machine and more than one browser? Folks who spend a lot of time online.

To summarize, Amazon uses client-side cookies to establish a persistent ID between the customer and the shopping cart. The company also uses this ID to establish a connection from the customer to the customer’s account information. The shopping cart items, however, are maintained on the server, persist for 90 days, and there is no limit to the number of items that can be ordered at Amazon, at least as far as I could see. Where Amazon does not meet the requirements is by not providing a running total on the shopping cart review page, and by not providing a shopping cart that moves with the customer.

Based on the requirements met by Amazon, I give them a score of 8 out of 10 for their support of shopping cart techniques.

Beyond.com

Beyond.com sells computer software and hardware and is a Net-only based company.

Beyond.com maintains a client ID in client-side cookies, which is used to track shopping cart contents for the session only. Beyond.com does not persist the shopping cart contents outside of a specific browser session. Once you close the browser, the current shopping cart contents are gone.

In addition, it does look as if Beyond.com maintains the shopping cart totally within one client-side cookie, tagged with the name “shopcart”.

By maintaining the shopping cart on the client, Beyond.com has chosen one of the simplest approaches to maintain a shopping cart, and simple can be good. There is little or no burden on the server other than accessing the original item that is added to the cart. There is also less maintenance to this type of system, primarily because the Web developers and administrators do not have to worry about issues of storage of information on the server, or cleaning up shopping carts that become orphaned somehow. Additionally, Beyond.com is taking advantage of a client-side storage technique that is safe and simple to use.

However, there is a limitation with this approach in that the cookie is limited to a size of 4 kilobytes. It may seem that 4K is more than large enough to support a cart, but when you store information for each item such as cart item number, product name, version, store identification number, quantity, and price, you can reach the upper limit more quickly than you would think. Additionally, a limit is a limit, and you have to ask yourself if you really want to limit how many items a person can add to their shopping cart.
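To make the limit concrete, here is a rough sketch of the whole-cart-in-a-cookie approach. The “shopcart” cookie name comes from Beyond.com, but the serialization format here is purely my own guess at how such a cart might be packed:

  // Pack the cart items into a single "shopcart" cookie.
  // Returns false when the serialized cart would blow the ~4K cookie limit.
  function saveCart(items) {
    var parts = [];
    for (var i = 0; i < items.length; i++) {
      var item = items[i];
      parts.push([item.sku, escape(item.name), item.qty, item.price].join(":"));
    }
    var value = parts.join("|");
    if (("shopcart=" + value).length > 4096) {
      return false;  // the limit is the cookie, not company policy
    }
    document.cookie = "shopcart=" + value + "; path=/";
    return true;
  }

Notice that the cap on cart size falls out of the storage mechanism, not out of any business decision, which is exactly the situation Joe ran into.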

Most online stores could probably get away with shopping carts that have number of items limitations. After all, it might be a bit unusual to purchase 40 or 50 items from a software company at any one time.

If a store’s customers tend to purchase only a few items at a time, then it might not be cost effective to provide shopping cart technology that provides for virtually unlimited items.

Beyond.com also provides a quick look at the shopping basket from every page of the site. This quick view provides the first few letters of the cart item, the quantity ordered, and a running total for the cart. As much as I appreciate having this information, I found that I would have preferred having just the quantity of items in the shopping cart and the running total: listing all of the items actually became a distraction when I had more than a few.

Beyond.com uses standard HTML and the shopping cart page did print out using the browser’s print capability. In addition, you can review the order, including the shipping charges, before the order is fully placed.

To summarize, I liked Beyond.com’s support for shopping cart status display on the other Web pages. I also liked Beyond.com’s running total. The biggest negative to Beyond.com’s shopping cart was the lack of persistence outside of the browser session. I may not order more than 5 or 10 items at a time, but it isn’t unusual for me to add a couple of items at one time, close down the browser and return at a later time to add more items and finalize the order. In addition, it isn’t impossible for people to accidentally close down their browsers, which means they lose all of the items from their cart and have to start over. Based on the lack of persistence, I would have to give Beyond.com a 6 in shopping cart technology.

Catalogcity.com

CatalogCity is an interesting online business that presents the contents of catalogs from several mail order firms, allowing you to shop for everything from a new jacket to kitchen knives. Once you have placed your order for all of the items you’re interested in, CatalogCity then submits the orders for the individual items to the specific catalog company.

Of all the online shops I have seen, CatalogCity is one of the most clever. It provides both goods and a service, but without the hassle of maintaining inventories and supporting warehouses. I am sure that CatalogCity charges the listed catalog companies a fee for its services, but it is most likely more profitable for these companies not to hassle with online ecommerce issues. Even companies that have their own Web sites gain something from CatalogCity: access to people who are looking to shop, but don’t necessarily know the catalog company’s name or Web site URL.

I do like to see effective and innovative uses of Web commerce. If I have a problem with the site, it is that not all of the catalog companies support online shopping through CatalogCity. For those catalogs, you can browse the items and use the phone number provided to place your order. However, it’s just not the same as one-button shopping.

CatalogCity uses cookies to set a customer ID the first time you access their site. However, after that, all information about the shopping cart is stored on the server. There is no indication in the pages that you have shopping cart items, but you can access the shopping cart from a well-placed icon on each site page.

The shopping cart page lists all of the items, provides the ability to change or remove an item, and provides a running total — sans shipping charges. It also provides a hypertext link from the item to the page where the item was originally listed, so you can review the item before making a final purchase.

The technology that CatalogCity uses for their shopping cart is fairly standard, so the cart page should print out easily. In addition, the company does provide the customer the ability to review the order, including shipping charges, before the order is placed.

The CatalogCity shopping cart is the most persistent of all of the shopping carts that I have seen. First, if you access the site but don’t set up an account, the cart will persist from browser session to browser session, but only with the same browser and machine. However, if you create an account with CatalogCity and sign in each time you access the site, the shopping cart will follow you from one browser to another, and from one machine to another. In fact, of all the sites I reviewed for this article, CatalogCity is the only one that provided this functionality.

To summarize the CatalogCity shopping cart technology, the company has provided the best support for shopping cart persistence of all the sites visited. In addition, the company provides easy access to the cart, and provides a running total on the shopping cart page. CatalogCity also gives you a chance to review and modify your order, as well as review the shipping charges, before the order is placed. About the only negative aspect I found with this site’s shopping cart technology is that the site does not indicate on the first page that the shopping cart has items in it. If CatalogCity had provided this, I would have given the site a score of 10, but I’ll have to give it a score of 9.

Reel.com

Reel.com is an online video store that sells new, and used, VHS and DVD videos. It has an excellent selection of movies and a nicely organized site.

Reel.com uses cookies to set a user ID when you first access the site. When you access a specific item, the site uses ASP pages, and ASP (Microsoft’s server-side technology) sets a cookie with a session ID when the first ASP page is accessed. After that, no other cookies are set. All shopping cart items are maintained on the server.

ASP, or Active Server Pages, was the technology that seemed to be most used at the different online stores. ASP technology originated with the release of Microsoft’s Internet Information Server (IIS), but has since been ported to other Web servers, and even to Unix, by a company called ChiliSoft.

ASP provides for both server-side scripting as well as server-side programming with ASP components. In addition, ASP provides full support for data access through Microsoft’s ActiveX Data Objects (ADO) technology.

One cookie that is set with ASP is the session ID. When you access an ASP site for the first time during a browser session, ASP tries to set a session ID cookie to maintain a connection between your browser and the Web server. Without this, it is very difficult to maintain information about the session, such as shopping cart contents, from Web page to Web page.
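For the curious, here is a rough sketch of what server-side cart storage can look like in classic ASP, written in JScript (Microsoft’s JavaScript dialect, which ASP supports alongside VBScript). The Session object rides on ASP’s session ID cookie automatically; the “cart” key, the delimited-string format, and the ?sku= convention are my own inventions for illustration:

  <%@ Language="JScript" %>
  <%
  // Keep the cart as a delimited string in the Session object,
  // which ASP ties to the browser via its session ID cookie.
  var cart = Session("cart");
  if (cart == null) {
    cart = "";
  }
  // Assume the item to add arrives as ?sku=A100 (my own convention).
  var sku = "" + Request.QueryString("sku");
  if (sku != "" && sku != "undefined") {
    cart = (cart == "") ? sku : cart + "|" + sku;
    Session("cart") = cart;
  }
  Response.Write("Items in cart: " + (cart == "" ? 0 : cart.split("|").length));
  %>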

Reel.com does not provide a running total for the purchases on the shopping cart page, and does not provide a visual indicator that items are in the shopping cart from any of the pages, except the shopping cart page. The company does provide easy text-based access to the shopping cart from each page and does allow you to change the quantity of an item ordered, as well as remove an item from the cart.

Reel.com provides shipping and full order information for the customer to review and modify before the order is placed, and the order page, as well as the shopping cart, can be printed out.

Reel.com does not provide persistence beyond the session. Once you close the browser, the shopping cart is gone.

To summarize, Reel.com did not score very high, as it failed to meet many of the requirements for a shopping cart. It didn’t provide a visual cue about shopping cart contents, at least on the first page of the site, nor did it provide a running total on the shopping cart page. The biggest negative, though, was that the site did not maintain the shopping cart persistently outside of the browser session. Reel.com did provide the ability to review and modify the order before it was placed, but based on the requirements met, I would have to give Reel.com only a 4 for shopping cart technology.

Summary

Four different online store sites, each using different techniques to support the site’s shopping cart.

All of the sites used cookies to establish a connection between the browser session and the shopping cart. In addition, each site provided shopping cart pages that could be printed out, and provided the ability for the customer to review and modify, or cancel, the order before it was placed.

The highest scorer in my evaluation is CatalogCity, primarily because of their support for persistence across different browsers and machines. This was followed by Amazon, which provided for browser and machine specific persistence.

Both Reel.com and Beyond.com dropped drastically in points because of their lack of shopping cart persistence, in any form. However, Beyond.com did provide feedback as to shopping cart contents, something that CatalogCity and Amazon did not. Beyond.com may want to consider dropping their line item display of the shopping cart, as this can be distracting. They were also the only online store maintaining their shopping cart in client-side cookies. While this approach has the advantage of being the quickest way to display the shopping cart contents when the customer wants to review them, and is the simplest technique to use, I still don’t see it as the way to go for online shopping.

If we could take CatalogCity’s persistence, add to it a running total with estimated shipping charges, and provide feedback on the cart contents on at least the home page of the site, we would have, in my opinion, the perfect shopping cart.

The next article in the Technology Snoop series will be on order processing and order status maintenance. In this you’ll see that Amazon shines. You’ll also read about how not to win customers and to lose the ones you have when I add some new online stores to the review.

Categories
Weather

The Ice Storm

Recovered from the Wayback Machine.

In 1997 I woke up with a new decision: I wanted to move to Vermont. I talked this over with my now ex-husband and we chatted and thought about it and decided that yes, we would move to Vermont from our home in Portland, Oregon; and move we did, in May of 1997 to a small farm located in Grand Isle, in the middle of Lake Champlain.

Photo of ice storm, courtesy of NOAA

Our first winter was pure Currier and Ives, from soft snow covering the yard, to spotting our first cardinal, to watching a red fox hunting in the back yard. Snow fell about Halloween and stayed from that point on, but it wasn’t a problem getting around. Until January, that is.

It started as the oddest sort of rain, and we walked out into the yard to see it up close. We’d seen freezing rain in Oregon, but not like this, and not so heavy. A short time outside left us glazed, covered in a thin layer of transparent ice, like jelly beans after getting their confectioner’s coating.

We, very carefully, made our way back inside, and spent the evening looking at the ice building up on the deck, the power lines, and the trees dotting our property. Reflected in the outside light, the ice was beautiful, but we also noticed that the power lines were hanging much lower than normal, and the tree branches seemed to be dipping too low to the ground. However, there’s not much you can do about an ice storm, so we went to bed.

In the middle of the night a large crash woke us. We ran to the bedroom window, opened it, and looked outside to see what was happening. Another crashing sound came from a stand of trees that stood between us and one of our neighbors. Another crash sounded, and another, and another, and so on into the night, as tree branches, weighed down by the ice, broke and fell to the ground.

Nature had created an invisible monster that was decimating all of the trees around our house, and this monster worked tirelessly through the night. The next morning dawned cold and clear, and most of the trees around us lay shattered. Overhead a group of National Guard helicopters flew by, the only thing moving anywhere.

You would have thought we had been at war. Instead, this was the aftermath of the Great Ice storm of 1998.


What causes Freezing Rain and Ice Storms

Particles of moisture are present in all clouds, but this doesn’t result in any form of precipitation until the particles become too large and heavy, the state of equilibrium they existed in is upset, and rain (or some other precipitation) is the result.

Freezing rain is a specific form of precipitation, as are rain, drizzle, sleet, and hail. It is the result of water particles falling as snow, then encountering a layer of warm air where the particles melt into rain. However, before the rain hits the ground, it passes through a layer of cold air just above the surface. The rain is cooled, forming supercooled drops: drops that are cooled to below freezing but are not frozen.

When these supercooled drops hit the ground, they spread thin and freeze instantly, forming a thin film of ice.

There are actually two forms of this supercooled precipitation: freezing rain results from droplets 0.5 millimeters or larger in size; freezing drizzle forms from droplets smaller than 0.5 millimeters. Both types, however, can be deadly.

Even when very thin, freezing rain (also known as silver thaw or glaze) is dangerous; you’ve experienced this yourself if you’ve ever tried to walk on a surface covered with the ice, or worse, tried to drive on it. But the real danger with freezing rain is the build-up of ice on trees and power lines. The weight of this ice is enormous, and can cause tree limbs to break, fell large trees, and cause power lines to sag and break. A severe ice storm can be as deadly as a hurricane, though usually not as devastating to property.

Why was the Ice Storm of 1998 unique?

The Northeast does get hit with freezing rain in the winter, but the storm that occurred in January, 1998, was unique. Looking at the storm from the perspective of its impact in Canada, the amount of precipitation totaled 100 mm (about 4 inches) in Montreal. The last big ice storm in Canada occurred in 1961, and that one only dropped about 30-40 mm of ice. This is a significant difference. Think of 4 inches of ice coating a tree branch or street or power line, as compared to ice about an inch thick.

Add to this the extreme cold in the northern reaches of North America in January, and the loss of power. If you don’t have a wood stove, you can freeze to death; several people did die of hypothermia, on our island as well as elsewhere in the US and Canada. Others died from falling limbs and poles, or from carbon monoxide poisoning caused by using kerosene stoves in areas not adequately ventilated. More than 25 deaths were blamed on this storm.

All Storms End

The National Guard, power crews, and other public workers were outstanding, seemingly working for days without taking a break to re-establish power lines, make sure people had firewood, or evacuate those folks in need of help. We even had crews come in from Hawaii.

The job of restoring power was enormous, with millions without power, then hundreds of thousands, then thousands, and finally, weeks later, down to the last few hundred.

As power was restored, we were asked to put signs in our yard if we saw that a neighbor had power and we didn’t so that crews could tell where work still needed to be done.

During this time, we picked up broken branches and piled them by the side of the road for hauling: lots and lots and lots of broken branches, and some pretty big piles. Luckily, the damaged trees could wait until spring to be pruned, with splintered ends sawed neatly off and treated to prevent pests from further damaging the trees.

We, my husband and I and our cat Zoe, were also lucky, in that we were without power for only three days. Additionally, we had a wood stove that could heat about half the state, and which could also be used for cooking. I even had my laptop and could check my email once a day (we did have phone service). This, our books, and the solar-powered radio (if you don’t own one, buy one) got us by.

Others did not fare as well as we did, and went for weeks without power, living in shelters, or trying to make do in their own homes.

My husband and I thought that this event would finally break through the reserve of our neighbors, as we stopped by several, offering to pick up supplies for them as we made a run into town as soon as the roads were passable. And the folks did seem grateful. But as soon as the ice cleared, they went back to being islanders, and we went back to being outsiders.

Seems only certain types of ice thaw.

Categories
Web Writing

Dynamic Web Publishing Unleashed: Chapter 37 – The Future of Web Publishing

Recovered from the Wayback Machine.

IN THIS CHAPTER

  • The Chaos of Change
  • The Current Technology Platform
  • A Summary Review of the Technologies Covered in the Book: Where Will They Be in a Year?
  • Client Scripting and CSS1 Positioning

With an industry that seems to go through a new revolutionary change at least four times a year, it’s hard to predict where Web publishing will be in four months, much less the next couple of years. However, taking a look at existing technologies and their state and at some new technologies can give us a peek through the door to the future, even though the crack we are peeking through might be pretty small.

First, many of the technologies covered in this book existed two years ago and will continue to be around two years from now. That means Java, which was introduced in 1995, HTML, the scripting techniques, and the basic Web page, which will continue to consist mainly of text and an occasional image. It’s hard to say if some new and incredibly different Web page development technique or tool will appear in the next two years, but regardless, people will also continue to use what is available now. In fact, the technologies introduced this year, such as Dynamic HTML and CSS1, will begin to become more familiar in 1998, and only begin to become mainstream technology in mid- to late 1998.

Web servers don’t seem to increase in technical capability exponentially the way Web page technology does. The real key to server technology is fast, reliable, and secure Web content access. The servers will become faster, hopefully more reliable, and the security should grow to meet the increasing demands placed on these servers to support commercial transactions. Additionally, there are new methods–particularly in the realm of commerce–that will work with existing server technology. Those are discussed in this chapter.

There are new technologies that have barely begun being explored this year. Channels and push technology started out with a bang and nearly ended with a whimper. Web consumers just didn’t buy into the new technology. However, with the built-in channel capability Netscape Navigator and Microsoft’s Internet Explorer have now, and with simpler channel development processes, channels can be considered down, but not out.

The real key to the future rests with standards as much as with implementation. The Document Object Model (DOM) working group of the W3C should present a first working draft of DOM by the end of 1997. DOM covers which HTML elements are exposed and, to an extent, in what way these elements are exposed, what their standard properties and events are, and how these elements relate to each other. If HTML doesn’t meet your needs, just wait: XML describes how to extend any markup language to create an element such as <BUTCHER> or <BAKER> or, yes, even <CANDLESTICK_MAKER>. This chapter closes with a review of the new technologies currently under review and design.

The Chaos of Change

Sometimes you might feel you have to spend 24 hours a day just to keep up with the technology being released. Needless to say, this is both frustrating and discouraging, all at the same time.

Web development does seem, most of the time, as if it undergoes a revolution in technology every three months; many times one specific aspect of the technology undergoes a change only about once per year. However, preceding the release of the changed technology is a period when the technology is being reviewed, or the product is being beta-tested, or some form of pre-release activity is occurring. Then, the release of the standard or technology or product occurs, and there is a period of comparing it with its older version, or with other products. Then, you have to spend some time learning how the new technology works, how to migrate older pages or applications, or checking to see if existing pages or applications break with the new release. Finally, just when you think you are starting to become comfortable with the new or modified technology, the company or organization announces the new release of whatever the product or standard is.

Consider also that Web development is made up of several different technologies, including browsers, standards, embedded object technology, and server technology. Putting all of these aspects in one category–“Web Development”–and considering the multiple-phase delivery of most Web technology, provides what seems to be continuous change.

As an example, in 1997 it probably seemed as if a new browser were being released every quarter. Well, what actually happened is that there were minor bug fix releases of Netscape Navigator 3.x and Internet Explorer 3.x in the year’s beginning, and Netscape also released several different beta versions of Navigator 4.0 before it actually released Navigator 4.0. After the release, there have been several enhancement and bug fix releases of Navigator 4.0.

Microsoft also released two major beta releases of Internet Explorer and released the final version about the time this book went to editing. There will likely be enhancement and bug fix releases for IE 4.0 before the year is out.

Add the international releases to these, and you have a browser release on average about every three weeks, not months.

Also consider that browser manufacturers themselves are at the mercy of the release of new standards or new versions of existing standards. The year 1997 saw the beginning of the DOM effort, a new version of the HTML specification, HTML 4.0, the rise in interest in XML, the passage of the ECMA standard for scripting, ECMAScript, and the recommendation of CSS1 for Web page presentation. And these are only some of the standards that impact browsers.

So, how do the browser manufacturers cope with the changing standards? The same way you can cope with all the other changing technology. First, define your Web development and Web client platforms: determine which technologies, including versions, make up each. Then concentrate on those technologies, complete the effort you planned for the defined platform, and then, and only then, begin to plan your next Web platforms.

The Current Technology Platform

For many companies and individual users, the current technology platform consists of Netscape Navigator 3.x or Internet Explorer 3.x for the browser, and Apache 1.2, Netscape Enterprise Server 2.0 or 3.0, O'Reilly's WebSite Pro, or Microsoft's Internet Information Server 2.0 or 3.0 for the server.

Most Web pages contain a combination of text and images, and most of the images are static. Many sites use some form of scripting for page interaction, most likely a form of JavaScript. HTML tables are used to handle the layout of HTML elements, as shown in Figure 37.1.

As you can see from the figure, you can actually create a fairly organized page using HTML tables. The page also uses the font element to color the sidebar text white; the color attributes of the table header and contents are used to set the header to red and the contents to yellow.
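
A minimal sketch in the spirit of that page follows; whether the actual page sets the colors through BGCOLOR attributes or FONT elements is a detail, and the text here is placeholder:

    <TABLE BORDER="0" CELLPADDING="6">
    <TR>
      <!-- Sidebar cell: the FONT element colors the link text white -->
      <TD BGCOLOR="black" ROWSPAN="2" WIDTH="120">
        <FONT COLOR="white">Home<BR>Products<BR>Contact</FONT>
      </TD>
      <!-- Header cell: the BGCOLOR attribute sets the header to red -->
      <TH BGCOLOR="red">Welcome to the Site</TH>
    </TR>
    <TR>
      <!-- Content cell: the BGCOLOR attribute sets the contents to yellow -->
      <TD BGCOLOR="yellow">Page contents go here.</TD>
    </TR>
    </TABLE>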

Animation in a page occurs through the use of Java applets, animated GIFs, Netscape style plug-ins, or ActiveX controls.

The version of Java used with most applets is based on the first release of the JDK–JDK 1.0.

Server-side applications are used to create dynamic Web pages, to present database information, or to process information returned from the Web page reader.

A Summary Review of the Technologies Covered in the Book–Where Will They Be in a Year?

A good rule of thumb when working with content for multiple versions of a tool is to support the currently released product in addition to one previous release. Based on this, you can count on supporting pages that work with Netscape Navigator 3.x and 4.x, and Internet Explorer 3.x and 4.x. As both browser companies begin the rounds of creating version 5.0 of their respective products, the business world will be cautiously upgrading pages to work with the new 4.0 technologies, particularly CSS1, HTML 4.0, and Dynamic HTML. By the time they have made the move to 4.0 technology, the 5.0 release of the browsers should be close to hitting the street.

The browser companies themselves probably follow similar reasoning, in that they support a specific number of versions of a standard, such as HTML, before they begin to drop deprecated content from earlier releases.

Standards organizations rarely release more than one recommended version of a standard each year. Sometimes they might go longer than a year before a new release, rarely less than a year.

Based on this, the technology you read about in this book should be viable for two years after publication of the book, which takes the Web into the year 2000.

The following sections look at each of the discussed technologies, with an eye on where each is likely to be on the eve of 2000.

HTML 4.0, CSS1

To start with the basics, the foundation of Web publishing is HTML, and this technology was explored in the first part of the book. Additionally, the first part of the book also looked at Cascading Style Sheets (CSS1) and Dynamic HTML. Dynamic HTML’s future is covered in the next section.

As this book goes to press, HTML 4.0 is the version of HTML currently under draft review. This version provides for increased form and table support, deprecates several existing elements, and adds a few new element types and several new attributes, such as intrinsic events.
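
For example, the new intrinsic event attributes let you attach script directly to an element. A minimal sketch, using attribute names from the current draft:

    <BODY ONLOAD="alert('Welcome to the page')">
      <!-- ONCLICK is one of the new intrinsic event attributes -->
      <INPUT TYPE="button" VALUE="Press Me"
             ONCLICK="alert('You pressed the button')">
    </BODY>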

HTML 4.0 should become a recommended specification at the end of 1997. Currently, Microsoft has incorporated the HTML 4.0 draft specification into the first release of IE 4.0, and Netscape has promised to adhere to the standard after it becomes a recommendation. Based on this, any changes to the HTML 4.0 draft will probably result in a minor revision release of IE 4.0. The HTML 4.0 changes for Netscape Navigator, however, will probably be extensive enough that the company incorporates them into the next release of Navigator, version 5.0. If Navigator 5.0 follows the Navigator 4.0 release schedule, you should begin to see early beta releases in the spring of 1998.

No new activity is occurring with the CSS1 standard itself at this time, at least not where Web page presentation is concerned. Additional style sheet specifications are underway, however: one for speech synthesizers (ACSS, or Aural Cascading Style Sheets) and a printing extension to CSS that allows accurate page printing, in addition to CSS-P, or Cascading Style Sheet Positioning.

At this time, tools that process, generate, or incorporate CSS1 in some form include HoTMetaL Pro 4.0 from SoftQuad (http://www.softquad.com), Microsoft’s FrontPage 98, Xanthus Internet Writer (http://www.xanthus.se/), Symposia Doc+ 3.0 from GRIF (http://www.grif.fr/prod/symposia/docplus.html), PageSpinner for the Mac (http://www.algonet.se/~optima/pagespinner.html), and others.

In the next year or two, Web pages will begin to incorporate CSS1 and HTML 4.0, though Netscape Navigator 3.x has a user base wide enough to prevent most companies from using only CSS1 and HTML 4.0 to create Web pages. However, both major browser vendors have promised support for these standards, and many Web page generation tools will integrate them into their new versions. As these tools are appearing as beta releases now, they should all ship as products in 1998. By 1999, most companies that want to control the presentation of their Web pages should be using at least some form of CSS1 and beginning the process of removing deprecated HTML elements from their pages.
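
The migration might look something like the following sketch, trading the deprecated FONT and CENTER elements for a CSS1 rule (the class name is invented for illustration):

    <!-- Before: deprecated presentation elements -->
    <CENTER><FONT COLOR="red" SIZE="4">Quarterly Results</FONT></CENTER>

    <!-- After: the same presentation expressed as a CSS1 rule -->
    <STYLE TYPE="text/css">
      H2.results { color: red; font-size: 14pt; text-align: center }
    </STYLE>
    <H2 CLASS="results">Quarterly Results</H2>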

You can keep up with the standards for HTML at http://www.w3.org/TR/WD-html40/. The standard for CSS1 can be found at http://www.w3.org/Style/.

Dynamic HTML and DOM

In 1997, with the release of version 4 of their browsers, both Netscape and Microsoft provided support for the first time for Dynamic HTML. Dynamic HTML is the dynamic modification and positioning of HTML elements after a Web page loads.

Dynamic HTML is a great concept and badly needed for Web development. With this new technology you can layer HTML elements, hide them, change their colors, their sizes, even change the elements’ contents. That’s the good news. The bad news is that Netscape and Microsoft have implemented different versions of Dynamic HTML–differences that are a little awkward to work with at best, and conflicting at worst.

Neither Netscape nor Microsoft has implemented a broken version of Dynamic HTML. When Netscape shipped Navigator 3.0 and exposed HTML images to scripting access, there was a great deal of discussion about Microsoft's "broken" implementation of JavaScript 1.1, the version of JavaScript that also shipped with Navigator 3.0. Internet Explorer 3.x was not broken; it simply did not implement the same scripting object model as Navigator 3.x. Now, with IE 4.0 and Navigator 4.x, the scripting object models are even more disparate, making it difficult to create Dynamic HTML effects that work equally well in both browsers.
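
The images episode also shows the defensive pattern that is likely to persist: test for an object before using it. Here is a minimal sketch, with invented image names, that degrades quietly in IE 3.x because that browser lacks the document.images array:

    <SCRIPT LANGUAGE="JavaScript">
    <!--
    function highlight() {
      // Navigator 3.x exposes document.images; IE 3.x does not,
      // so the test skips the effect rather than triggering an error.
      if (document.images)
        document.images["logo"].src = "logo_bright.gif";
    }
    //-->
    </SCRIPT>
    <A HREF="home.htm" ONMOUSEOVER="highlight()">
    <IMG NAME="logo" SRC="logo.gif" BORDER="0"></A>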

The solution to this problem could be found with the Document Object Model standardization effort currently underway with the W3C.

According to the W3C, the DOM defines an interface that exposes document content, structure, and style to processing, regardless of the language used or the platform on which the DOM application resides. The W3C defines the functionality of Internet Explorer 3.0 and Netscape Navigator 3.0 as level "zero" of the standard: presumably the functionality that both of these browsers support, which would exclude images.

At this time, the DOM working group has produced a requirements document, which includes items such as those in the following list:

  • All document content, elements, and element attributes are programmatically accessible and can be manipulated. This means that you can use script to alter the color of header text, or dynamically alter the margins of the document.
  • All document content can be queried, with built-in functions such as get first or get next.
  • Elements can be removed or added dynamically.
  • All elements can generate events, and user interactions can be trapped and handled within the event model.
  • Style sheets can be dynamically added or removed from a page, and style sheet rules can be added, deleted, or modified.

This list is just a sampling of the requirements for the DOM specification, but it is enough to show that once the DOM specification becomes a recommendation, the days of the static Web page will be over.
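
What might such script look like? The DOM working group has published requirements, not an API, so every method name in the following sketch is invented purely for illustration:

    <SCRIPT LANGUAGE="JavaScript">
    <!--
    // HYPOTHETICAL CODE: these method names illustrate the requirements
    // listed above; the working group has not yet defined an actual API.
    var header = document.getFirst("H1");    // query content: get first H1
    header.color = "blue";                   // manipulate an element attribute
    document.removeElement(header);          // remove an element dynamically
    document.addStyleRule("P { margin-left: 0.5in }");  // modify style rules
    //-->
    </SCRIPT>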

To see more about the DOM, check out the DOM working group's page at http://www.w3.org/MarkUp/DOM/.

Client Scripting and CSS1 Positioning

Excluding the objects exposed to scripting access as members of each browser's scripting object model, there aren't that many differences between Netscape's implementation of JavaScript and Microsoft's implementation of the language, JScript.

Scripting will continue to be used in the years to come; hopefully, the language will not grow so complicated that the ease of use of scripting languages begins to diminish.

Again, the major impact on scripting occurs with the elements that become exposed by the DOM effort. However, this is not a guarantee that the same script block written for Netscape’s Navigator will work with Microsoft’s Internet Explorer.

Consider each browser's implementation of dynamic CSS1 positioning. First, both companies support CSS1 positioning, a draft recommendation actually created by the two companies together. This standard provides style sheet attributes that control an element's width, height, z-order (the element's position in the stack when elements are layered), and the location of the element's left and top sides. The standard also provides attributes to control the element's visibility and its clipping area.
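
Here is a minimal sketch of these attributes, with invented names and coordinates, layering a line of text directly over an image:

    <STYLE TYPE="text/css">
      #photo   { position: absolute; left: 20px; top: 40px;
                 width: 200px; height: 150px; z-index: 1 }
      #caption { position: absolute; left: 30px; top: 60px;
                 z-index: 2; visibility: visible }
    </STYLE>
    <IMG ID="photo" SRC="photo.jpg">
    <DIV ID="caption">Text layered directly over the image</DIV>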

Figure 37.2 demonstrates how well CSS1 positioning works, showing a Web page that uses this technology opened in both IE 4.0 and Navigator 4.0. Note how the text aligns directly on top of the image (yes, the text and images are separate elements), and how the images align in a vertical line along the left side of the page, without an HTML table for layout control.

The example in Figure 37.2 is discussed in Chapter 8, “Advanced Layout and Positioning with Style Sheets,” and is located in the file images3.htm at this book’s Companion Web Site.

Figure 37.2. Using CSS1 positioning to control the layout of text and images.

While statically positioning elements using CSS1 positioning works equally well in both browsers, dynamically positioning them does not. Both browsers can create the same effects but use different techniques. Considering that standards usually define an effect or behavior but don't necessarily define a specific technique, you probably won't be seeing consistent scripting of HTML elements in the next couple of years.
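
The divergence is easy to see in script. The following sketch assumes an absolutely positioned element named "caption"; Navigator 4.x moves it through the layers array, while IE 4.0 moves it through the element's style object:

    <SCRIPT LANGUAGE="JavaScript">
    <!--
    // Same visual effect, two techniques.
    function slideRight() {
      if (document.layers)                       // Navigator 4.x
        document.layers["caption"].left += 10;
      else if (document.all)                     // Internet Explorer 4.0
        document.all["caption"].style.pixelLeft += 10;
    }
    //-->
    </SCRIPT>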

Java

As this is being written, Sun is on the verge of releasing JDK 1.2, Netscape just created a minor release to cover most of the classes released with the JDK 1.1, and Microsoft also supports JDK 1.1 in its release of IE 4.0.

The use of JavaBeans–Java components that can be packaged, distributed, and used and reused in applications–is among the technologies supported with JDK 1.1. It’s a very good idea and one that has already achieved popularity among Java developers.

However, not all is positive in Java’s future, particularly when used with browsers. The browser companies are usually one version release behind the current Java class releases. That is not a problem. What is a problem is a situation that may make creating cross-browser applets virtually impossible.

The difficulties with the original release of Java had to do with the Abstract Window Toolkit, or AWT, classes. For the most part, interface development in Java was difficult and lacked sophistication. To resolve this, both Microsoft and Netscape began work on their own interface classes, called the Application Foundation Classes (AFC) by Microsoft and the Internet Foundation Classes (IFC) by Netscape.

Netscape then joined Sun and combined Sun's current efforts with its own IFC library to create the basis for the Java Foundation Classes (JFC), due to be released with JDK 1.2. However, Microsoft had also spent considerable time on its own framework classes. At this time, the end result is Netscape and Sun supporting one set of classes and Microsoft supporting another.

To add to the problem, Sun also submitted Java to the ISO (International Organization for Standardization) to become a standardized language, and asked to be designated a Publicly Available Submitter (PAS), the group responsible for developing and maintaining the specification. At this time, the ISO/IEC joint technical committee, JTC 1, has voted against the Sun recommendation, with comments. Sun's response, in effect, is that it will pull the language from ISO and treat it as a de facto standard, meaning that the company retains control.

This is not a situation guaranteed to increase business confidence in the language. Add to this the difficulty of creating applets that use any kind of framework yet work with both IE and Navigator, plus the increasing sophistication of Dynamic HTML, and you may see a future decline in the use of Java for applets.

ActiveX

The new and exciting technology addition to ActiveX is DirectAnimation, DirectX technology extended for use with Java applets, controls, or scripting.

Being able to create ActiveX controls fairly easily using a variety of tools should lead to an increased popularity of these controls with companies whose intranets use Internet Explorer. The downside with the technology is that it is proprietary.

However, Microsoft also released several new visual filters that were originally ActiveX controls but are now built in as style attributes. These filters can change the transparency of a Web page element, make a line of text wavy, or add pinlights to a page. The technology is so much fun, and so simple to use, that there will likely be demand to add these filters to the DOM once it is released.

With this technology, you can create rollover effects for menus without downloading an extra rollover image.
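
A sketch of such a rollover, using Microsoft's glow filter; this works only in IE 4.0, and the element needs an explicit width for the filter to take effect:

    <!-- IE 4.0 only: a rollover effect with no second image to download -->
    <DIV STYLE="width: 150px"
         ONMOUSEOVER="this.style.filter='glow(color=gold, strength=4)'"
         ONMOUSEOUT="this.style.filter=''">
    Menu Choice
    </DIV>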

CGI and Server-Side Applications

Server-side application technology is already at a point where most companies' programming needs are met. CGI is still an effective server application technique and still works with most Web servers. If your company uses Internet Information Server, Active Server Pages is a viable application option, just as LiveWire is for Netscape's Enterprise Server.

One change you may see more of is the use of CORBA/COM technology and distributed processing, with Web servers acting as hubs within a distributed network. Browsers might become "interface containers" rather than Web page processing tools. With the increasing sophistication of Dynamic HTML, it won't be long before you are creating Web pages as front ends for company applications, alongside tools such as Visual Basic or PowerBuilder.

VRML

VRML is a wonderful idea that’s still looking for that killer application to take it out of the artist’s realm and plunk it directly into business.

Consider VRML's concept: you send a simple text file to a VRML-capable reader, which renders the text contents into incredible, interactive 3D "worlds." This is Internet technology at its best, as you have seen already with HTML and will probably see with XML.

With VRML 2.0, the Living Worlds specification, and the capability to integrate Web page scripting with VRML worlds, you are going to see more of this technology in use: for store catalogs, Web site maps, educational tools, and, yes, even to play games and have a little fun.

XML and Channels

Neither XML nor channel technology, which are related, has found a niche yet, but with the release of CDF technology from Microsoft and Netcaster from Netscape, this should change.

The concept of push technology started with a bang at PointCast’s release, and almost disappeared without even a whimper–a case of too much hype and not enough efficient technology. In addition, the channel content just wasn’t there.

The entry of Netscape and Microsoft into channel technology can only boost its use. Already, an increasing number of companies provide channels. Add in the companies using Marimba's Castanet technology, and you should see channels from an increasingly diverse set of Web sites in the next year.

XML is the Extensible Markup Language, a standard that lets you extend Web page markup with new elements: elements related to a company's business, based on some topic, or even packaged for reuse.

Microsoft and Marimba have proposed the Channel Definition Format (CDF), an application of XML, for use with channel technology. Apple has used a precursor of XML to create 3D Web site maps, generated automatically, that your reader can traverse to determine which page to access.
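
A sketch of a CDF file follows; the URLs are placeholders, and the element names come from the Microsoft/Marimba proposal, so they may change before any standard emerges:

    <?XML VERSION="1.0"?>
    <!-- Hypothetical channel definition; all URLs are placeholders -->
    <CHANNEL HREF="http://www.example.com/index.htm">
      <TITLE>Example News Channel</TITLE>
      <ABSTRACT>Daily news delivered as a channel</ABSTRACT>
      <SCHEDULE>
        <INTERVALTIME DAY="1"/>
      </SCHEDULE>
      <ITEM HREF="http://www.example.com/headlines.htm">
        <TITLE>Today's Headlines</TITLE>
      </ITEM>
    </CHANNEL>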

You can read more about XML at the W3C site at http://www.w3.org/XML/. You can read more about Microsoft's and Netscape's implementations at their respective sites or in Chapter 34, "XML."

Summary

Where will you be in the future of Web development? Not all that far from where you are now. New technologies seem as if they are popping up all the time–but they need time to get absorbed into the mainstream of Web sites.

One technique for incorporating the new technologies is to create a special set of pages for your Web site: not your main pages, but pages that demonstrate a product or idea using the new technologies. The set of pages shown in Figure 37.3, Figure 37.4, and Figure 37.5 is viewable only in Netscape Navigator 4.0 and Internet Explorer 4.0; these are additional pages for the site, while the site itself uses mostly mainstream technology, meaning some CSS1, scripting, some CSS1 positioning, HTML tables, and just plain text.

These pages use Dynamic HTML to hide and display content, as well as the menu, and CSS1 positioning to lay out the HTML elements.

The technologies discussed in this book are effective today, and will be effective into the year 2000. Best of all, they won’t break on New Year’s Day, 2000.