
A Tale of 2 Monsters Part 2: Cryptozoology

Recovered from the Wayback Machine.

Cryptozoology is a field of study devoted to researching animals known only from myths, eyewitness accounts, and legends.

Now, studying legendary animals is not as outlandish as it first seems once you learn more about the studies, the people conducting them, and the discoveries of the past. For instance, the mountain gorilla was considered purely mythical until its existence was proved in 1902; as recently as 1992, researchers found a new species of mammal, the large deer-like animal called the Spindlehorn.

Cryptozoology is not biology or zoology, as it isn’t a study of the known, nor does it search for new species based on scientific speculation. Cryptozoology is not mythology either: the subjects of the myths and folklore are assumed to be real, and scientific means are usually used to investigate the possibility of their existence.

Focus of Cryptozoology

The word “cryptozoology” literally means “the study of hidden animals”, or the “study of hidden life”.

Cryptozoologists operate on the principle of “where there’s smoke, perhaps there’s fire”. In other words, when descriptions of an animal arise from several different unrelated sources, particularly when the descriptions recur over a long period of time, there is a chance that the animal, or a variation thereof, actually exists. The cryptozoologists then study the descriptions and investigate them using scientific tools, onsite examination, and careful research.

Cryptozoology isn’t just limited to the study of unknown species of animals, but also includes interest in species rumored to be alive that were thought to be extinct, such as the Tasmanian Tiger and the New Zealand Moa. This is in addition to the study of known animals with members of a vastly different size than accepted by science, such as giant anaconda snakes up to 60 feet in length, or crocodiles up to 30 feet in length.

Cryptozoology also includes animals that are known to exist but are reported in areas outside of their normal habitats, such as cougars being reported in the Eastern part of the United States. However, this latter area is more of a borderline study, primarily because most animals appearing outside their normal locale do so because of some extraordinary event, such as famine or drought, or due to the intervention of man.

What are the Cryptids?

The term cryptid refers to one of the cryptozoological creatures currently being studied. Among the cryptids, some of the more famous are the following (pulled from a list at the International Society of Cryptozoology (ISC)2):

  • Nessie, the Loch Ness Monster
  • Bigfoot
  • the Yeti, aka the Abominable Snowman
  • Champ, the Lake Champlain Monster
  • Ogopogo, the Lake Okanagan Monster
  • Giant Octopuses and Squid
  • Sea Serpents


Who are the Cryptozoologists?

Many of the leading figures interested in cryptozoology come from largely scientific backgrounds. If one looks at the credentials of the current board for the International Society of Cryptozoology (ISC), one can see members that hold degrees in zoology, anthropology, biology, oceanography, and a host of other sciences.

Among these Board members3 of the ISC are folk such as Bernard Heuvelmans, the President of the ISC, a zoologist who also coined the term “cryptozoology”; Roy Mackal, a biologist most interested in the Mokele-Mbembe, rumored to be a surviving dinosaur located in the Congo; and Grover S. Krantz, an anthropologist best known for his Bigfoot research in the Northwest.

Other folks4 doing research within the realm of cryptozoology include Paul LeBlond, an oceanographer interested mainly in Caddy, the name given to a sea serpent spotted off the coast of British Columbia and thought to be a surviving member of the species cadborosaurus 5.

Outside of the Board of the ISC, but listed as a Life Member, is one of the leading figures in cryptozoology, Loren Coleman6. Coleman has an undergraduate degree in anthropology and a graduate degree in social work. He is currently working and teaching in the New England area. Coleman has been conducting research since the 1960s, is a filmmaker as well as an author, and has written several books on cryptozoology, such as “Cryptozoology A to Z: The Encyclopedia of Loch Monsters, Sasquatch, Chupacabras, and Other Authentic Mysteries of Nature”, and the recent “The Field Guide to Bigfoot, Yeti and Other Mystery Primates Worldwide”.

Loren has also written a biography of an early cryptozoology pioneer, Tom Slick, in a book titled “Tom Slick and the Search for the Yeti”.

Tom Slick sounds like an extraordinary individual. His father was a legendary oil wildcatter 7, an independent oil man who made millions, which Tom Slick, Jr. proceeded to spend researching various cryptids, such as the Yeti and the Loch Ness monster. In addition to his efforts in search of legendary beasts, rumor also has it that Tom Slick assisted in the escape of the Dalai Lama8 from Tibet.

Nicolas Cage is set to produce and star in a movie about Tom Slick, titled “Tom Slick: Monster Hunter” 9, and based on Loren Coleman’s book.

The Skeptics

Okay, now let’s look at the definition of cryptozoology a little more closely. To repeat, cryptozoology isn’t simply the search for new species of animals; that area of study belongs to zoologists and marine biologists. Cryptozoology is, rather, the study of unknown animals observed by non-professionals, observations that form the basis of rumor, legend, and folklore. In fact, if you were to blend some of the practices of anthropology and archeology in with zoology and biology, you would have the scientific tools ideal for conducting research as a cryptozoologist.

That said, you can imagine that studying animals on the basis of folklore and amateur sightings is not going to go unnoticed by the folks who don’t necessarily agree with the premise of cryptozoology. When researching cryptozoology online, you will see descriptions of it ranging from the mildly skeptical to the downright vehement. For instance, the Skeptic’s Dictionary contains the following definition of cryptozoology:

Cryptozoology relies heavily upon testimonials and circumstantial evidence in the form of legends and folklore, and the stories and alleged sightings of mysterious beasts by indigenous peoples, explorers, and travelers. Since cryptozoologists spend most of their energy trying to establish the existence of creatures, rather than examining actual animals, they are more akin to psi researchers than to zoologists. Expertise in zoology, however, is asserted to be a necessity for work in cryptozoology, according to Dr. Bernard Heuvelmans, who coined the term to describe his investigations of animals unknown to science10.

The leap between psi research and investigating animals based on folklore seems a bit of a stretch, but at least the definition provided isn’t nasty — more dismissive in nature.

Other folks aren’t so much dismissive as simply more interested in pursuing other fields of study.

One of the Web pages I visited while researching this article featured an interview with a Dr. Jeanette Muirhead from the University of South Wales, discussing another species of animal assumed to be extinct since 1936: the Tasmanian Tiger.

Dr. Muirhead has studied the Tasmanian Tiger, or Thylacine, and other carnivorous marsupials for a decade, though I could find no other verification of either her position at the University or her field of study.

The focus of the interview was that no physical documentation of the Tasmanian Tiger has been found since the last purported member of the species died, and that perhaps a better use of the resources spent investigating reports of the Tiger would be preserving the environment where the tigers are supposed to exist, indirectly helping them if they are alive, and definitely helping other possibly endangered animals:

If it does exist, the best resources we could probably put into keeping it alive would be to maintain its habitat. Also, because the money is not coming up with the goods to date, perhaps what would be better would be [for resources to be] going into preserving the habitat of other animals that appear to be on the brink right now. Rather than futile searches for things that probably don’t exist, we would be better off helping to retain the animals that are on the brink 12.

Dr. Muirhead has a valid point, though the interview transcript was unnecessarily derogatory, with sideline quotes from Monty Python damaging the credibility of the interview.

The gentler critics of cryptozoology are joined by those with much stronger views against the field, though these critics don’t necessarily attack cryptozoology per se as much as they attack specific instances of cryptozoological research. For instance, the Skeptic’s Dictionary, which had a fairly mild statement about cryptozoology, had less than mild, but pertinent, statements about the search for the Loch Ness Monster 13. That definition will be covered in more detail in Part 4, Nessie: The Loch Ness Monster, of this 4-part series.

Oddly enough, not all of the critics of cryptozoology are skeptics. A fairly explicit criticism of cryptozoology is the following:

SORRY FOLKS, CRYPTOZOOLOGY IS DEAD

And here is why: cryptozoology is just a child, feeding off the breast of moma zoology. Some would say a parasite. Finding new oxen, subspecies of monkeys, a new fox somewhere – these are just things that ZOOLOGISTS do. And the amateurs (“naturalists”) who help them.

As far as the major “CRYPTIDS” (Nessie,Bigfoot, Yeti,Black Cats,Black Dogs,etc) get this straight:

NONE, as in ZERO, NONE, NOT ONE,

have been found,dead, collected, at all, in the 40-2000 years that the searches for them have taken place.

“Cryptozoology” is a failure 14.

One can see that this person clearly has some very strong feelings about cryptozoology. In case your first reaction is that this person is an opponent to the belief in such things as Bigfoot and Nessie, think again. This person is a proponent, instead, of a field that they term “para-cryptozoology”, or the study of animals that aren’t just hidden, but instead are “dreamed” up by folks, and the dreams manifest into reality.

The field of cryptozoology does tend to get lumped in with UFOs, astral projection, ghosts, and other fields of research into the paranormal, at least on the Net. Unfortunately, by grouping all of these separate fields together, those interested in researching the paranormal actually discredit themselves rather than promote their beliefs, because the focus then becomes one of belief rather than one of research.

It is the same as saying “I believe in the existence of Nessie, therefore I believe in UFOs and ghosts, and the ability to project one’s self out of one’s body.” However, in reality, you could be interested in cryptozoology and a supporter of cryptozoology, and still think astral projection is nothing more than doggie doo doo.

Science not Belief

If the field of cryptozoology has not gone unnoticed by skeptics and other critics, it also, unfortunately, hasn’t gone unnoticed by the Believers, either.

What are the Believers? These are folks that believe in certain theories regardless of the evidence against the theory, and who are not willing to discuss any evidence other than that which promotes their theory and thus their own belief. Unless you think this type of believer is restricted to the field of cryptozoology, think again. I have seen such Believers in action in my own field of computer science.

From my understanding of cryptozoology, those who work or study in this field don’t necessarily “believe” in the creatures they study. They didn’t wake up one morning and go, “I believe in Champ today. I think I’ll set out to prove that Champ exists.”

Instead, for the most part, a cryptozoologist is as likely to be happy at finding evidence that the purported animal being studied does not exist, or is a member of an already known species, as at finding that the creature does exist. In other words, cryptozoologists, like other scientists, conduct their research with open, and relatively unbiased, assumptions about the subject of their studies.

A case in point is the “sea monster” found by a Japanese fishing trawler in 1977.

In 1977, the Japanese fishing trawler Zuiyo-maru accidentally dragged on board a large, decaying corpse unlike anything they had ever seen before. The captain of the boat decided to throw the carcass overboard rather than have it spoil the boat’s fish catch, but not before one member of the boat, Michihiko Yano, took several pictures of the corpse, as well as making measurements of the creature. Yano also took samples of tissue.

Excitement soon spread that what the Zuiyo-maru had caught was the decaying carcass of a modern-day plesiosaur, a marine reptile from the age of the dinosaurs that had somehow managed to survive to this time. This belief was so widespread in Japan that the Japanese government actually issued a commemorative stamp celebrating the find. News of the “sea monster” spread throughout the world, covered in stories in the New York Times, Newsweek, and Oceans magazine, as well as other major newspapers and magazines.

However, calmer heads began to prevail16. First, many scientists at the time believed that the carcass was that of a basking shark, as it had the right dimensions and looked very similar to other basking shark corpses that had been found. In addition, examination of the samples that Yano took showed that the tissue had properties extremely similar or identical to other basking shark tissue samples. A team of scientists led by Dr. Tadayoshi Sasaki published papers concluding that the corpse was most likely that of a basking shark, though without the corpse itself, their conclusions could not be definitive.

Finding that a creature under investigation is not a new unknown species, or one thought to be extinct, and using scientific methods to determine this information is just as much a part of cryptozoology as proving the existence of the species.

Not All is Harmonious

As with any other field, there are members of cryptozoology who have pet theories and regard anyone questioning those theories as suspect. And as with any other field, there is infighting as well as cooperation within the ranks of those interested in cryptozoology.

For instance, in the online pages devoted to cryptozoology a great deal of respect is paid to certain pioneers of cryptozoology, such as Loren Coleman and Bernard Heuvelmans, and from what I can see of these gentlemen and their research and efforts, the respect is rightfully deserved.

However, there is not a universal feeling of togetherness within the ranks of those who follow cryptozoology. In my wanderings about the Web I found a bit of name-calling between two people based on one research trip to Norway in 1998, in search of the Sea Serpent of Lake Seljord.

The search for the Lake Seljord sea serpent was conducted by GUST17, which stands for Global Underwater Search Team. GUST 98 was headed by Jan Sundberg and included Dave Walsh of Blather18 fame, as well as a camera crew filming the results for Discovery. From the accounts given by John Grove, who headed up the film crew, the expedition started out harmoniously but ended with some of the expedition members leaving in less than friendly circumstances. Additionally, both sides of the disagreement, primarily Dave Walsh20 and Jan Sundberg, indulged in a bit of web-based bashing of each other, though Sundberg has since pulled most of his critical pages in favor of posting pages for GUST99.

This expedition is the practice of cryptozoology at its worst. The leader of the expedition seems to lack the objectivity necessary for true scientific research. Scientific equipment was used, but from accounts of the expedition, the members were not properly trained in its use. And using scientific equipment or even scientific methods does not make for legitimate research if expedition members lack organization and a systematic plan of study.

In addition, the actual split in the expedition was over whether to sell a photograph taken by the team leader, Jan Sundberg, a photograph that he said showed the sea serpent but that looked to the other members of the team like a photograph of waves. Add to this a general disagreement over how the research was conducted, and eventually Dave Walsh and Kurt Burchfiel21 left the expedition.

For the field of cryptozoology, this entire trip sounds to have been a farce, and the worst of it is that the whole thing was filmed by the Discovery Channel’s film crew. Not exactly a poster expedition for the legitimacy of the field.

Members of expeditions and other working groups do disagree, though most are careful not to publicize their disagreements. However, those who pursue a field of study as controversial as cryptozoology can’t necessarily afford any adverse publicity about its practitioners or their methods.

Cryptozoology and our Friends: the Giant Squid and Nessie

So, how do the Loch Ness Monster and the giant squid relate to cryptozoology?

The Loch Ness Monster is probably one of the star creatures of cryptozoology, along with Bigfoot and the Yeti. In fact it is this association that tends to draw the most criticism, of one by way of the other. For instance, if you don’t believe in Nessie, and think research on Nessie is bunk, you will tend to scoff at calling cryptozoology a legitimate field of study. Conversely, if you believe that cryptozoology ranks up there with belief in ghosts and astral projection, and you think both of these are hogwash, then you are likely to discount any cryptozoological findings about the Loch Ness Monster, even if the findings are worth at least a first glance.

The giant squid, on the other hand, has had physical verification and validation, and there is no doubt of this creature’s existence. Still, the giant squid has not been observed alive in the wild, and its behavior, and even estimates of its size, are definitely the focus of many tales. Because much of the knowledge of this creature is still based on supposition, folklore, and tales, the giant squid maintains at least an honorary position within the field of cryptozoology.

So just what is known about the giant squid, and what is some of the folklore about this creature? Find out in Part 3, A Tale of Two Monsters: The Giant Squid.


A Tale of 2 Monsters Part 1: From the Legends

Recovered from the Wayback Machine.

Who is there that doesn’t love a monster?

What is a monster? One could say it is any creature bent on damage or destruction. This definition would then force us to include people in the category of “monster”.

However, when I think of “monsters” I think of the creatures of legends and tales, from the books and movies, and I think of the creatures that have entertained me for years. A definition of “monster” that I particularly like is from Webster’s Revised Unabridged Dictionary1:

Something of unnatural size, shape, or quality; a prodigy; an enormity; a marvel.

A marvel. That fits so well. We marvel at the unknown, we marvel at what may be around the door, under the water, out in space. With this definition, instead of a monster being something to fear, it becomes something that is really… marvelous.

This four-part Dynamic Earth article covers two famous monsters, the Loch Ness Monster and the giant squid: one creature known to be real, the other considered more myth than monster, depending on who you talk to.

Part One of the article focuses on what makes a monster, including an overview of what makes a legend, such as the tale of the great kraken, half octopus and half crab and large enough to destroy ships. Part Two then takes us from the realm of folklore to the realm of science, and discusses the controversial study of cryptozoology: the field devoted to investigating the possibility of the existence of animals from legends and folklore.

Parts Three and Four then concentrate on the two stars of the article: the Loch Ness Monster and the giant squid. Both true marvels. Both true monsters.

The Monsters

Though covered in more detail in parts three and four of this article, I did want to talk briefly about Nessie and the giant squid, specifically as they relate to our topic of legends and legendary beasts.

The giant squid could originally be termed a cryptid — an animal based purely in legends and folk tales — but scientists now have physical evidence of these creatures, including entire specimens. Based on this, one would assume that the giant squid now fits comfortably within the more traditional sciences such as zoology and marine biology.

However, there is still much about giant squids that is unknown, including the size they can reach. General supposition is that the giant squid reaches up to 60 feet in length, weighing in within the 1-2 ton range. Nevertheless, eyewitness accounts of the giant squid have put it at sizes longer than 100 feet! Not only that, but the behavior of the giant squid is also based more in rumor and in legend than in scientific observation, with stories of these creatures attacking people in the water, as well as attacking whales and boats. Because little is known about squid behavior outside of these legends, the giant squid still maintains a foot, or should I say tentacle, within the field of cryptozoology.

Studies of the Loch Ness Monster — or Nessie as it is affectionately termed — live solely in the realm of cryptozoology as there is no actual physical evidence of Nessie outside of some highly contested radar and other images in addition to eyewitness accounts. However, the eyewitness accounts of Nessie are numerous, the tales of Nessie have been told for years, and there is no direct physical evidence that Nessie does not exist. So the creature lives firmly entrenched in cryptozoology and not mythology as some would feel to be more appropriate.

Both Nessie and the giant squid are monsters in the truest sense. Both are large, much larger than people, both inhabit the world of water which is still foreign to most of us, and both are heroes of tales from throughout the centuries.

Tales of Monsters

We love to be frightened. We love scary movies, and ghost stories, and legends about evil beings, and movies with big monsters and aliens and other things that go bump in the night. Or day for that matter.

Does the best-selling author of our time write books about romance or suburban angst? No, the best-selling author is Stephen King, whose genre tends to be horror. Is the favorite ride at an amusement park the merry-go-round? No, the favorite ride is most likely the roller coaster — the bigger, faster, higher, and scarier, the better.

So, what scares us most of all? At least in that pleasant, shivery way we all seem to crave?

If you’re thinking a person wearing a mask, carrying an axe dripping with gore, you forget that I mentioned “pleasant, shivery”, not “grossed out and tense”. No, for the fun type of scare only one thing will do, and that thing is monsters. Preferably big ones that don’t stay in their own habitat but leave the water (ice, sky, ground, space) and come stomping with big oversize feet right at you. Well, not at you specifically, but the hero of the movie or book or story you are currently enjoying, the person who we can identify with because we are so caught up in said book, movie, or story.

How odd that we are frightened of these large creatures when we should be more frightened of the smallest creatures inhabiting the earth. After all, more of us die from disease and sickness caused by insects, bacteria, viruses, and mutated cells than from any other reason.

Very few people die from being *squished* by a large creature ala Godzilla.

Monsters have scared and entertained humanity since we first started drawing pictures of them on cave walls and telling stories about them around campfires. We liked stories of monsters so much, we even created legends about some of them, legends that survive to this day.

The thing about monsters, though, is they are best savored behind a curtain of ignorance. Once a monster is viewed up close, and looked at with the looking glass we call reason, it no longer has the power to scare us and we wonder that it ever did.

Imagine for a moment that you are an ancient sailor, sailing in a small boat about 20 feet or so in length. All of a sudden next to you appears this large behemoth of a creature, of a size that could make splinters of your boat and most likely of yourself.

The behemoth is a whale, a gray whale to be exact. Are you frightened? I’m asking the modern you this question. How can you be frightened of something that you might pay a ton of money to go see in boats as small or even smaller? But to that sailor of long ago — that’s you, too, remember — the whale must seem a monster sent by the gods themselves to drag you to a watery death.

So you pray to the gods and you ask for salvation and forgiveness from whatever evil you had done to be sent such punishment. Lo and behold, the behemoth slowly moves away! You in this modern day and age know that the whale has moved away because you a) aren’t food, b) aren’t an enemy (yet), and c) aren’t very interesting. But to you the sailor of the past, you know in your heart of hearts that you have been saved by divine intervention.

You also know that you have a real kicker of a story to tell when you get into shore, and a legend is born.

How Legends Begin

What makes a legend? In the previous section we can see how legends are born whenever we are confronted by something outside our experience. However, there are other factors that go into making a legend.

First, many of the earliest events in history were not originally recorded in writing but were, instead, told verbally, as stories. Sometimes accuracy was maintained… and sometimes the storyteller embellished the telling, making the story more interesting to the audience or perhaps more flattering to the story’s subject.

For instance, Alexander the Great was a real person, a key figure in history. We know this is true. However, there are an enormous number of legends about Alexander, of which my favorite is the legend of the Gordian Knot.

The legend of the Gordian Knot is that there existed in the town of Gordium, in the ancient land of Phrygia, an ox cart tied to a post with a knot so complex, with both ends hidden, that no one could untie it. Legend also has it that it was foretold that whoever would loose the knot would be conqueror of Asia.

Alexander the Great heard of the legend and decided to take a hand at undoing the Gordian Knot. After looking at it, he took out his sword and cut the knot in two, thereby “loosing” it with one simple, clean cut. To this day, coming up with a simple, clean solution to a supposedly complex and unsolvable problem is known as “cutting the Gordian Knot”.

Alexander the Great was taught by another well-known person of his time: Aristotle. One can’t help wondering what the teacher would have thought of the student’s solution. Would he have admired the innovative approach? Or would he have deplored the loss of a perfectly good rope?

Is the story true? Possibly. Or the story could have been fabricated as a form of propaganda from Alexander in order to provide popular justification for his aggressive tendencies. Regardless of its truth or not, the legend of the Gordian Knot remains to this day.

Another factor in the making of a legend is that humanity has never been especially graceful about admitting a lack of knowledge when faced with a new unknown, and can sometimes come up with the most outrageous explanations of an event or object.

As an example of dealing with an unknown, our earlier ancestors didn’t always have an understanding of planetary orbits, so an eclipse of the sun didn’t occur because the moon’s orbit brought it between the earth and the sun, blocking the view of the sun. No, the eclipse occurred because dragons were eating the sun. To stop these hungry reptiles, these same ancestors pounded on kettles and pots, making noise to chase the monsters away. If you think this is silly, think on this: the noise making worked and the sun did re-appear. Our ancestors may not have understood planetary orbits, but they did understand cause and effect.

The reason why our ancestors assumed dragons were eating the sun, or that gods controlled the weather and the seas is that they knew little about the world around them. Folks in the past didn’t understand that forces deep within the earth were responsible for earthquakes and volcanoes, and that weather was influenced by something such as the temperature of the ocean waters. To them, it would seem as if some external force was responsible for all unexplained events.

Consider some poor sailor in one of the small sailing craft that plied the sea centuries ago. The boat is moving along nicely and passing to one side of an island when all of a sudden, the island seems to be much closer than originally thought. Not only that, but huge teeth and arms seem to be reaching out of the water, grabbing the boat and dashing it into a million pieces.

We understand about things such as currents and lower tides exposing rocky shores, but our earlier ancestors may not have been as aware of such things. To them, it would look as if the island or the sea itself were alive and a monster had suddenly grabbed the boat to tear it apart. If at least one seaman escapes with his life and tells this tale, he plants the idea in other seamen’s minds and a legend begins to form. Sound silly? Well, the legend of the creature as large as an island is real, and the creature is known as the Kraken.


The Kraken


His ancient, dreamless, uninvaded sleep,
The Kraken sleepeth

So goes a poem by Lord Tennyson titled The Kraken.

Depending on the source you read, the Kraken is fabled to be the last of the Titans as well as a Norwegian sea monster. It is described as having a thousand tentacles and being a cross somewhere between a crab and an octopus, but on a much larger scale.

The Museum of Natural Mystery 5 includes a reference to an earlier written description provided by a 16th century Norwegian Bishop. This description states that the Kraken is a “floating island”, over 1 1/2 miles wide! Now that would be a monster to see.

Later tales of the Kraken have shrunk the creature down to a more palatable size but still maintain its ferocity, and stories about the creature attacking ships, sometimes pulling the ships underwater, have continued even into the 20th century.

Conjecture at this time is that the Kraken may actually have been a giant squid. If this seems far-fetched, consider that rumors exist that the giant squid can reach lengths of 100 feet or more. If so, a squid this size, weighing a couple of tons, could easily capsize a smaller sailing vessel. Even a squid 60 feet in length, the largest scientifically proven size, would not be something one would want to meet while going for a swim in the moonlight.

The Kraken has all the makings of a truly great monster. It’s large, it lives in the ocean, in the deepest parts of the ocean, and legends say that it has attacked people. Tales of these attacks aren’t frequent enough to become truly intimidating, just enough to give us that pleasant, shivery sensation.

Now if the legends of the Kraken are attributable to the giant squid, we haven’t lost anything in the exposure. So little is known about these creatures that they might as well exist in legend as outside of it. In fact, we are so attracted to the legends about the giant squid that we have made it a star. Or should we say that Jules Verne has made it a star in his classic tale 20,000 Leagues Under the Sea. And it is tales such as Verne’s and movies based on these tales that keep legends of monsters of the deep alive today.

Modern Day Legends: The Movies

Consider what was said earlier, that legends sometimes grow out of humanity’s ignorance as well as our fear of the unknown, and you can see the basis of many of the old science fiction movies of the 50s. Two common themes dominate these movies: the first is our fear of The Bomb; the second is our fear of what exists in regions hostile to man, the sea and space.

Interest in science grew enormously in the 50’s, especially interest in outer space. The race was on to put the first man into space and we all dreamed of a time when we, humanity, would ride large ships to other stars, preferably uninhabited stars. And that was the contradiction of the times — as much as we wanted to explore the unknown, we were also afraid of what we would find.

So, we had movies such as the extremely well done War of the Worlds6 and the not so well done Plan 9 from Outer Space7. It is a wonder we could sleep at night; our movies had creatures from every corner of the galaxy ready to fly in and wipe us all out.

Even the plants were dangerous.

If you are a serious fan of science fiction then you also had to have seen the original Thing8, with a pre-Gunsmoke James Arness appearing as a plant shaped like a man: strong, nasty, barbed, and with a thirst for human blood. This movie was an excellent example of the belief that if it was different, then it had to be evil and out to get us, “us” being relative. The movie also included subtle digs at scientists and their search for knowledge at the cost of endangering mankind.

The Thing highlighted the ambivalent attitude we had towards science in the 50s. As much as we loved science, we were also a bit frightened of it and those who were its practitioners. After all, if it was science that would send us into space, it was also science that brought us the very real horror of nuclear war.

Not all visitors to the planet had hostile intentions. One of the best movies made during the 50s was The Day the Earth Stood Still9, with the alien out to save us from ourselves. Our parents liked the message; we liked the big robot that could zap everyone to ashes.

It was the Bomb, and our fear of the Bomb (at least in hands other than our own), that became the second major theme of most sci-fi films of the 50s. We, the general populace, didn’t know exactly what side effects could be generated by this deadly weapon, so we made a few up, with a little help from the movie makers, of course. The two most common effects of the Bomb used in movies at that time were common creatures grown to a monstrous size, and extinct animals, primarily dinosaurs, being awakened.

If you grew up in the 50s and 60s, you were exposed to some wonderful movie monsters. By today’s standards the monsters probably seem clumsy and pretty fake, but at the time it seemed so simple to suspend your disbelief and let your imagination roam. A special favorite was Them!10, with its full-size monster ants. Them! was far superior to another movie of the time called Tarantula11, with its images of real-life spiders blown up and superimposed with the movie actors. However, both movies did share a common theme: insects growing to an enormous size because of radiation.

The monsters didn’t just crawl around on the ground. Another of the better movies of the time, It Came from Beneath the Sea12, featured a giant octopus that attacks San Francisco, courtesy of special effects master Ray Harryhausen. In this movie, the creature surfaces to try out the munchies on dry land, for a change in diet or some such thing. And guess who the munchies were, hmmm? Just call us Octo-Crunchies!

Another Harryhausen movie, The Beast from 20,000 Fathoms13, was about a prehistoric creature that was awakened from a frozen state by the detonation of an atomic bomb in the Arctic. This beastie decided to visit New York, taking in the sights, tearing down a few buildings, and noshing on one of New York’s Finest instead of pretzels in the Park.

With many of these movies, the special effects used were the best available, but the real key to the enjoyment of the movies was not the effects so much as the suspense, and the ability to generate that pleasant, shivery feeling. Particularly effective was the use of music. There is a distinctive sound that these old movies used when a creature was approaching our heroes, one that can’t be described, but if you hear it, you know it. By providing “hints” of what was about to happen, the film makers built anticipation, but also provided a gentle warning, so that the movie viewer was surprised by the appearance of the monster, but not so surprised or startled as to pass from pleasure to discomfort.

Lest you think great monster movies were only created in the 50s, some current movies have also created wonderful monsters. Steven Spielberg’s Jurassic Park is one of the best movies of all time, with its incredible effects, excellent story line, and adherence to some of the older movie formulas, most specifically by not being too graphic. In addition, as with the 50s movies, man rather than beast is the true culprit; this time the scientists messed with DNA rather than the atom. The end result, though, is pretty much the same: big critter eats smaller critter, and the smaller critter is us.

Oddly enough, out of all the movies that feature “monsters”, the most plausible movie monster is probably the giant squid from the Disney movie 20,000 Leagues Under the Sea14. Based on the Jules Verne book of the same name, the movie features a giant squid attacking the submarine that is the focus of both the book and the movie.

Though fictional, the squid shown in the movie and discussed in the story is not so large as to actually be outside of reality. Based on folklore and legend, the giant squid can reach sizes of 100 feet or more. Add to this eyewitness accounts of giant squids attacking submarines and other ships, and you move much closer to fact than fiction with this story.

In fact, Jules Verne himself had heard a story of a giant squid attacking a military ship and based his monster on this story. A case of legend possibly becoming fact, which is as good a lead-in as any to Part Two of A Tale of Two Monsters, covering Cryptozoology.

Categories
Technology

Adding dynamic content for multiple browsers & versions

Originally published in Netscape World May 1997, archived at Wayback Machine

One concern facing all Web developers when Microsoft or Netscape releases a new version of Internet Explorer or Navigator is how to incorporate some of the new technologies while still offering Web pages that are readable by older versions of the browser. The developer could consider forcing people who view the pages to use the newer versions of the browser, but this approach limits your audience, something most Web sites want to avoid.

Neither is it feasible for a Web site to limit its pages to those viewable by only one browser. Microsoft Internet Explorer has been steadily gaining market share since the release of IE 3.0. Most Web sites will need to develop content that will work with at least two versions of each of the major browsers: the current released version, and the version that is in preview release. The good news is that there are techniques that enable this cross-browser, cross-version capability. The bad news is that each of the techniques requires additional and, at times, extensive effort.

This article demonstrates two techniques to manage browser and version differences at a Web site. The first uses scripting to determine the type and version of the browser, and re-directs the browser to load a specific page. The second uses scripting and style sheet techniques to generate content, in one page, viewable by different browsers and versions. The article will also detail some rules of thumb to follow when creating a browser friendly Web site.

There is more than one way to re-direct browser input. One tactic is to create a separate Web page for each browser and browser version you want to support, and then have a CGI program load the appropriate Web page based on which browser and version is accessing the site. This is not a bad approach, and it works well if you want to support browsers that don’t handle scripting.

A scripting approach, based on browser and version, is to trap the onLoad event for the browser page and re-direct the Web page output to another page. You trap the onLoad event in the <BODY> tag, using the following code:

<BODY onLoad="change_document()">

The change_document function uses the navigator properties appVersion and appName to find out which browser and version are being used to access the page. These are then used to create a string containing the URL of the re-directed output:

<SCRIPT Language="JavaScript">
<!--
   function change_document() {
      var MS = navigator.appVersion.indexOf("MSIE")
      var MSVER = parseInt(navigator.appVersion.substring(MS+5, MS+6))
      var NSVER = parseInt(navigator.appVersion.substring(0,1))
      var locstring
      if (MS > 0)
         locstring = "diffie" + MSVER + ".html"
      else if (navigator.appName == "Netscape")
         locstring = "diffns" + NSVER + ".html"

      // only re-direct when the browser was recognized
      if (locstring)
         window.location = locstring
   }
//-->
</SCRIPT>

To see an example of script re-direction, try this diff sample.

The downside to this type of browser and version difference handling is this: if you want to support Netscape Navigator 2.x, 3.x, 4.x, in addition to Internet Explorer 3.x and 4.x, you will need to create five pages for each Web “page” at your site!

However, this approach is effective if you wish to apply it selectively at your site. Perhaps you want to have an interactive product page that makes use of all the fun dynamic HTML techniques each browser is implementing. This approach could be used for this product page only, giving you the freedom to use the specific browser/version technology to its fullest.
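The page-naming rule inside change_document can also be factored into a pure function of the navigator strings, which makes it easy to check against sample values outside a browser. This is a sketch rather than code from the original article; redirectPage is a hypothetical name, and the diffieN.html/diffnsN.html file names follow the example above:

```javascript
// Compute the re-direct target from navigator.appName and
// navigator.appVersion, mirroring the logic of change_document().
function redirectPage(appName, appVersion) {
    var ms = appVersion.indexOf("MSIE")
    if (ms > 0)
        return "diffie" + parseInt(appVersion.substring(ms + 5, ms + 6)) + ".html"
    if (appName == "Netscape")
        return "diffns" + parseInt(appVersion.substring(0, 1)) + ".html"
    return null  // unknown browser: do not re-direct
}
```

In the page itself, change_document would then reduce to checking the return value and assigning it to window.location when it is non-null.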

Another technique for handling browser/version differences is to use scripting and style sheets in one page and ensure that the scripting is directed at the appropriate browser and version.

I will demonstrate this with the next sample code. The example will change the background color for all of the browsers and versions. This is all it will do for Netscape 2.x. For Internet Explorer 3.x, a CSS1 style will also re-define the appearance of the <H1> tag. For Netscape 3.x, the image that displays when the document first opens is changed. For IE 4.0, the image is changed and the style sheet definition for the <H1> tag also changes (both the font size and color). Finally, for Netscape 4.0, the <H1> tag is also changed, but this time using Dynamic Style Sheets (DSS), meaning that JavaScript has been applied to JavaScript Style Sheet (JSS) elements.

First, I apply some style sheet definitions for the Web page. A JavaScript style sheet and a standard CSS1 (Cascading Style Sheets) definition are created:

<STYLE TYPE="text/JavaScript">

	classes.class1.H1.fontSize="24pt";
	classes.class1.H1.color="green";

	classes.class2.H1.fontSize="18pt";
	classes.class2.H1.color="red";

</STYLE>
<STYLE TYPE="text/css">
	H1.newstyle { font-size: 18pt ; color: red }
	H1 { font: 24pt ; color: green }
</STYLE>

These style sheet definitions provide for new formatting for the <H1> tag, and will be used in JavaScript functions that will be created a little later in this article.

Next, global variables will be defined that contain the type and version of the browser accessing this page. As these are global in nature, they will be available anywhere that JavaScript is used in the page:

<SCRIPT Language=JavaScript>
<!--
var MS=navigator.appVersion.indexOf("MSIE")
window.isIE4 = (MS > 0) &&
    (parseInt(navigator.appVersion.substring(MS+5, MS+6)) >= 4)

var NSVER=parseInt(navigator.appVersion.substring(0,1))

isNS4 = false
isNS3 = false

if (navigator.appName == "Netscape") {
   if (NSVER == 3) {
	isNS3 = true
	}
   else if (NSVER >= 4) {
	isNS3 = true
	isNS4 = true
	}
   }
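The same flag logic can be expressed as a pure function of the two navigator strings, handy for verifying the detection rules against sample values. This is a sketch only; browserFlags is a hypothetical name, not part of the original page:

```javascript
// Derive the isIE4/isNS3/isNS4 flags used in the page from
// raw appName/appVersion strings.
function browserFlags(appName, appVersion) {
    var flags = { isIE4: false, isNS3: false, isNS4: false }
    var ms = appVersion.indexOf("MSIE")
    if (ms > 0 && parseInt(appVersion.substring(ms + 5, ms + 6)) >= 4)
        flags.isIE4 = true
    if (appName == "Netscape") {
        var nsVer = parseInt(appVersion.substring(0, 1))
        if (nsVer == 3)
            flags.isNS3 = true
        else if (nsVer >= 4) {
            flags.isNS3 = true   // Navigator 4.x also counts as 3.x-capable
            flags.isNS4 = true
        }
    }
    return flags
}
```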

The next script is a function, change_document, which will change the background color of the Web page for all versions of the browsers that access it. Additionally, if the browser is Netscape 3.x or 4.x, it calls another JavaScript function, change_document3:

function change_document() {
  	document.bgColor="beige"

    	if (isNS3) 
		change_document3()
	}

The next JavaScript function is change_new which is only called by IE 4.0. This function will apply the new style definition for the <H1> tag that has an id of “myheader”, using Microsoft’s own version of Dynamic HTML:

function change_new() {
	var chgh1 = document.all.myheader
    	chgh1.className = "newstyle"
	}
//--> 
</SCRIPT>

The scripting block is closed, as the other functions will be created in different versions of JavaScript. First, using JavaScript 1.1, we create the change_document3 function, which will change the image shown in the page. At the end of the function the value of isNS4 is tested, and if true the function change_document4 is called. Note from the global variable section that isNS3 is set to true for both Netscape 3.x and Netscape 4.x:

<SCRIPT Language="JavaScript1.1"> 
<!--
function change_document3() {

    	if (!isNS4) 
	     document.thisimage.src="sun.gif"
    	else
	     change_document4()
	
	}
//-->
</SCRIPT>

Using the JavaScript 1.1 specification means that any script within this block will only be executed by a browser that is capable of processing JavaScript 1.1 script. This includes Netscape 3.x and 4.x, as well as IE 3.0x and 4.x.

The next function is change_document4, which is created in a JavaScript 1.2 scripting block. This function will be called only for Navigator 4.x. The page uses a <LAYER> tag to encapsulate its original contents. When this function is called, those contents are hidden, and new contents are created using a new Layer object:

<SCRIPT Language="Javascript1.2">
<!--
  function change_document4() {
	document.layers[0].visibility="hide"

	// note with following...another technique would be to 
	// create a second layer, set to invisibile, and use 
	// conditional comments to block for non-layer browsers...
	// not implemented, yet
	newlayer = new Layer(600)
	newlayer.visibility="inherit"
	newlayer.document.write("<img src='sun.gif' width=76 height=76 alt='sun'>")
	newlayer.document.write("<H1 class=class2> Header for this example page </H1>")
	newlayer.document.close()
	}
//-->
</SCRIPT>

I close the <HEAD> section. In the <BODY> tag, we trap the onLoad event for the Web page. The event handler checks whether the browser is Internet Explorer 4.0. If it is, the change_new, change_document, and change_document3 functions are called. For the other browsers/versions, only the change_document function is called. Additionally, I create the image definition and <H1> contents:

<BODY 
onLoad="if (window.isIE4) {change_new(); change_document(); change_document3();} else change_document();">
<LAYER>
<img src="rain.gif" width=76 height=76 alt="rain" name="thisimage">
<H1 id=myheader> Header for this example page </H1>
</LAYER>

Let’s take a look at this sample page in action.

This approach is just one of many that could be taken to determine the browser and version and execute only the appropriate scripting. Different scripting blocks were used for the different versions of Navigator, as there will usually be other functions and event handlers that are only implemented in a specific version. As an example, a new object for Navigator 3.x was the IMAGE object; for Navigator 4.x, it is the LAYER object. Enclosing the code in these version-specific scripting blocks ensures that a browser that is not capable of processing the object does not process the code.

One nice feature that Netscape implemented in Navigator 3.x, and one I hope it continues to implement in version 4, is the ability to overload functions based on versions of JavaScript. As an example, I created this test page that splits the functionality completely by JavaScript version. Each version has a function called change_document(). Navigator 2.x and Internet Explorer 3.x will access and execute the script they find in the topmost “JavaScript” block. Navigator 3.x will go for the section with the <SCRIPT LANGUAGE="JavaScript1.1"> tag.

As I write these words, Navigator 4.0 does not go for the section for JavaScript 1.2, but I will continue to test for this functionality and hope to see this in a future preview release, or the final release. IE 4.0 does execute the script in the JavaScript 1.2 section. Add in a little use of navigator.appName and you can duplicate the functionality created earlier by having the browser execute the right script by default.

Unfortunately, Microsoft does not seem to support the concept of versions with the use of VBScript (if it does, please let me know).

Browser-friendly Web page rules

Here are some good rules of thumb I use when creating browser-friendly Web sites.

1. Know your audience
Most of the people that visit my site are Web page developers, and I can alter and play with the contents knowing that most people viewing my site will be using the newest browsers, and most likely are using Navigator and Internet Explorer. If your Web site is for a bank, or a book company, or other non-computing related company, you may not want to use too much new technology.
2. Make a decision on browser support
After stating rule 1, I will now extend it by saying that you can’t please all the people all the time. You will want to make a decision as to whether the cost of providing support for a specific browser is worth the possible loss of visitors to your site.
3. Integrate new technology unobtrusively
If you create one page for multiple browsers/versions, make sure you use the technology carefully, and in such a way as not to detract from the overall style and meaning of the page. With the example shown in the last section, the page dynamically changes based on the browser, but the overall content (what there is of it) is not changed. Reserve your wilder instincts for special fun pages, and then implement the first technique given in this article to load browser-specific pages.
4. Always provide at least one text based page for your site, if not for each Web page
On pages where navigation is crucial, make sure you offer text-only links for users of obscure or non-graphical browsers, such as Lynx. Again, though, balance this with your known audience.
5. Be aware of those with special physical challenges
Do not use image maps without providing a text-based menu. Always provide an ALT property for any images you use. Do not rely on the newer technologies as the only method for communicating an idea, a product, a service, or for site navigation.
6. Test your Web pages with your target browsers and versions
Once you decide which browsers and versions you are supporting, always test your Web pages with all of them. This may mean you have to use multiple machines, or a system with dual-booting operating systems.
7. Have fun
If you find yourself becoming incredibly frustrated trying to get something that works easily with one browser or version to work with another, you might want to stop, walk away, take a break, and then approach your scripting challenge from a different perspective. If something will not work, then find what does work and find a way to apply it to your current problem.

Happy scripting!

 

Categories
Specs

Getting started with cascading style sheets

Originally appeared in Netscape World, now archived at Wayback Machine

Web page authors want to control more than what basic HTML provides, yet they also want their pages to display in the same manner across multiple browsers and multiple platforms. HTML provides the tools that allow us to create hypertext links, frames, tables, lists, or forms, but it does not provide fine control over how each object is displayed.

As an example, to create a hypertext link in a page, the Web page author would use the following syntax:

<A HREF="http://www.somecompany.com/index.html"> link page </A>

Interpreted by Netscape’s Navigator and Microsoft’s IE (Internet Explorer), the link would display on the Web page in whatever manner is determined by the browser.

To address this problem, Navigator and IE both support an extension to the <BODY> HTML tag that lets a Web page author change the color of unvisited, visited, and active links, as shown in the following statement:

<BODY link="#ff0000" vlink="#00ff00" alink="#0000ff">

This statement would display unvisited links in red font, visited links in green font, and active links (clicking on a link makes it active) as blue, unless the user overrides this in their browser. Many Web pages now define the color of the links for a page using this technique. However, the technique of providing specific display attributes for a tag becomes less workable when we consider tags such as the paragraph tag, which can be used many times in one page.

Following the trend set by the <BODY> link attribute, we would need to create display attributes for the paragraph (<P>) tag and then apply these attributes whenever we wish to display text in some manner other than the default. This, again, is workable, though the concept starts to become much more involved.

Where the idea breaks down is when the page author wants to change the color attribute of the paragraph, and then has to search through every Web page to look for where the attribute has been applied and make the modification. Additionally, if a company would like to provide standard formatting of all paragraph tags for all Web site pages, each Web page creator would need to be aware of what the standard was, and remember to consistently apply it. A better solution would be to change the attribute once per page, or even once in a separate document, and have it work on many pages.

Another option for applying formatting would be for the browser creators to create new HTML extensions to be used for presentation, such as Netscape did with the <FONT> element (see “What’s wrong with FONT”). This idea breaks down when one considers that other browsers viewing any material formatted by the new tag will not be able to see the material, or will see it in a manner that may make it illegible.

What is needed is a general formatting tag that one can use to create format definitions, which can be applied to one or more HTML elements. This technique of using one tag to cover extensions was used with the scripting (<SCRIPT>) tag and has worked fairly well.

Enter the concept of style sheets. Style sheets are methods to define display characteristics that can then be applied to all or some instances of an element, or multiple elements. Specifically, the W3C has recommended the adoption of CSS1 (Cascading Style Sheets).

What is CSS1

Style sheets provide formatting definitions that can be applied to one or more HTML elements. An example of a style sheet would be the following, which sets all occurrences of the <H1> header tag to blue font:

H1 { color: blue }

CSS1 extends this by defining style sheets that can be merged with the preferences set by both the browser and the user of the browser, or other style settings that occur in the page. The style effect cascades between the different definitions, with the last definition of a style overriding a previous definition for an element.
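As a minimal illustration of the cascade (not an example from the original article), two rules of equal specificity for the same element can appear in one style sheet, and the later one wins:

```css
H1 { color: blue }   /* defined first */
H1 { color: red }    /* same selector, defined later: H1 text displays red */
```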

As an example, the following will redefine the header tag <H1> to be blue, with a font of type “Arial”, size 24 point, and bold:

H1 { color: blue; font-family: Arial ; font-size: 24pt }

With this specification, any time the <H1> tag is used in a page, the text will be displayed in Arial, 24pt, blue font. We can define a second style for the <H1> tag using an inline definition that will override the first:

<H1 STYLE="color: red">

The difference between this and the first definition is that the latter redefines the formatting for the <H1> tag, but only for that specific use of the tag. It will not impact any other uses of the H1 tag in the document. If the style definition in the HEAD section had attached a weight to the style, using the important keyword, the original specification would have taken precedence over the second one:

H1 { color: blue ! important; 
         font-family: Arial ; font-size: 24pt }

Styles can be nested, as follows:

H1 EM { color: red }

Now, if the EM tag is used within an H1 header, the EM specification will apply to the text in addition to any other style specification given for the H1 tag. This type of style property is referred to in the CSS1 style guide as a contextual selector. Each element referenced in the line is analogous to an element within a pattern list, and the browser applies the style to the last element in the list that successfully matches the pattern it is processing.

W3C has recommended two levels of compliance for CSS1: core and extended. The standard can be seen at this W3C Web page.

At this time, only IE has partially implemented CSS1 in version 3. However, both Netscape and Microsoft have committed to implementing at least the core specification of CSS1 in version 4.0 of their browsers.

The rest of this article will give examples of using CSS1 that will display using IE 3.x. Unix users can download a testbed client, named Amaya, that will allow them to see the results of the style sheets. Amaya was created by the W3C and can be downloaded from this W3C Web page.

How CSS1 works

As shown in the previous section, formatting information can be defined for an existing element, and this formatting will apply to all uses of the element unless it is overridden or modified by other definitions.

The definition, delimited by the new HTML tags <STYLE> and </STYLE>, can be inserted into the <HEAD> section of the page, placed in a separate document, or inserted inline into the element itself.

As an example of embedding style information into the header of a document, the next bit of code will create a style sheet that will modify how paragraphs display in a page:

<STYLE type="text/css">
	P { margin-left: 0.5in; margin-right: 0.5in; margin-top: 0.1in; color: red }
</STYLE>

Now, with this definition, any paragraph on the page will have a margin of half an inch for both the left and right margins, a margin of one-tenth of an inch for the top, and red text. No other formatting is necessary to apply this style to every use of the paragraph tag in the entire document.

Another method allows the Web page author to define style sheets in a separate document that is then imported into or linked to a Web page. To import a style sheet, the @import keyword is used, as shown in the following syntax:

<STYLE type="text/css">
	@import url(http://someloc);
</STYLE>

The imported style sheet will merge with any styles defined directly in the existing page, or by the browser/user, and the resultant combined styles will influence page presentation. Note that IE 3.x does not support the import keyword, though this should be implemented in version 4.0.

The second method of including a style sheet file is using the LINK tag:

<LINK REL=STYLESHEET HREF="standard.css"
TYPE="text/css">

Using this type of tag will insert a style sheet into the existing Web page that overrides any other style definition for the page, unless style sheets have been turned off for the page. It is an especially effective approach to use when a company may require that all Web pages follow specific formatting.

Let’s see CSS work

Granted, if you go a little crazy using CSS1, your page is going to end up looking like something that will land you in a Federal prison if you sent it through the US Mail. An example of this can be seen in a page I call “expressionism with an attitude.”

Impartial observers would call it “the ugliest page they’ve ever seen on the Web.” However, with a little restraint (and of course, we all use restraint in our Web pages), CSS1 can turn a bland page into a grabber.

I have a Web page on my Scenarios site that uses a combination of display properties as defined by Netscape, style sheets as defined by Microsoft, and HTML tables.

Stripping away all but the most basic HTML tags leaves a page that has a lot of content, but without formatting is cold and not very interesting.

Unless the viewer was highly motivated to view the contents, chances are they would skip the page.

The first change to make is to add both a background image and background color to brighten the document up a bit. The full implementation of CSS1 allows the Web page author to specify whether a background image should repeat, and if it does, whether it will repeat horizontally or vertically. This is welcome news for those who have created really long, thin graphics to be able to give that attractive sidebar look to a page. Unfortunately, IE 3.0 does not implement this attribute, nor is it implemented with Preview Release 2 of Netscape Navigator. Instead, the image used in the example is one that can repeat gracefully. The style sheet is:

<STYLE TYPE="text/css">
	BODY { background-image: URL(snow.jpg) ; 
		background-color: silver }
</STYLE>

With the image, the style sheet also adds a default color in case the person accessing the page has turned off image downloading.

Adding the background image is a start, but the text is still a bit overwhelming and rather dull looking (but not reading, of course).

It would be nice to add a margin to the document, as well as changing the overall font to Times 12pt. In addition, modifying the formatting for both the <H1> header and the <STRONG> tags would help add a bit of color and contrast to the document:

<STYLE TYPE="text/css">
	BODY { background-image: URL(snow.jpg) ; 
		background-color: silver ; 
		font-size: 12pt ; font-family: Times;
		margin-left: 0.5in ; margin-right: 0.5in ;
		margin-top: 0in }
	H1 { font: 25pt/28pt ; color: navy ; 
		margin-top: -.05in ; margin-left: 1.0in } 
	STRONG { font: 20pt/22pt bold; color: maroon ; 
		font-family: Helvetica ; font-style: italic}
</STYLE>

At the time this was written, Netscape Navigator Preview Release 2, in Windows 95, only implemented the size units em and ex. The first unit is the height defined for the font; the second is the height of the lowercase letter ‘x’ in the font. The results are an improvement, but you still can’t easily spot the two sidebars that are inserted into the document.

To correct this, a generic class is created and named “sides”, which will contain a formatting definition that can be applied to any element. This is done by prefixing the class name with a period (‘.’):

.sides { background-color: white ; margin-left: 0.5in; 
	margin-right: 1.0in ;
	text-align: left ; font-family: "Courier New" ; 
	font-size: 10pt }

Looking at the page now, the sidebars stand out from the rest of the document.

The class is used with the <DIV> tag, which allows the formatting to span multiple elements until an ending </DIV> tag is reached:

<DIV CLASS=sides>
</DIV>

The class could also have been used in each individual paragraph tag that makes up the sidebar:

<P CLASS=sides>

Additionally, instead of a class, we could have created an ID attribute for the style:

#sides { background: white ; margin-left: 0.5in; 
	margin-right: 1.0in ;
	text-align: left ; font-family: "Courier New" ; 
	font-size: 10pt }

Using the identifier would be:

<DIV ID=sides>
</DIV>

W3C wants to discourage use of the ID attribute. The W3C wants people to provide classes for an existing HTML element that apply only to that element. Then, if people wish to cascade the effect, they use the parent-child style specification stated earlier with the <H1> header and <STRONG> tags.

Notice from the example, and only if you are using IE 3.01, that the background color for the class is only applied to the contents and not to the area represented by a rectangle that would enclose the contents. As this looks a bit odd in the example, it is removed from the definition.

In addition to removing the background color from the “sides” class, the next change to the document will add another definition for the STRONG tag to be used in the sidebars and formatting definitions for the hypertext links.

A hypertext link is referred to in the CSS1 standard as a pseudo-class, because browsers usually give a visited link a different look than one that has not been visited. This type of element can take a class style specification, but the browser is not required to implement it.

Another change will be a specific class definition of “sides” that differs from the original class definition and which will be used for specific paragraph tags:

.sides { margin-left: 0.5in; 
	margin-right: 1.0in ;
	text-align: left ; font-family: "Courier New" ; 
	font-size: 10pt }
 
STRONG { font-size: 22pt; color: maroon ; 
	font-family: Helvetica ; font-weight: bold}
STRONG.extended { font: italic bold 18pt/20pt Helvetica; color: red ; 
		background-color : silver }
 
P.sides { margin: 0.25in 0in 0in }
A:link { color : red }
A:visited { color : teal }

Using the <STRONG> tag with the extended style would look like:

<STRONG class=extended>

To use the original formatting, no class name is given.

The page is definitely improving.

The sidebars stand out and spacing has improved the ease with which the page can be read. Unvisited links stand out with the bright use of color, yet blend in unobtrusively after the link has been visited.

A final change is made, which is to add formatting to the lists contained in the page. The Web page has both an ordered list, where the elements are numbers, and an unordered list, where the elements are bulleted. Styles are added to each of these list types to display them more effectively. Formatting is added to the generic paragraph tag to indent the start of every paragraph:

OL { margin: 0in 0.5in 0in; font-size: 10pt }
UL { margin: 0in 0.2in 0in }
 
P { text-indent: 0.2in }

The lists now have new formatting, and all paragraphs are indented. With the cascading nature of CSS1, the paragraphs defined with the “sides” style inherit the indentation from the parent style, which is denoted by the ‘P’ selector without any class or identifier.
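To sketch that cascade, a paragraph carrying the “sides” class picks up both the generic paragraph rule and the class-specific one (the paragraph text here is illustrative):

P       { text-indent: 0.2in }        /* applies to every paragraph */
P.sides { margin: 0.25in 0in 0in }    /* adds margins; the indent cascades down from P */

<P CLASS=sides>This paragraph gets both the sidebar margins and the first-line indent.</P>

Only properties explicitly overridden in the more specific rule replace the generic ones; everything else carries through.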

The displayed Web page also makes use of several in-line style definitions, strategically placed to override some of the generic formatting options. A few paragraphs should not have their first line indented. Overriding the original paragraph specification is an in-line one that sets the indent to ‘0’:

<P STYLE="text-indent: 0in">

This turns off the text indentation.

The paragraphs that label the two figures included in the document are defined to increase the left margin another half-inch. As styles inherit from the parent element in which they are embedded, the figure paragraphs will have a left margin of one inch rather than a half-inch, as the new style is merged with the one specified for the entire document:

<P STYLE="margin-left: 0.5in; color: green; font-weight: bold">

The font for the figure paragraphs is also changed to be green and bold.

The paragraphs at the end of the document that contain the trademark and copyright information are also modified with an in-line style:

<P STYLE="text-indent: 0in ; font-size: 8pt; font-style: italic">

This style sets the font to be smaller and italic.

Positioning the elements

One improvement that would have helped the page is being able to position the sidebars to the side of the document and have the rest of the document “flow” around them, as happens with print magazines. Another would be to be able to specify a background color for the sidebars that would have “filled” the rectangle enclosing the contents, not just the contents themselves.

CSS1 defines formatting of elements but does not define positioning of them. To this end Netscape and Microsoft have collaborated (yes, you read that right) on a proposed modification to the CSS1 that would provide a standard specification for how elements can be positioned on the page.

The W3C proposal, “Positioning HTML Elements with Cascading Style Sheets”, provides the ability to define areas for the content to flow into. These areas can then be positioned relative to each other, using “relative positioning”, or at fixed coordinates using, what else, “absolute positioning.”

From the recommendation, an example of absolute positioning could be:

#outposition {position:absolute; top: 100px; left: 100px }

Using this style sheet in the document as follows:

<P> some contents
<span id=outposition> some contents defined for a different position</span>
</P>

This code results in the contents enclosed in the SPAN tag being positioned in an absolute space beginning 100 pixels from the left and 100 pixels from the top. The enclosing rectangle extends until it hits the right margin of the parent element, in this case the document, and the height is just enough to enclose all the contents. However, both the width and height of the element could also have been defined.
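As a sketch of that last point, the same rule with explicit dimensions might look like this (the width and height values are arbitrary examples):

#outposition { position: absolute; top: 100px; left: 100px;
	width: 200px; height: 150px }   /* the box no longer grows to fit its parent's margin */

With dimensions given, the contents wrap within the declared box instead of extending to the parent element’s right margin.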

Relative positioning allows elements to be positioned relative to each other, even if this means the elements overlap:

#newpos {position: relative; top: -12px }

The contents formatted by this style sheet are shifted 12 pixels up from where they would normally appear. The offset is relative to the element’s normal position in the flow, so the element may overlap the content above it while the space it would normally occupy is preserved.

In addition to positioning along the X- and Y-axes (horizontally and vertically on the web page), elements can also be positioned relative to each other on the Z-axis. This means that web developers will be able to layer elements on top of one another. An example pulled directly from the positioning paper is:

<STYLE type="text/css">
<!--
.pile { position: absolute; left: 2in; top: 2in; width: 3in; height: 3in; }
-->
</STYLE>
 
<IMG SRC="butterfly.gif" CLASS="pile" ID="image" STYLE="z-index: 1">
 
<DIV CLASS="pile" ID="text1" STYLE="z-index: 3">
This text will overlay the butterfly image.
</DIV>
 
<DIV CLASS="pile" ID="text2" STYLE="z-index: 2">
This text will underlay text1, but overlay the butterfly image
</DIV>

With this, the order of the elements is the image on the bottom, then the contents identified as “text2”, and finally the contents identified as “text1”. The elements are transparent, meaning that the lower elements show through the ones above, though this can also be changed using style sheet settings.

Another recommendation is the ability to define whether an element is visible: a hidden element still maintains its position in the document, while an element that is not displayed at all is removed from the page, including the space reserved for it.
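A minimal sketch of that difference, using the property names from the proposal (the class names are illustrative):

.hidden  { visibility: hidden }   /* invisible, but its space is preserved */
.removed { display: none }        /* removed entirely; no space is reserved */

An element styled with the first class leaves a blank gap in the page; one styled with the second closes the gap as if the element were never there.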

The ability to position HTML elements, to control their visibility, and to finally control how they overlap is a revolutionary change to HTML document design.

What’s next?

With Microsoft and Netscape both committed to the support of CSS1, and both participating in an extension to the CSS1 proposal to provide for positioning of HTML elements, creating HTML pages that display effectively in both browsers should be a snap. However, there is one element that was not discussed in this article and which can tear down the browser truce flag: dynamic movement of HTML elements.

As can be seen with the release of Navigator 4.0, Netscape supports script-based movement of elements with its LAYER tag and with a style sheet concept it calls JavaScript Style Sheets (JSS). With the March release of the Internet Explorer Preview, Microsoft supports dynamic content through its own version of Dynamic HTML, which uses CSS1 elements directly. Unfortunately, neither method will work with the other browser.

As with the problems that have been faced with JavaScript, mentioned in the Digital Cats’ article “Whose JavaScript is it, anyway?”, until Microsoft and Netscape agree on a standard scripting object model, you and I will continue to have to work around browser differences if we want dynamic content. Or use Java applets, and forgo all use of scripting.

 

Categories
Technology

A Simple Solution to the Complex Distribution Problem

Recovered from the Wayback Machine.

Any information systems group with a geographically split client base has a distribution problem: how do you notify the clients that a new version of the tool(s) they are using is out, what the version contains, and how to upgrade?

You can automate the upgrade process by using tools that detect that the application a person is accessing is out of date and upgrade it accordingly. The problem with this approach is that the user has no control over when the upgrade occurs and may be wary of an update they know nothing about.

You can take a more passive approach by sending an email or memo to all of your clients announcing that an update to the application is available in a certain sub-directory. The problem with this is that unless the update fixes a problem the client specifically wants fixed or asked for, they may not be willing to take the time to upgrade their own installation, and the IS department is then faced with supporting multiple installations running multiple versions of the product. Additionally, the application user has to find the upgrade, download it on their own, and install it, a multi-step process that can generate problems.

Is there a simple solution? There is a simple possibility.

Many companies are interested in porting applications to the Web in the form of an internal intranet for just this reason. Unfortunately, as many IS departments are finding out, most applications will not port to the web that easily. However, this is not the only way the web can be used to solve migration and upgrade problems.

Another approach is to use techniques such as Server Side Includes and the concept of the personalized user page. This is a web document that the user opens every morning; its contents have been personalized for the user and present information from categories they have chosen. A case in point: a person who works in Group A of Department 1B in Division J of Company XYZ will have a web page that contains new information pertinent to Company XYZ at the top, then information pertinent only to Division J, then information for Department 1B, and finally information for Group A.

A Server Side Include (SSI) is a command embedded in an HTML document that has been given a special extension, usually .shtml or .stm (the webmaster can determine what this extension is). The extension tells the web server to parse the document for SSI commands before sending it to the browser. An example of one of these commands is:

<!--#include file="company.html" -->

This SSI command will instruct the web server to open the file called company.html and load the contents of company.html at this point.
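As a hypothetical sketch, the personalized page described above could be assembled from one include per level of the organization (the file names here are illustrative):

<!--#include file="company.html" -->
<!--#include file="division_j.html" -->
<!--#include file="dept_1b.html" -->
<!--#include file="group_a.html" -->

The server stitches the four documents together on every request, so each group’s section is always as current as the file that group maintains.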

The advantages of this approach should be fairly obvious: the information is specific to the interests and needs of the individual, and is presented in such a way that they can examine it throughout the day if they need to; the information can include links to other documents if the person wishes to pursue more in-depth knowledge; and the document can be saved using most browsers’ Save As capability. Instead of a flurry of emails, which may or may not contain relevant information and which can be ignored or difficult to follow, you have one document that contains all the information.

Another advantage is that no programming is required, and the information for each department can be maintained by that department. Group A maintains the HTML document specific to them; if there is no new information, a blank HTML document or one containing the words “No New Information” can be supplied. Division J maintains its own HTML document, and so on. With the many simple-to-use HTML tools available, this is not a solution that requires programming intervention.

While presenting a unified approach to information presentation, each group maintains autonomous control over what information is presented.

For our distribution problem, if IS has upgraded software that is in use throughout Division J, a notice can go into the Division J section announcing that a new version of the software is being created, along with the bugs it will fix and the features it will add. When the software is ready, a link can be inserted into the document that allows the person to download the upgrade with one click. The user can then double-click on the file as soon as it is on their machine, and the upgrade process begins.

What is the advantage of this technique over others? The information about the upgrade is presented in the same format and in the same context as the upgrade itself. Information about the upgrade and the upgrade itself are only given to those who are impacted by it.

How can this solve the multiple version problem? After all the user can continue to ignore the upgrade notice and continue on their merry way. Well, this is where the concept of something called a Netscape Cookie comes in.

A Netscape cookie, implemented by both the Netscape and Internet Explorer browsers, is a tiny bit of information stored locally on the client’s machine that can be accessed by the browser when a specific web document is loaded. When the document containing the information about the upgrade is first loaded, this value is set. From that point on, every time the person accesses their personalized web page the cookie is read and its value decremented, and a message can be printed telling them how many days remain to make the upgrade.

If they make the upgrade, the cookie information is destroyed and the reader will no longer get the count down.

With additional sophistication, one can build the page so that the download no longer shows once the upgrade is made. To use this approach, persistent information about which HTML documents a specific person sees is kept in a file on the server. When the person logs in with their user name and password, this file is accessed. Instead of an HTML document, the person accesses an application called index.cgi. This application reads the file containing the person’s preferences and uses that information to determine how to build the page the person will see, opening each of the HTML documents that match the person’s preferences and printing them back to the browser in turn.

With this approach, after a person makes an upgrade their personal preference file is accessed and the entry that contains the upgrade information is removed. Not only will the person receive timely information that is pertinent to their needs, they will receive content that is dynamic and also matches their choices.

Finally, if the person still does not upgrade by a specific date an email can be generated automatically that will be sent to the IS department informing them of this information.

A sample Perl script that demonstrates the CGI-based approach and the use of SSI can be found here.