Categories
Specs, Technology

Browser, Browser Not

Originally published at O’Reilly

Recently, O'Reilly published a set of articles (Netscape Navigator 6.0 to Fail Standards Compliance, An Update, and Netscape 6.0 Released), written by the popular author David Flanagan, about the release of Netscape 6.0, Netscape's newest entry in the browser marketplace.

David presented several valid concerns about bugs still present in the release of Netscape 6.0. And it is true, Netscape 6.0 did release with several unfixed bugs, many of which will have an impact on support for W3C specifications.

Our reaction to the release, however, was somewhat different. Along with other application developers, we've been waiting for the public release of an application that uses Mozilla's XPToolkit, the set of software components from which both Netscape 6.0 and the upcoming Mozilla 1.0 are built. Now that Netscape 6.0, which uses this framework, has been publicly released, we're delighted: testing of XPToolkit may begin in earnest.

While many are focused on the release of Netscape 6.0, some of us aren’t. We’re more interested in the application environment created by the Mozilla team to support the implementation of browsers in general. To us, this framework is more important than the release of a new browser will ever be.

The reason for this is the changing face of the Internet itself.

The Changing Face of Internet Applications

Current Internet applications rely on a centrally located Web server to distribute HTML over HTTP to clients. Each client, or Web browser, renders the source and displays a human-readable page.

This architecture has become so popular that you can't pick up a magazine or a newspaper without hearing about Web servers or the new business models based on them. Although this architecture is based around universally located resources, most application-level resources are centralized and many other resources are hard to find. Some Web sites help you find other Web sites or "resources." Others go so far as to offer completely centralized applications, as Application Service Providers (ASPs).

New technologies will soon force us to rethink the way we use the Internet. Distributed systems, mobile agents, and peer-to-peer (P2P) applications may completely undermine the need for browser-based Internet access.

P2P applications are already stepping around the browser. The next step will be around the Web server.

Consider this: a P2P application that locates and downloads a new function. The simplest example here may be provided by a P2P execution framework that uses XML-based remote procedure calls between peers to marshal XML-encoded functions. Instead of hitting Web pages, each peer locates and accesses both data and functions among a network of peers. No Web servers.
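As a purely illustrative sketch, a peer might request a function from another peer with an XML-RPC call along these lines (the method name and parameter are hypothetical):

<?xml version="1.0"?>
<methodCall>
   <methodName>peer.getFunction</methodName>
   <params>
      <param><value><string>currency-converter</string></value></param>
   </params>
</methodCall>

The response would carry the XML-encoded function back to the requesting peer, which could then execute or install it locally.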

This scenario is not going to be best served by the traditional browser. Why?

The Limitations of Browsers

The things that made the Web browser a success in the beginning are the things that make it ineffective for new application models.

The browser was built to render files stored on Internet sites so we didn't have to muck about with FTP. As soon as content became more visible, people started publishing yet more content, so browsers rendered HTML, then XML, formatted with CSS or XSLT. However, the browser itself has a very limited interface, even with new advances in W3C specifications. Sophisticated browser pages mean using either complicated object models, with the cross-platform and cross-browser idiosyncrasies that tend to follow each new standards initiative, or page-embedded applications, such as Java applets and plug-ins.

Even when the browser follows standard specifications, working within a browser page to create a sophisticated interface isn’t a simple or uncomplicated task.

In addition to the browser growing more complex as content does, using it implies that applications ought to be served from one location, and in one manner. To do something such as make a remote procedure call, you would need to use a digitally signed Java applet or some other browser-specific and limited technique. This is a constraint that won't bother newer P2P applications.

Finally, browsers were designed to be safe, and operate in a protective sandbox. Web-based applications served via a browser have difficulty getting at the user’s machine. Though safe, this restriction also prevents behaviors that would have the application modify its user interface. And this dynamism is going to be necessary in an environment where new services require new application interfaces that can be downloaded as data.

An Internet Application Framework?

Mozilla made a tough decision a few years ago–to scrap the Netscape 4.x architecture in favor of one built from the ground up. In the process, this open source team created an application environment based on reusable and interchangeable components.

With this application environment in place, the team then proceeded to build a sophisticated browser. They threw in Internet Chat, a Web page composer, and other complex things, all of which were released recently as Netscape 6.0. Often forgotten is that a powerful application environment came with it. This environment is now usable by developers of other Internet applications.

What types of applications? Well, ActiveState, the company that provides popular implementations of Perl and Python for various operating systems, used Mozilla to create its Komodo product, a visual IDE for working with Python and Perl code. The user interface provides, among other things, syntax coloring, syntax checking, and source-level debugging.

So, we have a browser and an application that can be used to create and test Perl and Python applications, all built from the same application architecture.

This is exciting stuff! Much has been written about reusable code and component-based design, and now we have an open source application environment we can all use to build our own applications.

Even more exciting is the extensible user-interface language from Mozilla called XUL (pronounced “zool”). It’s based on XML, which means you can use XML to create a user interface. Combine this with the ability to make remote procedure calls, and you have a perfect place from which to commence building a bunch of P2P applications, based on the scenario mentioned above.
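As a small taste of what this looks like, here's a minimal sketch of a XUL window (the title and button label are invented; the namespace is the standard XUL one):

<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
<window title="Hello XUL"
   xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
   <button value="Say Hello" onclick="alert('hello');" />
</window>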

Now, instead of opening a browser, you can open an application built on the same framework as your browser, but with a sophisticated interface of dropdown menus and tabbed pages–all created using XML. You can access remote procedure calls at the touch of a button and when you’re ready to access a new service, click another button, and in a couple of minutes restart your application. New entries will be added to new or existing menus providing access to the new service. All this is accomplished without Java bytecode, a new plug-in, or a DLL.

You’ve just downloaded XML.

When you explore the possibilities of the XPToolkit from Mozilla, maybe you'll agree that Netscape 6.0 is more than just a standards-based, better-than-Navigator-4.x browser. It's the start of a new way of doing things on the Internet.

Categories
Technology

Digital Play Dough: Designing Applications with XUL

Originally published in Web Techniques

The XML-Based User Interface Language (XUL) made its first appearance with the release of Mozilla, the Open Source browser used as the foundation for Netscape 6. Pronounced “zool,” the language gives developers and designers an easy way to describe the design and layout of application windows in Web browsers. By modifying a few files, you can change the entire look of your Web browser or of the windows that pop open while a visitor browses your site. Prior to XUL, this was only possible by modifying and re-compiling the browser’s underlying source code. And in that case, you would have to distribute the modified browser to all your site’s visitors — an unlikely event. Fortunately, all you need to change the look and feel of a Web browser today is an understanding of the XML and CSS specifications and a little ingenuity.

Architecture

XUL applications consist of XML files created with .xul extensions. The files define the content of the application. Additional application data is located in Resource Description Framework (RDF) files. CSS files provide formatting, style, and some behavior for the application. JavaScript files provide scripting support. Multimedia files, such as PNG images and other audio/visual files, might also be needed for additional user interface information. All of these file types are based on specifications recommended by the W3C, and collectively they are referred to as the XUL application's "chrome" — the contents, behavior, and appearance of the application's user interface.

The Mozilla browser is itself designed as a XUL application. To manage the chrome for your browser, both Mozilla and Navigator have subdirectories labeled chrome, located off each browser's main directory. Within the chrome directory, separate XUL applications are packaged into separate subdirectories. Within each application directory, subdirectories further divide the application into content (containing the XUL, JS, and RDF files), skin (CSS files), and locale (DTD files).
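For example, a hypothetical package named myapp would be laid out something like this (the file names are invented for illustration):

chrome/
   myapp/
      content/   myapp.xul, myapp.js, myapp.rdf
      skin/      myapp.css
      locale/    myapp.dtd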

To deploy your own XUL application on the Web, you can either place all of the files within the same subdirectory on a Web server, or use the suggested chrome directory structure on the server. Note though that you may lose some functionality, such as localization, when your application is not using the chrome directory structure. Also, all files should be local to the URL of the main XUL file, otherwise the application may not work due to violations of built-in security.

A Simple Application

To demonstrate the structure of a XUL implementation, I created an application that is essentially a window with two panes — one on the left with a menu of hypertext links, and one on the right where a Web page can be viewed. Figure 1 shows the application.

To understand how I created this application, look at the sample XUL file shown in Listing 1. The first line of a XUL file is the XML declaration, which includes a reference to the version of XML used. Following that, the file must include a reference to a CSS file, to provide formatting for the XUL contents. The file then defines the application window and sets several properties — namely, the title, width, and height. Namespaces are provided for the application's elements. By default, all elements in the application are XUL elements, identified by the following namespace (required for all XUL files):

xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"

When completed, the application will use some standard HTML elements, and these are identified with the html namespace. Additionally, the application will have a few elements unique to the specific application, and these are identified by the Webtechniques namespace WT.
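Putting these pieces together, the opening of the file looks something like the following sketch (the title, dimensions, and the WT namespace URI are my illustrative choices):

<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
<window title="WT Article Viewer" width="800" height="600"
   xmlns:html="http://www.w3.org/1999/xhtml"
   xmlns:WT="http://www.webtechniques.com/wt"
   xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
   <!-- boxes, buttons, and the rest of the chrome go here -->
</window>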

Widgets

The different sections of a XUL application are contained in widgets known as “boxes.” Boxes don’t have any visual appearance themselves; their only purpose is to encapsulate several widgets as a whole, and to provide layout orientation for their contents. The orientation is defined with the orient attribute, and other attributes can be used to set the box width and height.

To demonstrate the use of boxes and page layout, Figure 1 shows a first version of the application, with buttons on the left, and a space to display the content on the right. The XUL file for this page is found in Listing 2.

In the listing, I added a box to the window contents and gave it an orientation value of vertical. This orientation forces all of the content to be positioned vertically in the pane. Additionally, the box is given a flex value of 1. The flex attribute can be applied to several different types of XUL elements. It’s used to help the layout engine determine how to expand an element to fill available space. For the outermost box in the page, the contents expand to fill the entire window.

In Listing 2, the outer box contains a toolbar and another, nested box. The toolbar and this new box are layered vertically inside the outermost box.

The nested box is given a horizontal orientation and contains two more nested boxes. These boxes are displayed left to right, and form the main panes of the application. Each of these boxes is given a vertical orientation, which means their contents will be aligned from top to bottom. A XUL splitter is used to separate the contents of the two boxes. This allows the user to resize either side of the application page by dragging the splitter either to the left or to the right.

Notice in the code that the first content box doesn’t have a set flex value. This means the box will be sized according to the width and height of its contents. The second box has a flex value of 1, and it expands to fill the remaining horizontal space.

The first box contains several XUL buttons, surrounded by widgets called springs. The spring element is used to help position elements, and has no visual characteristic of its own other than to occupy space. In this example, a spring is used on either side of the page buttons, to center the buttons within the box.

Also notice in the listing that the second box contains an HTML IFRAME element. As each listing for an article is accessed, the contents of the listing (a Web page) are opened in this IFRAME element. Because the element is an HTML element and not a defined XUL widget, the html namespace must precede the element tag.
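In outline, the layout described above comes down to markup along these lines (a sketch: the button labels, example URL, and the iframe id content are my illustrative choices, and the html namespace is assumed to be declared on the window as shown earlier):

<box orient="vertical" flex="1">
   <toolbar>
      <button value="Close Window" onclick="window.close();" />
   </toolbar>
   <box orient="horizontal" flex="1">
      <box orient="vertical">
         <spring flex="1" />
         <button value="Listing One"
            onclick="document.getElementById('content').setAttribute('src', 'http://www.example.com/one.html');" />
         <spring flex="1" />
      </box>
      <splitter />
      <box orient="vertical" flex="1">
         <html:iframe id="content" flex="1" />
      </box>
   </box>
</box>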

More Widgets

The application terminates when the user clicks on the Close Window button in the application’s toolbar. To add a button widget to a page, you’d use the button element as follows:

<button value="Close Window" onclick="window.close();" />

The default appearance for the button is provided by CSS files that are included with Mozilla/Navigator.

To provide feedback to the user whenever a page is being loaded in the right pane, I have decided to extend the code by adding a progressmeter widget (see Figure 2). The progressmeter widget can have a mode of determined or undetermined. A determined meter is one in which the developer knows the exact length of time of an operation, and controls the meter accordingly. The undetermined meter is used when the length of time for an operation is unknown. Accessing a Web page falls into the undetermined category.

To support the use of the meter, I add JavaScript code to activate the meter. To do this, I put my code in a JavaScript .js file, and save the file in the XUL application's content directory. The code is shown in Listing 3.
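The code amounts to something like the following sketch (the element ids meter and content, and the endLoad hook, are assumptions on my part):

// load a page into the right pane and animate the meter while it loads
function loadItem(url) {
   document.getElementById("meter").setAttribute("mode", "undetermined");
   document.getElementById("content").setAttribute("src", url);
}

// stop the meter; assumed to be wired to the iframe's load event
function endLoad() {
   document.getElementById("meter").setAttribute("mode", "determined");
}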

I then create a reference to the JavaScript file in the XUL document. When doing this, I have to modify the onclick event handlers for the menu buttons so that they call the JavaScript loadItem() function instead of jumping directly to the URL. Listing 4 contains the contents of the second version of the XUL application. Now, when the application loads a Web page, the meter signals that a page is loading.

Integrating RDF Files

In the previous versions of the XUL application, the URLs of the target Web sites were hard coded into the page. A better approach would be to add the URLs as application data from RDF files. RDF files are used with XML to define application-specific data, and can be altered easily without the need to modify the main XUL application pages.

As shown in Listing 5, many of the pages in an RDF list can be from the same site, and are nested by URL location.
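An RDF file along these lines would serve (a sketch only; the site data, the WT property names, and the WT namespace URI carried over from the earlier sketch are all invented):

<?xml version="1.0"?>
<RDF:RDF xmlns:RDF="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:WT="http://www.webtechniques.com/wt">
   <RDF:Seq about="urn:source:data">
      <RDF:li>
         <RDF:Description WT:title="Example Site" WT:url="http://www.example.com/" />
      </RDF:li>
   </RDF:Seq>
</RDF:RDF>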

To facilitate working with RDF data, XUL has a template engine that integrates external data with the XUL application. The template engine is activated when the layout engine finds opening and closing template elements, and all of the contents contained within the tags then form the template pattern. This pattern is used to determine how to lay out repeating values found in the RDF file.

We’ll add a template to the XUL application and set the pattern to be one button for every listed item in the RDF file. To do this, the RDF data source file must first be attached to the XUL file, and this is done in the box surrounding the template with the following command:

<box orient="vertical" flex="1" datasources="sites.rdf" ref="urn:source:data" >

The datasources attribute specifies the RDF file, and the ref attribute refers to the location in the RDF file to pull in the sequence of data.

The template, given in Listing 6, contains a button widget with the attribute of uri="rdf:*", which indicates where in the RDF file to begin the template matching. An application-specific attribute (with the WT namespace) is added to the button to capture the URL associated with the Web page. The page's title will be shown as the button's display value.
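In sketch form (using the invented WT namespace URI from the earlier sketches), the template looks something like this:

<template>
   <button uri="rdf:*"
      value="rdf:http://www.webtechniques.com/wt#title"
      WT:url="rdf:http://www.webtechniques.com/wt#url"
      onclick="loadItem(this);" />
</template>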

The JavaScript loadItem() function from Listing 3 must be modified to pull the URL for the Web site as an attribute of the item passed in from the event handler. The new function is given in Listing 7.
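The revision amounts to something like this (assuming the template writes the URL into an attribute named url on each generated button):

// pull the URL from the attribute the template filled in, then load it
function loadItem(itm) {
   var url = itm.getAttribute("url");
   document.getElementById("meter").setAttribute("mode", "undetermined");
   document.getElementById("content").setAttribute("src", url);
}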

Figure 3 shows the page that results from using the RDF file and the XUL template. Note that each nested item from the RDF file is actually embedded within the button of its containing parent. So, you see a top button containing two other buttons, and so on.

Trees

Buttons aren't necessarily the best widget to use when designing an application that uses a repeating template. Menus and trees are often better choices. A XUL tree is similar to an HTML table in that the tree is delimited with one tag (tree), the rows with another (treerow), and the tree contents with yet another (treecell). The treehead tag defines a header for the tree, and the treeitem tag is used to allow users to click and select tree items.

Unlike an HTML table, though, XUL trees provide sophisticated processing for nested items. By default nested items aren’t displayed until the user clicks on an item’s containing parent.

In Listing 8, I've replaced the buttons with a XUL tree structure. Individual tree cells hold each hypertext link, and nested data items are displayed as nested tree items. Clicking on the graphic associated with the top-level tree items (known as the "twisty") displays the contained items. Clicking on any contained item opens the associated Web page in the right pane of the application, as displayed in Figure 4.
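A sketch of the structure (the site and page names are invented; treechildren containers, which group the nested items, are part of the same widget set):

<tree flex="1">
   <treehead>
      <treerow>
         <treecell value="Articles" />
      </treerow>
   </treehead>
   <treechildren>
      <treeitem>
         <treerow>
            <treecell value="Example Site" />
         </treerow>
         <treechildren>
            <treeitem>
               <treerow>
                  <treecell value="Example Page" />
               </treerow>
            </treeitem>
         </treechildren>
      </treeitem>
   </treechildren>
</tree>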

Skinning

One of the more controversial aspects of XUL is skinning — the ability to change the appearance of any of the application’s chrome, including the frame, the buttons, the toolbars, menus, and so on. You can change the entire look and feel of Mozilla and Navigator 6 just by downloading new chrome packages and installing their contents. With XUL, the only limit to skin design is the designer’s imagination.

So why is giving Web developers the ability to design different interface skins so controversial? Imagine for a moment that Salvador Dali or Picasso painted frames rather than canvases. Now, try to visualize what painting could hold its own within these frames. Not many could. Application designers argue that well-designed user interfaces could be ruined by a third party's poorly designed skins. The other side of this argument is that being able to create a custom user interface is an attractive concept for developers and designers who don't want to work within the rigid confines of the widgets defined by current operating systems.

To create a new skin for a XUL application, you provide a custom CSS file that defines the styles for the widgets in your application. If you wish to have access to the default appearance and behaviors for the widgets, you must import the global styles into your new CSS file with the following command:

@import url(chrome://global/skin/);

You can then add styles for the widgets used in your application.

Listing 9 shows the CSS file for the XUL application used in this article. In the file, I changed the style of the splitter so that it has a maroon background when inactive, and a teal one when active. I also changed the appearance of the button on the splitter (called the “grippy”). I modified the meter to have a gold background, and did the same for the selection highlight on the tree. I even added new GIF images for the toolbar grippy (the little downward arrow that’s used to hide or show the toolbar).
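In sketch form, the relevant rules look something like this (the selectors are simplified, and the dragging state selector is an assumption; the actual listing also restyles the grippies and the tree selection):

@import url(chrome://global/skin/);

splitter {
   background-color: maroon;   /* inactive splitter */
}

splitter[state="dragging"] {
   background-color: teal;     /* while the splitter is being dragged */
}

progressmeter {
   background-color: gold;
}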

In this example, I modified the styles for XUL widgets directly in the CSS file. However, the preferred approach is to add new widget classes, and use them in your application as follows:

button.wt_button { ... }
<button class="wt_button" ... />

With this approach, you still have access to the default appearance and behavior for each widget.

Once I modified the widgets, I opened my main XUL application file and replaced the reference to the global CSS file with a reference to the new CSS file, using the following line:

<?xml-stylesheet href="wt.css" type="text/css"?>

To access the completed application, create a page that has the following link on it, and load that page in Mozilla or Navigator 6:

<a href="#" onclick="window.open('wt.xul','example','chrome');">test</a>

If you use PR 1 of Navigator 6, note that the toolbar won’t show properly due to changes in the XUL schema that occurred after Navigator 6 PR 1 was released. It should work correctly with PR 2, and works with Mozilla build M16 (and up, hopefully).

One more thing to note is that you may have to add the XUL MIME type to your server's configuration in order to make sure the application XUL page downloads correctly. The MIME type is simply text/xul.

Conclusion

You’ll find the complete application with all necessary files in the wt.zip file. With just a few changes of the XML file associated with the XUL application — adding a meter and using a tree instead of buttons — I changed application functionality and provided better user feedback and a better design. With a few adjustments to the CSS file for the application, I was able to create a new and different look to go with my application’s functionality. XUL promises to be a powerful tool for Web application development and deployment. I look forward to seeing future iterations and refinements of the technology in future browser versions.

Categories
Technology

PerlScript: A hot ingredient for ASP

Originally published at Web Techniques

Microsoft’s Active Server Pages (ASP) technology has grown very popular as a Web server-side technique, combining HTML, embedded scripts, and external components to deliver dynamic content. One feature of ASP is that different languages can be used for its scripting portion, though the most widely used ASP scripting language is VBScript—a subset of Microsoft’s Visual Basic.

However, just because VBScript is the most popular language used by ASP developers doesn't mean it's the only one, or even the best one to use in a particular circumstance. For instance, Perl has long been synonymous with Web server development, beginning with the earliest uses of the language for CGI, and is still one of the most popular languages among Web developers. But there hasn't been much discussion of using Perl with ASP.

If your organization has been working with Perl, and you’re interested in developing for the ASP environment, you don’t have to give up your favorite language (or your existing Perl code) to make the transition to the new technology—you can employ Perl for ASP scripting through the use of PerlScript.

A (Very) Brief Overview of ASP

Microsoft originally introduced ASP technology with the company's own Web server, Internet Information Server (IIS). However, ASP has since been ported to other Web servers through software such as ChiliSoft's platform-independent version of ASP. In addition, ASP was created originally to work in a Windows environment, but again thanks to ChiliSoft and other companies, ASP now runs in non-Windows environments such as UNIX and Linux.

Still, the most popular use of ASP is within the Windows environment, with pages hosted on IIS. This environment—specifically Windows NT/IIS 4.0—is the one I’ll discuss.

ASP pages have an .asp extension, and are a mix of HTML and embedded script. When a client requests the page, the embedded script is accessed and processed. The results generated by the script are embedded into the Web page, which is then returned to the client browser.

The Response object—along with the other built-in ASP objects Server, Session, Request, and Application—provides access to the ASP and application environment for the ASP developer. The Response object provides a way to send information back to the Web-page client; the Request object provides access to information sent from the client, such as form contents; the Application object contains information that persists for the lifetime of the ASP application; the Session object contains information that persists for the lifetime of the particular ASP user session; and the Server object, among other things, lets the ASP developer create external components.

ActivePerl and PerlScript

A company named ActiveState was formed in 1997 to provide Perl tools for developers on all platforms. Among its more popular products is ActivePerl, a port of Perl for the Win32 environment.

ActivePerl is a binary installation containing a set of core Perl modules, a Perl installer (Perl Package Manager), an IIS plug-in for use with Perl CGI applications, and PerlScript. What is PerlScript? PerlScript is Perl, but within an ASP environment—it’s a Perl implementation of the Microsoft Scripting Engine environment, which allows Perl within ASP pages.

ActivePerl can be downloaded for free from ActiveState's Web site (see "Online Resources"). To try it for yourself, access the ActiveState Web site and find the ActivePerl product page.

The installation process requires very little user input. You’ll need to specify where you want to put the files, and be sure to select the option to install PerlScript.

Using PerlScript Within ASP Pages

By default in IIS, all ASP scripting is VBScript, but you can change this using one of three different techniques. First, you can change the default scripting language for the entire ASP application using the IIS Management Console, and accessing the Properties dialog box for the Virtual Directory or Site where the ASP application resides. Click on the Home Directory (or Virtual Directory) tab, and then click on the Configuration button on the page. From the dialog box that opens, select the App Options page. Change the default scripting language from “VBScript” to “PerlScript.”

A second technique is to specify the scripting language in the scripting block itself. With this technique you could actually choose more than one scripting language in the page:

<SCRIPT language="PerlScript" RUNAT="Server"> . . . </SCRIPT>

A third technique is to specify the scripting language directly at the beginning of the Web page. This is the approach we’ll use for examples. Add the following line as the first line of the ASP page:

<%@ LANGUAGE = PerlScript %>

All scripting blocks contained in the page are now handled as PerlScript.
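For example, a minimal page along these lines (the greeting text is illustrative) confirms that PerlScript is wired up:

<%@ LANGUAGE = PerlScript %>
<HTML>
<BODY>
<%
   # write a greeting and the server's local time into the page
   my $now = localtime;
   $Response->Write("Hello from PerlScript. It is now $now.");
%>
</BODY>
</HTML>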

Accessing Built-In ASP Objects From PerlScript

To assist the developer, PerlScript provides access to several objects available only within the ASP environment. As mentioned earlier, these are the Application, Session, Response, Request, and Server objects.

The Application object is created when an ASP application first starts, and lasts for the lifetime of the application. The object has two COM collections, both of which can be accessed from script: the StaticObjects collection, with values set using the <OBJECT> tag within an application file called global.asa; and the Contents collection, which contains values set and retrieved at runtime. A COM collection is a group of similar objects treated as a whole, with associated features that let the developer access individual elements directly, or iterate through the entire collection.

In addition to the two collections, the Application object also has two methods, Lock and UnLock, which are used to protect the object against attempts by more than one application user at a time to change its values.

We’ll take a closer look at using the Lock and UnLock methods by setting a value in the Application’s collection in one ASP page, and then retrieving that same value from another page.

First, Listing 1 contains a page that sets a new value to Application Contents by first locking down the object, setting the value, and then unlocking the object. Notice that you don’t have to create the Application object—it’s created for you, and exists in the main namespace of the PerlScript within the ASP page (the same holds true for all of the ASP objects).
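The heart of that page amounts to something like this sketch (the value stored is illustrative):

<%@ LANGUAGE = PerlScript %>
<%
   # lock the Application object, set the shared value, then unlock
   $Application->Lock();
   $Application->Contents->SetProperty('Item', 'test', 1);
   $Application->UnLock();
   $Response->Write("Application value set.");
%>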

If you’ve worked with VBScript you’ve probably noticed that you have to use a different technique to set the value in the Contents collection with PerlScript. VBScript allows direct setting of collection values and properties using a shorthand technique, similar to the following:

Application.Contents("test") = val

PerlScript, on the other hand, doesn’t support this shorthand technique. Instead, you have to use the SetProperty method to set the Contents item:

$Application->Contents->SetProperty('Item', 'test', $val);

Additionally, you have to use SetProperty to set ASP built-in object properties with PerlScript, or you can use the Perl hash dereference to set (and access) the value:

my $codepage = $Session->{CodePage};

Listing 2 contains another ASP page that accesses the variable set in Listing 1, prints out the value, increments it, and resets it back to the Application object. It then accesses this value and prints it out one more time. Accessing this page from any number of browsers, and from any number of separate client sessions, increments the same Application item because all of the sessions that access the ASP application share the same Application object.
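In sketch form, the second page reads, increments, and re-stores the value:

<%@ LANGUAGE = PerlScript %>
<%
   # read the shared value and show it
   my $val = $Application->Contents('test');
   $Response->Write("Value on entry: $val<br>");

   # increment it under lock, then show it again
   $Application->Lock();
   $Application->Contents->SetProperty('Item', 'test', $val + 1);
   $Application->UnLock();
   $Response->Write("Value now: " . $Application->Contents('test'));
%>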

In addition to the Application object, there is also a Session object, which persists for the lifetime of a specific user session.

This object is created when the ASP application is accessed for the first time by a specific user, and lasts until the session is terminated or times out, or the user disconnects in some way from the application. It, too, has a StaticObjects and a Contents collection, but unlike the Application object, you don't lock or unlock the Session object when setting a value in Contents. But you do access the Contents collection in the exact same manner, both when setting a value:

$Session->Contents->SetProperty('Item', 'test', 0);

as well as when retrieving the value:

my $val = $Session->Contents('test');

Additional Choices

There are also other properties and methods available with Session, including the Timeout property, used to set the session’s timeout value, and the Abandon method, used to abandon the current session. Each session is given a unique identifier, SessionID, and this value can be accessed in a script. But use caution when accessing this value if you hope to identify a unique person—the value is only unique and meaningful for a specific session.

In support of internationalization, there are Session properties that control the character set used with the page, CodePage, and to specify the Locale Identifier, LCID. The Locale Identifier is an international abbreviation for system-defined locales, and controls such things as currency display (for instance, dollars versus pounds).

Listing 3 shows an ASP page that sets the Timeout property for the Session object, and accesses and prints out both the CodePage and the LCID values. Retrieving this ASP page in my own environment and with my own test setup, the value printed out for CodePage is 1252—the character mapping used with American English and many European languages. The value printed out for LCID is 2048—the identifier for the standard U.S. locale.
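The essentials of that page look something like this (the timeout value is illustrative):

<%@ LANGUAGE = PerlScript %>
<%
   # extend the session timeout to 30 minutes
   $Session->{Timeout} = 30;

   # echo the locale-related session properties
   $Response->Write("CodePage: $Session->{CodePage}<br>");
   $Response->Write("LCID: $Session->{LCID}");
%>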

The Application and Session examples used a third ASP built-in object, the Response object. This object is responsible for all communication from the server application back to the client. This includes setting Web cookies on the client (with the Cookies collection), as well as controlling whether page contents are buffered before sending back to the client, or sent as they’re generated, through the use of the Buffer property:

$Response->{Buffer} = 1;

You can use the Buffer property in conjunction with the End, Flush, and Clear methods to control what is returned to the client. By setting Buffer to true (Perl value of 1), no page contents are returned until the ASP page is finished. If an error occurs in the page, calling Clear erases all buffered contents up to the point where the method was called. Calling the End method terminates the script processing at that point and returns the buffered content; calling the Flush method immediately sends (outputs) whatever has been buffered so far.

Listing 4 shows an ASP page with buffering enabled. In the code, the Clear method is called just after the first Response Write method, but before the second. The page that’s returned will then show only the results of the second Write method call.
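A sketch of that buffering behavior:

<%@ LANGUAGE = PerlScript %>
<%
   # buffer the page so output can be discarded before it is sent
   $Response->{Buffer} = 1;

   $Response->Write("You should never see this line.");
   $Response->Clear();   # discard everything buffered so far
   $Response->Write("Only this line reaches the browser.");
%>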

The Buffer property must be set before any HTML is returned to the client, and this restriction also applies to several other Response properties, such as the Charset property (alters the character setting within the page header); the ContentType property (alters the MIME type of the content being returned, such as “text/html”); the AddHeader method, which lets you specify any HTTP header value; and the Status property, which can be used to set the HTTP status to values such as “403 Forbidden” or “404 File not found”. For example:

$Response->{Status} = "403 Forbidden";

If buffering is enabled for a Web page, then properties such as Charset and ContentType, as well as the AddHeader method can be used anywhere within the ASP page.

You can redirect the Web page using the Redirect method call to specify a new URL. As with the other properties and methods just mentioned, Redirect must also occur before any HTML content:

$Response->Redirect("http://www.somesite.com");

In addition to manipulating HTTP headers, the Response object also generates output to the page using the Write method, as demonstrated in the previous examples. You can also return binary output to the client using BinaryWrite. You can override whether an ASP page is cached with proxy servers through the use of the CacheControl property, as well as set cache expiration for the page by setting the Expires or the ExpiresAbsolute properties:

$Response->{Expires} = 20;  # expires in 20 minutes

You can test whether the client is still connected to the server with the IsClientConnected property. Communication doesn't flow just from the server to the client, though. The Request object handles all client-based communication in either direction. This object, as with Response, has several different methods, properties, and collections. The collections interest us the most.

You can read Web cookies using the Request Cookies collection. And it’s possible to set and get information about client certificates using the ClientCertificate collection.

You can also process information that’s been sent from a client page using an HTML form, or appended as a query string to the Web page’s URL:

<a href="http://www.newarchitectmag.com/documents/s=5106/new1013637317/somepage.asp?test=one&test2=two">Test</a>

The two collections that hold information passed to the ASP page from a form or a query string are: the QueryString collection, and the Form collection. Use QueryString when data is passed via the GET method, and use Form when data is passed via the POST method.

Regardless of which technique you use to send the name/value pairs, accessing the information is similar. Listing 5 shows an ASP page that processes an HTML form that has been POSTed:
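A sketch capturing its essentials (the field handling follows the description below):

<%@ LANGUAGE = PerlScript %>
<%
   # pull the POSTed form fields
   my $firstname = $Request->Form('firstname')->item;
   my $lastname  = $Request->Form('lastname')->item;
   my $email     = $Request->Form('email')->item;

   # generate the greeting page, with a mailto link for the address
   $Response->Write("<h3>Hello, $firstname $lastname</h3>");
   $Response->Write("<a href='mailto:$email'>$email</a>");
%>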

The example ASP page pulls out the values of three text fields in the form: lastname, firstname, and email. It then uses these to generate a greeting page, including adding a hypertext link for the email address. If the form had been sent using the GET method (where the form element name/value pairs get concatenated onto the form processing page’s URL), then the page contents would be accessed from QueryString:

my $firstname = $Request->QueryString('firstname')->item;

In addition to the Form and QueryString collections, the Request object also has a ServerVariables collection, containing information about the server and client environment. The ServerVariables collection is comparable to accessing %ENV in CGI.

You can access individual elements in the ServerVariables collection by specifying the exact variable name when accessing the collection:

my $val = $Request->ServerVariables('PATH_TRANSLATED')->item;

Or you can iterate through the entire collection. To do this, you can use the Win32::OLE::Enum Perl module to help you with your efforts. The Enum class is one of the many modules installed with ActivePerl, and provides an enumerator created specifically to iterate through COM collections such as ServerVariables.

Listing 6 shows an ASP page that uses the Enum class to pull the elements of the ServerVariables collection into a Perl list. You can then use the Perl foreach statement to iterate through each ServerVariables element, printing out the element name and its associated value.
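In sketch form, the iteration looks something like this:

<%@ LANGUAGE = PerlScript %>
<%
   use Win32::OLE::Enum;

   # pull the elements of the ServerVariables collection into a Perl list
   my @vars = Win32::OLE::Enum->All($Request->ServerVariables);

   # print each variable name and its associated value
   foreach my $name (@vars) {
      my $value = $Request->ServerVariables($name)->item;
      $Response->Write("$name = $value<br>");
   }
%>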

If an HTML form contains a File input element—used to upload a file with the form—you can use the Request BinaryRead method to process the form’s contents. The TotalBytes property provides information about the total number of bytes being uploaded.

Break Out with COM

All of the examples up to this point have used objects that are built in to the ASP environment. You can also create COM-based components within your ASP pages using the built-in Server object. The Server object has one property, ScriptTimeout, which can be used to determine how long a script will process—a handy property if you want to make sure scripting processes don’t take more than a certain length of time.

The Server object also has a couple of methods that can be used to provide encoding, such as HTML encoding, where all HTML-relevant characters (like the angle bracket) get converted to their display equivalents. MapPath maps server paths relative to their machine locations. URLEncode maps URL-specific characters into values that are interpreted literally rather than procedurally:

my $strng = $Server->URLEncode("Hello World!");

The result of this method call is a string that looks like:

Hello+World%21

Although these methods and the one property are handy, Server is known generally as the object used to instantiate external ASP components, through the use of the CreateObject method. This method takes as its parameter a valid PROGID for the ASP component. A PROGID is a way of identifying a specific COM object, using a combination of object.component (sometimes with an associated version number):

simpleobj.smplcmpnt

As an example, I created a new Visual Basic ActiveX DLL, named the project simpleobj, and the associated class smplcmpnt. The component has one method, simpleTest, which takes two parameters and creates a return value from them, based on the data type of the second parameter. This component method, shown in Example 1, has a first parameter defined as a Visual Basic Long value (equivalent to a Perl integer), and a second parameter of type Variant, which means the parameter could be of any valid COM data type—Visual Basic functions are used to determine the data type of the value.

A new page uses the Server CreateObject method to create an instance of this ASP component, and tests are made of the component method. As shown in Listing 7, the first test passes two integers to the external VB component method. The component tests the second parameter as a Long value, adds the two parameters, and returns the sum.

The next test passes a string as the second parameter. The component tests this value, finds it is not a number, and concatenates the value onto the returned string.

The script for the final test creates a date variable using the Win32::OLE::Variant Perl module, included with ActivePerl. The standard localtime Perl method is used to create the actual date value, but if this value isn't "wrapped" using the Variant module, the Visual Basic ASP component will receive the variable as a string parameter rather than as an actual date—PerlScript dates are treated as strings by ASP components. When the Visual Basic component receives the date as the second parameter, the component finds that it is not a number, and concatenates the value onto the string returned from the function. When displayed, the returned string looks similar to the following:

3/15/1905 100

I could have passed the date value directly instead of using Variant, but as I mentioned, the COM-based VB component sees the Perl date as a string rather than as a true date type. The Variant Perl module provides techniques to ensure that the data types we create in PerlScript are processed in specific ways in ASP components.
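Putting the three tests together, the calling page looks something like this sketch (the component's behavior and PROGID are as described above; the argument values are illustrative):

<%@ LANGUAGE = PerlScript %>
<%
   use Win32::OLE::Variant;

   # instantiate the external component by its PROGID
   my $obj = $Server->CreateObject('simpleobj.smplcmpnt');

   # a number as the second parameter: the component adds the values
   $Response->Write($obj->simpleTest(100, 5) . "<br>");

   # a string as the second parameter: the component concatenates it
   $Response->Write($obj->simpleTest(100, "abc") . "<br>");

   # wrap the date so the component receives a true COM date type
   my $date = Variant(VT_DATE, scalar localtime);
   $Response->Write($obj->simpleTest(100, $date));
%>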

Summary

Perl is a mature language that has been used for many years for Web development. As such, there is both expertise with, and a preference for, using this language for future development with the newer Web development techniques such as ASP.

ActivePerl and PerlScript are the key tools for using Perl within the ASP environment. Perl can be used for ASP scripting through PerlScript, but the Perl developer also has full access to the objects necessary to work within the ASP environment: namely the ASP built-in objects such as the Response and Request objects.

Additional modules to assist Perl developers—such as Win32::OLE::Enum and Win32::OLE::Variant—are included with the ActivePerl installation, and help make PerlScript as fully functional within the ASP scripting environment as VBScript.

Best of all, with ActivePerl and PerlScript you can develop within an ASP environment and still have access to all that made Perl popular in the first place: pattern matching and regular expressions, the Perl built-in functions, and a vast library of free or low-cost Perl modules to use in your code. Interest in ASP is growing, and with PerlScript you can work with this newer Web technology and still program in your favorite programming language.

Categories
Specs

The Tyranny of Standards

Originally published at O’Reilly

Before proceeding into the core of this article, I want to say one thing to you: challenge your assumptions.

Challenge your assumption that all Internet services are provided by a Web server and consumed by a browser. Challenge your assumption that chaos within a development environment is a bad thing. And challenge your assumption that standards must take precedence over innovation.

Several years ago, when the concepts of Web server and browser were first implemented, the Internet was introduced to a new state of chaos and, as the explosive growth of technologies that are “Web-enabled” demonstrates, innovation was not only the rule, it was the norm.

Over time, people decided that standards were a necessary adjunct to the growth of the Web, something with which I completely agree. Enter the W3C, the World Wide Web Consortium.

As the W3C organization will attest, they are not a standards body. As such, they don’t issue “standards” per se. Instead, the W3C issues recommended specifications. The only enforcement of these specifications has been through voluntary compliance on the part of the technology providers, and demand for said compliance on the part of technology consumers.

Thanks to the efforts of the W3C, we have specifications for HTML, XML, CSS, HTTP, and a host of other Web-enabling technologies. Thanks to those following the specifications, we have Web pages that can be viewed by different browsers and served by different servers.

Somewhere along the way, however, standards became less of a means for providing stability and more a means of containment. In some cases, standards have become a weapon used to bludgeon organizations for practicing the very thing that started the growth of Web applications in the first place: innovation.

The Importance of Innovation

Innovation is the act of improving what exists and creating something new. Though innovation does not always lead to something better (Remember push technology?), it is the thing that keeps us moving forward, always searching for a better way of doing things.

Innovation can work comfortably with standards; new XML-based specifications, such as MathML, are a case in point. There are also times when innovation actually bucks the standards.

For instance, Microsoft has long been criticized for adding its own "innovations" to a specification, particularly with its popular Web browser, Internet Explorer. One innovation was the support of a property called innerHTML, which is used to access or easily replace the contents of a specific HTML element. Though innerHTML is not part of any of the W3C specifications, its use is so popular that Mozilla, the open source effort behind the new Netscape 6.0 browser, has adopted the use of innerHTML within its own layout engine.
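For example, a single line of script replaces an element's contents (the element id is invented):

// swap in new markup for the element with id "news"
document.getElementById("news").innerHTML = "<b>Updated headline</b>";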

Should Microsoft and Mozilla be bashed for lack of standards compliance because innerHTML is not a property supported by the W3C? Or should both organizations be commended for providing a useful tool that has become very popular with Web developers?

This leads to an additional question: How does one measure standards compliance? For example, if Internet Explorer and Mozilla both supported CSS attributes such as font size and color, and they also supported new attributes and properties like innerHTML, would both browsers be compliant? Or are they noncompliant because they’ve added new features to the underlying CSS/DOM/XML/HTML specifications? How exactly do we define “standards compliance,” especially when there are groups like the WSP (Web Standards Project) enforcing this compliance?

The WSP

I’ve long been a fan of the W3C, and I think that the Web and the Internet would be a much more chaotic environment without this organization. However, my fondness for the W3C does not necessarily extend itself to the WSP.

If you haven’t heard of the WSP, it is an example of what happens when standards enforcement is left to the masses. This organization’s intentions are pure: It’s a nonprofit organization of Web developers, designers, and artists who encourage browsers to support standards equally and completely. However, somewhere along the way, the WSP took on the aspect of a holy war, a Web jihad.

The WSP's behavior is tantamount to lynch mob justice. After all, there are no gray areas of justice: only black and white, right or wrong. The same can be said of support for the enforcement of standards: A company supports standards 100 percent, or the company is noncompliant and, therefore, evil.

Note that I agree with the WSP in spirit: Our lives would be much easier if Microsoft and Mozilla and Netscape would support the W3C specifications fully and equally. I'm more than aware of the cost of having to write different Web pages for different browsers because each has implemented technologies in a different way. I've been doing this for years.

However, I’ve also benefited when an organization has expressed an innovation that exists outside of a specification, such as the aforementioned innerHTML, or Mozilla’s support for XUL (Extensible User Interface Language). If having all browsers be 100 percent standards compliant means not having access to these innovations, then I’ll take noncompliance even if it does mean extra effort to compensate for differences.

I encourage Microsoft and Mozilla and Netscape to support the W3C specifications and other standards, but I also encourage these same organizations to continue their innovative efforts, even if the result is a bit of chaos in a world that would otherwise run smoothly, and without a wrinkle.

And who’s to say that a little chaos is such a bad thing?

The Chaos of Innovation or the Sameness of Compliance

In August 2000, CNET.com featured an article titled Why Open Standards are a Myth. The author of the article, Paul Festa, wrote that open standards only work when a company has a lead in a technology and then uses the standard as a means of ensuring that its competition doesn’t exceed its own ability. The support for standards, then, becomes a means of disabling a competitor’s innovation.

In this context, the sameness of compliance to standards becomes less a tool to help developers and businesses and more a weapon against competition. The sameness of compliance also becomes a measure of ensuring that all participants reach one level, are kept on this level, and that there are no bumps in the road of compatibility.

Is this smooth path of total compliance the Internet of the past? And is this the Internet we want in the future?

In the End

Standards are essential to doing business between companies. They are necessary to ensure that, for example, CD players can play all CDs, and elevators don’t crash to the first floor from the tenth. Our lives are protected by standards and our laws are based on them.

However, standards were never meant to be a weapon against innovation, or a tool for beating a company into submission, particularly within the free-spirited environment of the Internet.

Should we encourage the adoption of standards? A resounding yes! But not at the expense of what makes working on the Internet so challenging and exciting: The promise of something new coming through the router. Something different. Something interesting. Something innovative.

Categories
JavaScript

Implement a DHTML Mouseover effect with DOM

Originally published in WebBuilder magazine. Found courtesy Wayback Machine.

The DOM, or Document Object Model, is a specification recommended by the World Wide Web Consortium (W3C) primarily to help eliminate cross-browser dynamic HTML differences. It is implemented with Microsoft’s Internet Explorer (IE) 5.x, and will be implemented with Netscape’s Navigator 5.x when it is released. You probably haven’t seen that many demonstrations of the DOM and its impact on DHTML implementations, and the ones you have seen probably have been fairly complicated. At times you might even think it would be less complicated and would require a lot less code to implement the DHTML using the technologies supported in the 4.x browsers and just deal with the cross-browser problems.

However, you will find that in the long run, the DOM, in addition to XML (Extensible Markup Language), HTML 4.0, and CSS (Cascading Style Sheets), will simplify your development efforts once you have grown accustomed to the technology. In fact, using the DOM can actually make your coding a whole lot easier and cut down on the number of lines of code you need, depending on what you hope to accomplish.

This article will show you how to create a text-based menu mouseover effect, complete with menu tips that will work with IE 5.x and with the August, 1999 M9 build of Gecko available at Mozilla.org (as tested in a Windows environment). Before learning how to use the DOM specification to create a mouseover effect, you might find it useful to get a little history on mouseovers as they are implemented without using the DOM. This next section will highlight why the DOM is an improvement over the existing implementations of DHTML.

Pre-DOM Mouseover Effects
One of the first implementations of "dynamic" HTML occurred when Netscape exposed images for access through an images array on the HTML document object, and then allowed you to modify the source of an image through the src attribute. For instance, this line of code uses JavaScript to replace the existing source of a displayed image with a new image source:


document.images[0].src = "somenew.gif";

A popular use of this dynamic HTML technique was to implement the mouseover effect. The mouseover effect gives a visual cue to the user that the mouse’s cursor is over a certain element in a Web page. The cue remains visible until the cursor moves away from the element. The mouseover effect has long been considered one of the classic implementations of dynamic Web page effects.

Most commonly, you use mouseover effects to highlight menu items. A problem with using the image changing approach for this purpose is that you have to use graphics for the menu, adding to the overall page download times, and the effect won’t work with anything but images. If you wanted to provide a help message for the menu item, you would need to include this message as a part of the image or use some other technique such as Java applets.

These limitations were resolved when CSS positioning and styles, and exposure of the browser document object model, were released under the term of “Dynamic HTML” (DHTML) in Microsoft’s Internet Explorer 4.x and Netscape Navigator 4.x. With the introduction of DHTML, changing the image source wasn’t the only approach you could take to generate a mouseover effect. You could use dynamic positioning, including hiding and showing elements to display the associated menu item text.

This example shows a menu item with a hidden menu text tip. By capturing the onMouseOver and onMouseOut event handlers, you change the style of the menu text to show the tip when the mouse is over the menu item; otherwise you return the text to its original appearance to hide the tip:


<DIV id="one" style="width: 150; z-index: 1" 
   onmouseover="this.style.color='red';onetext.style.visibility='inherit'"
   onmouseout="this.style.color='black';onetext.style.visibility='hidden'">
Menu Item One
</DIV>
<DIV id="onetext" style="visibility: hidden; margin: 20px">
This is the associated text for Menu Item One
</DIV>

However, this approach did not work as intended because the implementation of DHTML included with the 4.x browsers only supported element hiding when the element was positioned. Also, the style setting would not work with Navigator 4.x: Navigator 4.x does not allow you to modify an element's CSS1 style settings via script after the element has been rendered (displayed) to the page.

To get around the cross-browser limitations and differences, you could create two different absolutely positioned versions of the elements, and hide one of them. The hidden element would then have the “highlighted” CSS style setting and would be shown when the mouse was over the element and hidden otherwise:


<DIV id="one1" style="z-index: 1"
   onmouseover="switch_on('one')">
Menu Item One
</DIV>
<DIV id="one2" style="color: red;
   font-weight: 700; z-index: 2; visibility:hidden"
   onmouseover="switch_off('one')">
Menu Item One <br>
This is the associated help text to display with menu item one
</DIV>

This approach again worked with IE, but not with Navigator, because Navigator and IE supported different event models and event handlers. To make sure event handling worked with both browsers, and to be consistent, you would use a link to surround the menu item and the mouse events would be captured in the link:


<a href="" onclick="return false" onmouseover="switch_on('one')">

With this workaround, the mouse events are being captured correctly, but there’s still one more problem remaining, which I call the “phantom mouseover effect.” Normally, a user moves the mouse cursor over an element, triggering the process to hide the regular menu item and show the highlighted version. When the user moves the mouse cursor away, the effect is reversed. However, if the person moves the mouse too quickly, the original element gets both the mouseover and mouseout events before the highlighted menu item is even shown. When this happens, the highlighted element stays visible even when the mouse is moved out of the area because it didn’t receive the mouseout event, leaving what is virtually a phantom effect. The user must move the mouse’s cursor over the item again, more slowly, to trigger the regular menu item to appear.

To avoid this phantom effect, you can employ another technique that uses a third, invisible element. In this case, you use a small transparent GIF image and size it to fit over the menu item. The invisible element traps both the mouseover and mouseout events, and invokes the functions to hide the regular and highlighted menu items accordingly. Here is an example of this type of mouseover handling that works with Navigator 4.x and up and IE 4.x and up. First, you create the menu item, its highlighting, and the event capturing blocks:


<!-- menu item one -->
<DIV id="one" style="left: 140; top: 140; z-index: 2">
<a href="" onclick="return false" 
   onmouseover="switch_on('one')"
   onmouseout="switch_off('one')"><img src="blank.gif" 
width=150 height=30 border=0></a>
</DIV>

<DIV id="oneb" style="left: 150; top: 150;
   z-index: 1">
Menu Item One
</DIV>

<DIV id="onec" style="left: 150; top: 150; 
   z-index: 1; visibility:hidden"
   class="highlight">
Menu Item One -
This is the associated help text to display with menu item one
</DIV>

Next, you create the script that processes the menu highlighting:


// set items visibility using 
// specific browser technology
function SetObjVisibility (obj, visi) {
   if (navigator.appName == "Microsoft Internet Explorer")
        document.all(obj).style.visibility=visi;
   else
        document.layers[obj].visibility=visi;
}

// switch highlighting on
function switch_on(theitem) {
   SetObjVisibility(theitem+"b", "hidden");
   SetObjVisibility(theitem+"c","inherit");
}

// switch highlighting off
function switch_off(theitem) {
   SetObjVisibility(theitem+"c", "hidden");
   SetObjVisibility(theitem+"b","inherit");
}

To overcome cross-browser document object model differences, the SetObjVisibility function branches on the browser name and uses each browser's own technique to set the visibility of the element being hidden or displayed. This page will work with Navigator 4.x and up and IE 4.x and up. However, the workarounds to the cross-browser problems make the code much larger and more complex than you'd want for such a simple effect. Instead, you should consider using the DOM to create a simple mouseover menu effect.

Enter the DOM
DOM Level 1 is the most recent recommended specification for DOM from the W3C. The DOM supports a browser-neutral specification that, when implemented within a browser, lets you dynamically access the elements within the Web page, using an approach that will work consistently across browsers and across platforms.

Without getting into too much detail on the DOM, the specification groups the elements of a Web page into a hierarchy, and you can obtain a reference to an element by first accessing its parent and then accessing the element from the parent’s element list. For instance, an HTML table would contain rows, the rows would contain cells, and the cells would contain the data that is displayed. To access a specific cell’s data, you would first need to access the table, then the row containing the cell, the cell, and then access the cell’s contents.

Another key aspect to the DOM is that instead of defining every single HTML element within the specification, it defines a fairly generic set of elements and then defines how to work with the elements directly, and as they relate to each other. Additionally, the W3C has provided an ECMAScript binding for the core elements of the DOM, and the HTML-specific API based on the DOM.

The example in this article uses the HTML version of the document object, or HTMLDocument. This version provides a method, “getElementById”, which allows you to access an element within the document by its “ID” attribute. Additionally, Navigator 5.x and IE 5.x both support HTML 4.0 and CSS2 (for the most part), which means both support the onmouseover and onmouseout event handlers within tags such as DIV tags. Also, both browsers expose the style object so you can dynamically modify the CSS style attribute of an element. Here, you define the two menu items and their associated menu tips:


<!-- menu item one -->
<DIV id="one" style="height: 30; width: 140"
   onmouseover="on('one')" onmouseout="off('one')">
Menu Item One
</DIV>

<DIV id="onetext" 
   style="display:none; width: 140; margin: 10px; 
   font-size: 10pt; background-color: white; color: red">
This is the text associated with the first menu item
</DIV>

<!-- menu item two -->

<DIV id="two" style="height: 30; width: 140"
   onmouseover="on('two')" onmouseout="off('two')">
Menu Item Two
</DIV>
<DIV id="twotext"
   style="display:none; width: 140; margin: 10px; 
   font-size: 10pt; background-color: white; color: red">
This is the text associated with the second menu item
</DIV>

Because you define the menu items as DIV blocks that are not absolutely positioned within the Web page, they will appear in the upper left corner of the document. Also, notice that the menu tips aren't hidden with the visibility property; you remove them from the document flow entirely by setting the display CSS attribute to "none".

Next, you create the script that processes the menu highlighting. This script does a couple of things. First, it uses the type attribute for the SCRIPT element to define the language used for the script block.


<SCRIPT type="text/JavaScript">

Then the script creates functions to highlight the menu item (“on”) and turn off highlighting (“off”). The functions themselves access the menu item and tip by using the DOM method getElementById. This method returns a reference to the element you want to modify:


// get specific div item, identified by node index
var itm = document.getElementById(val);
var txt = document.getElementById(val+"text");

The functions turn the display for the menu tip on or off, depending on whether the mouse is over the menu item or has moved away from it. Because you use display instead of visibility, the other elements of the page are moved to accommodate the newly displayed item. A hidden visibility setting hides an element but leaves the "box" that the element occupies within the document flow; display set to "none" removes the element completely from the page flow:


// turn on menu tip display
txt.style.display="block";

…

// turn off menu tip display
txt.style.display="none"

In addition to altering the display of the menu tip, you can also change the CSS style on the menu item. For example, you can increase the font weight and modify the font and background color of the element. Notice that no cross-browser code is present in this example. With the 5.x releases of Navigator (as demonstrated in the M9 release of Gecko that you can obtain at Mozilla.org) and IE, both browsers now support exposing CSS attributes through the style object and dynamically modifying these attributes:


// set style properties
itm.style.backgroundColor="green";
itm.style.color="yellow"
itm.style.fontWeight = 700;

By using the DOM (and browsers that support HTML 4.0 and CSS), you can halve the amount of code required to create the mouseover effect, as you can see from the complete example.
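Pulling the fragments together, the complete script comes down to something like this (a sketch consistent with the fragments above; the colors restored on mouseout are assumed defaults):

<SCRIPT type="text/JavaScript">

// highlight a menu item and show its tip
function on(val) {
   // get the menu item and its associated tip
   var itm = document.getElementById(val);
   var txt = document.getElementById(val + "text");

   // set highlight style properties
   itm.style.backgroundColor = "green";
   itm.style.color = "yellow";
   itm.style.fontWeight = 700;

   // turn on menu tip display
   txt.style.display = "block";
}

// restore a menu item and hide its tip
function off(val) {
   var itm = document.getElementById(val);
   var txt = document.getElementById(val + "text");

   // restore assumed default styles
   itm.style.backgroundColor = "white";
   itm.style.color = "black";
   itm.style.fontWeight = "normal";

   // turn off menu tip display
   txt.style.display = "none";
}

</SCRIPT>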