Categories
Technology

Australian Censorship bill could impact on P2P

Originally published at O’Reilly

Australia’s been in the news before about Net censorship legislation, but the South Australian Parliament may have gone a little extreme even for this Net-conservative country.

A bill introduced in November would make it illegal for content providers to post material that is considered “objectionable viewing material” for children. What’s objectionable viewing material? Anything that the police — the police, mind you — would consider as falling within the R, NC, or X ratings categories of the film industry. Ostensibly this would cover material such as child pornography or content advocating breaking the law. However, the bill is general enough that it could also cover material on topics such as abortion, suicide, drug use, sexual behavior and other sensitive topics that could be termed “adult topics” and therefore R-rated.

Even more alarmingly, under this bill posting this material is illegal even if access to the material is restricted or password-protected. Compounding the problem, content providers would have no way of knowing whether their material would fall under one of the prohibited classifications before posting it; if the material is judged by the police to be within the parameters of this bill, you’d be charged. No warning and no second chance. And the fines aren’t cheap: as much as 10,000 (Australian) dollars per offense.

According to an alert issued by Electronic Frontiers Australia, this bill would actually make material that’s legal offline, illegal once posted online.

The impact of this bill on Web-based businesses is obvious — the level of censorship implied would give even the most conservative businesses pause when it comes to posting content on their Australian-based Web sites. What may not be so noticeable, though, is the impact of this bill on peer-to-peer applications and services. You see, the wording of the bill doesn’t focus on Web-based content; it concerns content distributed via the Internet.

Consider the following scenario: You’re a subscriber to a file-sharing P2P service such as Napster. You make a request for material that could be considered “objectionable” because of the language used — for instance one of the more explicit songs from Alanis Morissette’s album “Jagged Little Pill,” or practically anything from Guns N’ Roses or Eminem. Once you’ve downloaded an “objectionable” song, it’s now on your machine for your personal use. However, in this process, you’ve also “posted” this content for access by other clients through the Internet: P2P is based on the fact that any node within the network can be both a client and server. According to this bill, you would be in violation of the law.

If you’re a subscriber to a decentralized service such as Freenet or Gnutella, the potential problems with this type of bill are even more extreme. With these types of P2P networks, if a file request is made from node A to node B, and then from node B to node C, that file is returned to node B as the intermediary first, and finally to node A. Now, not only is the peer located at C in violation of the law, so are A, who originally requested the file, and B, who did nothing more than agree to the P2P service’s conditions, which state that files may be stored on the client’s machine as a method of disseminating popular files throughout the network.

By its very nature, Freenet hides the identity of nodes supplying or requesting files, making it difficult to ascertain who originated the material or the request. Because of this, it also becomes difficult to determine who is legally responsible for “posting” the file if it is deemed to fall within the parameters of this censorship bill. So, what could happen is that the intermediary node containing the file is the one charged with violating the law, rather than the originator, regardless of the technical and legal semantics that form the basis of anonymity within a Freenet network.

At the very least, applying this censorship law to the Freenet or Gnutella network would become a legal nightmare for the South Australian court system. All it would take to demonstrate the unfeasibility of the law is to introduce one highly popular but objectionable file to Freenet, potentially turning all or most South Australian Freenet users into criminals. This issue goes beyond considerations of copyright law.

According to the UK-based Register, South Australia’s politicians must have gone “barking mad” — in other words, the bill’s sponsors may want to reconsider the bill on its merits.

Read the pertinent sections of the censorship bill at Electronic Frontiers, and then join the discussions at Slashdot and South Australia’s Talking Point.

Categories
Technology

Speaking at the P2P Conference

Recovered from the Wayback Machine. In some ways, what we described was the origin of a pseudo-blockchain functionality. But Michael went back to Sydney, and I went on to other things.

I’ll be speaking at the first P2P conference, presented by O’Reilly and being held in San Francisco in February.

I’m co-speaking with Michael Hitz, from Skyfish. The focus of the presentation will be:

Managing power grids (such as Florida Light and Power’s) and mass transit systems (such as the new light rail system to the airport in Hong Kong) each require sophisticated control systems. The sale of these large scale complex systems often requires an international marketing and engineering effort that demands the input of many different people, many of whom live in different countries and speak different languages. Such a sales process is fraught with an engineering challenge of its own that demands accurate price estimates, bills of material, forecast manufacturing orders and communication across sales, engineering and manufacturing teams in multiple time-zones.


This talk focuses on a development effort currently underway to create an automated configuration tool for such systems; one which will allow a number of distributed participants to collaborate on the description of a complex system of distributed parts. The output includes various stages of quotation, requests for approval, and an automatically generated bill of materials (BOM).


To accommodate the geographical separation of the people using the tool, its creators will be using P2P technologies to locate and access distributed services based on the needs of both the configuration and the user, at each stage of the configuration cycle. Streaming data will be used to dynamically generate the user interface, based on work in progress by one or more active participants and on each engineer’s locale.

Peer-to-Peer is more than just another buzzword or a technology such as Napster. Ray Ozzie, the creator of Lotus Notes, has started a company and a technical framework called Groove that encompasses P2P technologies and concepts, and no one can say that he doesn’t understand technology, the current market, or how to merge the two.

P2P technologies such as distributed computing, collaboration, and shared services and resources will change the way we access functionality and use the Internet in the next few years. To be blunt, the last time I was this excited about technology was when I first used this new thing called the “web”, years ago.

Here’s your chance to get in at the start of a new way of doing things — attend the P2P conference in February, and attend our presentation Friday, the 16th of February in the afternoon.

Categories
Specs Technology

Browser, Browser Not

Originally published at O’Reilly

Recently, O’Reilly published a set of articles (Netscape Navigator 6.0 to Fail Standards Compliance, An Update, and Netscape 6.0 Released), written by the popular author David Flanagan, about the release of Netscape 6.0, Netscape’s newest entry in the browser marketplace.

David presented several valid concerns about bugs still present in the release of Netscape 6.0. And it is true: Netscape 6.0 did ship with several unfixed bugs, many of which will have an impact on support for W3C specifications.

Our reaction to the release, however, was somewhat different. Along with other application developers, we’ve been waiting for the public release of an application that uses Mozilla’s XPToolkit, a set of software components from which Netscape 6.0 and the upcoming Mozilla 1.0 were built. Now that Netscape 6.0, which uses this framework, has been publicly released, we’re delighted: testing of XPToolkit may begin in earnest.

While many are focused on the release of Netscape 6.0, some of us aren’t. We’re more interested in the application environment created by the Mozilla team to support the implementation of browsers in general. To us, this framework is more important than the release of a new browser will ever be.

The reason for this is the changing face of the Internet, itself.

The Changing Face of Internet Applications

Current Internet applications rely on a centrally located Web server to distribute HTML over HTTP to clients. Each client, or Web browser, renders the source and displays a human-readable page.

This architecture has become so popular that you can’t pick up a magazine or a newspaper without hearing about Web servers or the new business models based on them. Although this architecture is based around universally located resources, most application-level resources are centralized and many other resources are hard to find. Some Web sites help you find other Web sites or “resources.” Others go so far as to offer completely centralized applications, as Application Services Providers (ASPs).

New technologies will soon force us to rethink the way we use the Internet. Distributed systems, mobile agents, and peer-to-peer (P2P) applications may completely undermine the need for browser-based Internet access.

P2P applications are already stepping around the browser. The next step will be around the Web server.

Consider this: a P2P application that locates and downloads a new function. The simplest example here may be provided by a P2P execution framework that uses XML-based remote procedure calls between peers to marshal XML-encoded functions. Instead of hitting Web pages, each peer locates and accesses both data and functions among a network of peers. No Web servers.
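The article doesn’t show an actual payload, but an XML-encoded function request passed between peers might look something like the following XML-RPC call; the method name and parameter here are purely illustrative assumptions.

&lt;?xml version="1.0"?&gt;
&lt;methodCall&gt;
   &lt;!-- hypothetical method: ask a peer for an XML-encoded function by name --&gt;
   &lt;methodName&gt;peer.getFunction&lt;/methodName&gt;
   &lt;params&gt;
      &lt;param&gt;&lt;value&gt;&lt;string&gt;spellCheck&lt;/string&gt;&lt;/value&gt;&lt;/param&gt;
   &lt;/params&gt;
&lt;/methodCall&gt;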

This scenario is not going to be best served by the traditional browser. Why?

The Limitations of Browsers

The things that made the Web browser a success in the beginning are the things that make it ineffective for new application models.

The browser was built to render files stored on Internet sites so we didn’t have to muck about with FTP. As soon as content became more visible, people started publishing yet more content, so browsers rendered HTML, then XML, formatted with CSS or XSLT. However, the browser itself has a very limited interface, even with new advances in W3C specifications. Sophisticated browser pages mean using either complicated object models–leading to cross-platform and cross-browser idiosyncrasies that are usually the result of standards initiatives–or using page-embedded applications, such as Java applets and plug-ins.

Even when the browser follows standard specifications, working within a browser page to create a sophisticated interface isn’t a simple or uncomplicated task.

In addition to the browser becoming increasingly complex as content becomes more complex, using a browser implies that applications are served from one location, and in one manner. To do something such as make a remote procedure call, you would need to use a digitally signed Java applet or some other browser-specific and limited technique. This is something that won’t bother newer P2P applications.

Finally, browsers were designed to be safe, and operate in a protective sandbox. Web-based applications served via a browser have difficulty getting at the user’s machine. Though safe, this restriction also prevents behaviors that would have the application modify its user interface. And this dynamism is going to be necessary in an environment where new services require new application interfaces that can be downloaded as data.

An Internet Application Framework?

Mozilla made a tough decision a few years ago–to scrap the Netscape 4.x architecture in favor of one built from the ground up. In the process, this open source team created an application environment based on reusable and interchangeable components.

With this application environment in place, the team then proceeded to build a sophisticated browser. They threw in Internet Chat, a Web page composer, and other complex things, all of which were released recently as Netscape 6.0. Often forgotten is that a powerful application environment came with it. This environment is now usable by developers of other Internet applications.

What types of applications? Well, ActiveState, the company that provides popular implementations of Perl and Python for various operating systems, used Mozilla to create its Komodo product, a visual IDE for working with Python and Perl code. The user interface provides, among other things, colored syntax, syntax checking, and source-level debugging.

So, we have a browser and an application that can be used to create and test Perl and Python applications, all built from the same application architecture.

This is exciting stuff! Much has been written about reusable code and component-based design, and now we have an open source application environment we can all use to build our own applications.

Even more exciting is the extensible user-interface language from Mozilla called XUL (pronounced “zool”). It’s based on XML, which means you can use XML to create a user interface. Combine this with the ability to make remote procedure calls, and you have a perfect place from which to commence building a bunch of P2P applications, based on the scenario mentioned above.
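As a rough illustration (not taken from Mozilla’s own chrome), a fragment of a XUL interface might look like the following; the menu labels and the handler name are hypothetical.

&lt;menubar&gt;
   &lt;menu value="Services"&gt;
      &lt;menupopup&gt;
         &lt;!-- hypothetical entry; an item like this could be added when a new service arrives --&gt;
         &lt;menuitem value="Spell Checking" onclick="loadService('spellcheck');" /&gt;
      &lt;/menupopup&gt;
   &lt;/menu&gt;
&lt;/menubar&gt;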

Now, instead of opening a browser, you can open an application built on the same framework as your browser, but with a sophisticated interface of dropdown menus and tabbed pages–all created using XML. You can access remote procedure calls at the touch of a button and when you’re ready to access a new service, click another button, and in a couple of minutes restart your application. New entries will be added to new or existing menus providing access to the new service. All this is accomplished without Java bytecode, a new plug-in, or a DLL.

You’ve just downloaded XML.

When you explore the possibilities of the XPToolkit from Mozilla, maybe you’ll agree that Netscape 6.0 is more than just a standards-based, better-than-Navigator-4.x browser. It’s the start of a new way of doing things on the Internet.

Categories
Technology

Digital Play Dough: Designing Applications with XUL

Originally published in Web Techniques

The XML-Based User Interface Language (XUL) made its first appearance with the release of Mozilla, the Open Source browser used as the foundation for Netscape 6. Pronounced “zool,” the language gives developers and designers an easy way to describe the design and layout of application windows in Web browsers. By modifying a few files, you can change the entire look of your Web browser or of the windows that pop open while a visitor browses your site. Prior to XUL, this was only possible by modifying and re-compiling the browser’s underlying source code. And in that case, you would have to distribute the modified browser to all your site’s visitors — an unlikely event. Fortunately, all you need to change the look and feel of a Web browser today is an understanding of the XML and CSS specifications and a little ingenuity.

Architecture

XUL applications consist of XML files created with .xul extensions. The files define the content of the application. Additional application data is located in Resource Description Framework (RDF) files. CSS files provide formatting, style, and some behavior for the application. JavaScript files provide scripting support. Multimedia files, such as PNG images and other audio/visual files, might also be needed for additional user interface information. All of the file types are specifications recommended by the W3C, and collectively are referred to as the XUL application’s “chrome” — the contents, behavior, and appearance of the application’s user interface.

The Mozilla browser is itself designed as an XUL application. To manage the chrome for your browser, both Mozilla and Navigator have subdirectories labeled chrome, located off each browser’s main directory. Within the chrome directory, separate XUL applications are packaged into separate subdirectories. Within each application directory, subdirectories further divide the application into content (containing the XUL, JS, and RDF files), skin (CSS files), and locale (DTD files).

To deploy your own XUL application on the Web, you can either place all of the files within the same subdirectory on a Web server, or use the suggested chrome directory structure on the server. Note though that you may lose some functionality, such as localization, when your application is not using the chrome directory structure. Also, all files should be local to the URL of the main XUL file, otherwise the application may not work due to violations of built-in security.
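The article doesn’t reproduce its directory tree, but using the wt file names that appear later, a chrome package laid out along the lines just described might look like this (the package name wt is an assumption):

chrome/
   wt/
      content/   (wt.xul, wt.js, sites.rdf)
      skin/      (wt.css)
      locale/    (DTD files for localized text)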

A Simple Application

To demonstrate the structure of an XUL implementation, I created an application that is essentially a window with two panes — one on the left with a menu of hypertext links, and one on the right where a Web page can be viewed. Figure 1 shows the application.

To understand how I created this application, look at the sample XUL file shown in Listing 1. The first line of an XUL file is the XML declaration, which includes a reference to the version of XML used. Following that, the file must include a reference to a CSS file, to provide formatting for the XUL contents. The file then defines the application window and sets several properties — namely, the title, width, and height. Namespaces are provided for the application’s elements. By default, all elements in the application are XUL elements, identified by the following namespace (required for all XUL files):

xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"

When completed, the application will use some standard HTML elements, and these are identified with the html namespace. Additionally, the application will have a few elements unique to the specific application, and these are identified by the Webtechniques namespace WT.
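Listing 1 isn’t reproduced here, but a skeleton of the file just described might look like the following; the title, the dimensions, and the html and WT namespace URIs are assumptions.

&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet href="chrome://global/skin/" type="text/css"?&gt;

&lt;!-- title, dimensions, and the html/WT namespace URIs are illustrative --&gt;
&lt;window title="Web Techniques XUL Example" width="800" height="600"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
    xmlns:html="http://www.w3.org/1999/xhtml"
    xmlns:WT="http://www.webtechniques.com/xul"&gt;
   &lt;!-- window contents (boxes, toolbar, and so on) go here --&gt;
&lt;/window&gt;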

Widgets

The different sections of a XUL application are contained in widgets known as “boxes.” Boxes don’t have any visual appearance themselves; their only purpose is to encapsulate several widgets as a whole, and to provide layout orientation for their contents. The orientation is defined with the orient attribute, and other attributes can be used to set the box width and height.

To demonstrate the use of boxes and page layout, Figure 1 shows a first version of the application, with buttons on the left, and a space to display the content on the right. The XUL file for this page is found in Listing 2.

In the listing, I added a box to the window contents and gave it an orientation value of vertical. This orientation forces all of the content to be positioned vertically in the pane. Additionally, the box is given a flex value of 1. The flex attribute can be applied to several different types of XUL elements. It’s used to help the layout engine determine how to expand an element to fill available space. For the outermost box in the page, the contents expand to fill the entire window.

In Listing 2, the outer box contains a toolbar and another, nested box. The toolbar and this new box are layered vertically inside the outermost box.

The nested box is given a horizontal orientation and contains two more nested boxes. These boxes are displayed left to right, and form the main panes of the application. Each of these boxes is given a vertical orientation, which means their contents will be aligned from top to bottom. A XUL splitter is used to separate the contents of the two boxes. This allows the user to resize either side of the application page by dragging the splitter either to the left or to the right.

Notice in the code that the first content box doesn’t have a set flex value. This means the box will be sized according to the width and height of its contents. The second box has a flex value of 1, and it expands to fill the remaining horizontal space.

The first box contains several XUL buttons, surrounded by widgets called springs. The spring element is used to help position elements, and has no visual characteristic of its own other than to occupy space. In this example, a spring is used on either side of the page buttons, to center the buttons within the box.

Also notice in the listing that the second box contains an HTML IFRAME element. As each listing for an article is accessed, the contents of the listing (a Web page) are opened in this IFRAME element. Because the element is an HTML element and not a defined XUL widget, the html namespace must precede the element tag.
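Listing 2 itself isn’t included here, but a sketch of the layout just described might look like the following; the button label, URL, and id values are hypothetical.

&lt;box orient="vertical" flex="1"&gt;
   &lt;toolbar&gt;
      &lt;button value="Close Window" onclick="window.close();" /&gt;
   &lt;/toolbar&gt;
   &lt;box orient="horizontal" flex="1"&gt;
      &lt;!-- left pane: no flex, so it sizes to its contents; label and URL are hypothetical --&gt;
      &lt;box orient="vertical"&gt;
         &lt;spring flex="1" /&gt;
         &lt;button value="Example Article"
             onclick="document.getElementById('content').setAttribute('src', 'http://www.example.com/article1.htm');" /&gt;
         &lt;spring flex="1" /&gt;
      &lt;/box&gt;
      &lt;splitter /&gt;
      &lt;!-- right pane: flex=1, so it expands to fill the remaining space --&gt;
      &lt;box orient="vertical" flex="1"&gt;
         &lt;html:iframe id="content" src="about:blank" flex="1" /&gt;
      &lt;/box&gt;
   &lt;/box&gt;
&lt;/box&gt;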

More Widgets

The application terminates when the user clicks on the Close Window button in the application’s toolbar. To add a button widget to a page, you’d use the button element as follows:

<button value="Close Window" onclick="window.close();" />

The default appearance for the button is provided by CSS files that are included with Mozilla/Navigator.

To provide feedback to the user whenever a page is being loaded in the right pane, I have decided to extend the code by adding a progressmeter widget (see Figure 2). The progressmeter widget can have a mode of determined or undetermined. A determined meter is one in which the developer knows the exact length of time of an operation, and controls the meter accordingly. The undetermined meter is used when the length of time for an operation is unknown. Accessing a Web page falls into the undetermined category.

To support the use of the meter, I add JavaScript code to activate the meter. To do this, I put my code in a JavaScript .js file, and save the file in the XUL application’s contents directory. The code is shown in Listing 3.

I then create a reference to the JavaScript file in the XUL document. When doing this, I have to modify the onclick event handlers for the menu buttons so that they call the JavaScript loadItem() function instead of jumping directly to the URL. Listing 4 contains the contents of the second version of the XUL application. Now, when the application loads a Web page, the meter signals that a page is loading.
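Listings 3 and 4 aren’t reproduced here, but the JavaScript might look something like this sketch; the element ids and the idea of passing the URL directly to loadItem() are assumptions.

// wt.js (sketch): activate the progressmeter and load the requested page
// element ids (loadMeter, content) are assumed to match the XUL file
function loadItem(url)
{
   // switch the meter (e.g. <progressmeter id="loadMeter" mode="determined"/>)
   // into undetermined mode to show that something is happening
   var meter = document.getElementById("loadMeter");
   meter.setAttribute("mode", "undetermined");

   // load the page into the right-hand pane
   var frame = document.getElementById("content");
   frame.setAttribute("src", url);
}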

Integrating RDF Files

In the previous versions of the XUL application, the URLs of the target Web sites were hard coded into the page. A better approach would be to add the URLs as application data from RDF files. RDF files are used with XML to define application specific data, and can be altered easily without the need to modify the main XUL application pages.

As shown in Listing 5, many of the pages in an RDF list can be from the same site, and are nested by URL location.
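Listing 5 isn’t reproduced here; a simplified (and flattened) sites.rdf along the lines described might look like the following, with the WT namespace URI, urn values, titles, and URLs all invented for illustration.

&lt;?xml version="1.0"?&gt;
&lt;RDF:RDF xmlns:RDF="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:WT="http://www.webtechniques.com/rdf#"&gt;

   &lt;!-- the container the XUL box points at with ref="urn:source:data" --&gt;
   &lt;RDF:Seq RDF:about="urn:source:data"&gt;
      &lt;RDF:li RDF:resource="urn:source:site1" /&gt;
      &lt;RDF:li RDF:resource="urn:source:site2" /&gt;
   &lt;/RDF:Seq&gt;

   &lt;!-- titles and URLs here are invented for illustration --&gt;
   &lt;RDF:Description RDF:about="urn:source:site1"
                    WT:title="Example Site One"
                    WT:site="http://www.example.com/one.htm" /&gt;
   &lt;RDF:Description RDF:about="urn:source:site2"
                    WT:title="Example Site Two"
                    WT:site="http://www.example.com/two.htm" /&gt;
&lt;/RDF:RDF&gt;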

To facilitate working with RDF data, XUL has a template engine that integrates external data with the XUL application. The template engine is activated when the layout engine finds opening and closing template elements, and all of the contents contained within the tags then form the template pattern. This pattern is used to determine how to lay out repeating values found in the RDF file.

We’ll add a template to the XUL application and set the pattern to be one button for every listed item in the RDF file. To do this, the RDF data source file must first be attached to the XUL file, and this is done in the box surrounding the template with the following command:

<box orient="vertical" flex="1" datasources="sites.rdf" ref="urn:source:data" >

The datasources attribute specifies the RDF file, and the ref attribute refers to the location in the RDF file to pull in the sequence of data.

The template, given in Listing 6, contains a button widget with the attribute of uri="rdf:*", which indicates where in the RDF file to begin the template matching. An application specific attribute (with the WT namespace) is added to the button to capture the URL associated with the Web page. The page’s title will be shown as the button’s display value.
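Listing 6 isn’t reproduced here; assuming the sites.rdf sketch above, the template might look something like this, with the WT property names and the loadItem() call being assumptions.

&lt;box orient="vertical" flex="1" datasources="sites.rdf" ref="urn:source:data"&gt;
   &lt;template&gt;
      &lt;!-- one button is generated for each resource found in the RDF sequence --&gt;
      &lt;button uri="rdf:*"
              value="rdf:http://www.webtechniques.com/rdf#title"
              WT:site="rdf:http://www.webtechniques.com/rdf#site"
              onclick="loadItem(this);" /&gt;
   &lt;/template&gt;
&lt;/box&gt;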

The JavaScript loadItem() function from Listing 3 must be modified to pull the URL for the Web site as an attribute of the item passed in from the event handler. The new function is given in Listing 7.

Figure 3 shows the page that results from using the RDF file and the XUL template. Note that each nested item from the RDF file is actually embedded within the button of its containing parent. So, you see a top button containing two other buttons, and so on.

Trees

Buttons aren’t necessarily the best widget to use when designing an application that uses a repeating template; menus and trees often work better. A XUL tree is similar to an HTML table in that the tree is delimited with one tag (tree), the rows with another (treerow), and the tree contents with yet another (treecell). The treehead tag defines a header for the tree, and the treeitem tag is used to allow users to click and select tree items.

Unlike an HTML table, though, XUL trees provide sophisticated processing for nested items. By default nested items aren’t displayed until the user clicks on an item’s containing parent.

In Listing 8, I’ve replaced the buttons with an XUL tree structure. Individual tree cells hold each hypertext link, and nested data items are displayed as nested tree items. Clicking on the graphic associated with the top-level tree items (known as the “twisty”) displays the contained items. Clicking on any contained item opens the associated Web page in the right pane of the application, as displayed in Figure 4.
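Listing 8 isn’t reproduced here, but a static sketch of the tree markup described above (without the RDF template, and with invented labels) might look like this:

&lt;tree flex="1"&gt;
   &lt;treehead&gt;
      &lt;treerow&gt;
         &lt;treecell value="Sites" /&gt;
      &lt;/treerow&gt;
   &lt;/treehead&gt;
   &lt;treechildren&gt;
      &lt;!-- labels are invented; the real listing generates these rows from the RDF template --&gt;
      &lt;treeitem container="true" open="true"&gt;
         &lt;treerow&gt;
            &lt;treecell value="Example Site One" /&gt;
         &lt;/treerow&gt;
         &lt;treechildren&gt;
            &lt;treeitem&gt;
               &lt;treerow&gt;
                  &lt;treecell value="Example Page" onclick="loadItem(this);" /&gt;
               &lt;/treerow&gt;
            &lt;/treeitem&gt;
         &lt;/treechildren&gt;
      &lt;/treeitem&gt;
   &lt;/treechildren&gt;
&lt;/tree&gt;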

Skinning

One of the more controversial aspects of XUL is skinning — the ability to change the appearance of any of the application’s chrome, including the frame, the buttons, the toolbars, menus, and so on. You can change the entire look and feel of Mozilla and Navigator 6 just by downloading new chrome packages and installing their contents. With XUL, the only limit to skin design is the designer’s imagination.

So why is giving Web developers the ability to design different interface skins so controversial? Imagine for a moment that Salvador Dali or Picasso painted frames rather than the canvases. Now, try to visualize what painting could hold its own within these frames. Not many could. Application designers argue that well-designed user interfaces could be ruined by a third party’s poorly-designed skins. The other side of this argument is that being able to create a custom user interface is an attractive concept for developers and designers who don’t want to have to work within the rigid confines of the widgets defined by current operating systems.

To create a new skin for a XUL application, you provide a custom CSS file that defines the styles for the widgets in your application. If you wish to have access to the default appearance and behaviors for the widgets, you must import the global styles into your new CSS file with the following command:

@import url(chrome://global/skin/);

You can then add styles for the widgets used in your application.

Listing 9 shows the CSS file for the XUL application used in this article. In the file, I changed the style of the splitter so that it has a maroon background when inactive, and a teal one when active. I also changed the appearance of the button on the splitter (called the “grippy”). I modified the meter to have a gold background, and did the same for the selection highlight on the tree. I even added new GIF images for the toolbar grippy (the little downward arrow that’s used to hide or show the toolbar).
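Listing 9 isn’t included here either; a much-reduced sketch of the kind of rules it contains might look like the following, with the selectors being rough guesses at the idea rather than the article’s actual styles.

@import url(chrome://global/skin/);

/* a rough sketch; these selectors approximate the idea, not the article's Listing 9 */

/* maroon splitter, teal while it's being dragged */
splitter {
   background-color: maroon;
}
splitter:active {
   background-color: teal;
}

/* gold background for the progress meter */
progressmeter {
   background-color: gold;
}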

In this example, I modified the styles for XUL widgets directly in the CSS file. However, the preferred approach is to add new widget classes, and use them in your application as follows:

button.wt_button { ... }
<button class="wt_button" ... />

With this approach, you still have access to the default appearance and behavior for each widget.

Once I modified the widgets, I opened my main XUL application file and replaced the reference to the global CSS file with a reference to the new CSS file, using the following line:

<?xml-stylesheet href="wt.css" type="text/css"?>

To access the completed application, create a page that has the following link on it, and load that page in Mozilla or Navigator 6:

<a href="#" onclick="window.open('wt.xul','example','chrome');">test</a>

If you use PR 1 of Navigator 6, note that the toolbar won’t show properly due to changes in the XUL schema that occurred after Navigator 6 PR 1 was released. It should work correctly with PR 2, and works with Mozilla build M16 (and up, hopefully).

One more thing to note is that you may have to add the XUL MIME type to your server’s configuration in order to make sure the application XUL page downloads correctly. The MIME type is simply text/xul.

Conclusion

You’ll find the complete application with all necessary files in the wt.zip file. With just a few changes of the XML file associated with the XUL application — adding a meter and using a tree instead of buttons — I changed application functionality and provided better user feedback and a better design. With a few adjustments to the CSS file for the application, I was able to create a new and different look to go with my application’s functionality. XUL promises to be a powerful tool for Web application development and deployment. I look forward to seeing future iterations and refinements of the technology in future browser versions.

Categories
Technology

PerlScript: A hot ingredient for ASP

Originally published at Web Techniques

Microsoft’s Active Server Pages (ASP) technology has grown very popular as a Web server-side technique, combining HTML, embedded scripts, and external components to deliver dynamic content. One feature of ASP is that different languages can be used for its scripting portion, though the most widely used ASP scripting language is VBScript—a subset of Microsoft’s Visual Basic.

However, just because VBScript is the most popular language used by ASP developers doesn’t mean it’s the only one, or even the best one to use in a particular circumstance. For instance, Perl has long been synonymous with Web server development, beginning with the earliest uses of the language for CGI, and is still one of the most popular languages among Web developers. But there hasn’t been much discussion of using Perl with ASP.

If your organization has been working with Perl, and you’re interested in developing for the ASP environment, you don’t have to give up your favorite language (or your existing Perl code) to make the transition to the new technology—you can employ Perl for ASP scripting through the use of PerlScript.

A (Very) Brief Overview of ASP

Microsoft originally introduced ASP technology with the company’s own Web server, Internet Information Server (IIS). However, ASP has since been ported to other Web servers through the use of software such as ChiliSoft’s platform-independent version of ASP. In addition, ASP was created originally to work in a Windows environment, but again thanks to ChiliSoft and other companies, ASP now runs in non-Windows environments such as UNIX and Linux.

Still, the most popular use of ASP is within the Windows environment, with pages hosted on IIS. This environment—specifically Windows NT/IIS 4.0—is the one I’ll discuss.

ASP pages have an .asp extension, and are a mix of HTML and embedded script. When a client requests the page, the embedded script is accessed and processed. The results generated by the script are embedded into the Web page, which is then returned to the client browser.

The Response object—along with the other built-in ASP objects Server, Session, Request, and Application—provides access to the ASP and application environment for the ASP developer. The Response object provides a way to send information back to the Web-page client; the Request object provides access to information sent from the client, such as form contents; the Application object contains information that persists for the lifetime of the ASP application; the Session object contains information that persists for the lifetime of the particular ASP user session; and the Server object, among other things, lets the ASP developer create external components.

ActivePerl and PerlScript

A company named ActiveState was formed in 1997 to provide Perl tools for developers on all platforms. Among ActiveState’s more popular products is ActivePerl, a port of Perl for the Win32 environment.

ActivePerl is a binary installation containing a set of core Perl modules, a Perl installer (Perl Package Manager), an IIS plug-in for use with Perl CGI applications, and PerlScript. What is PerlScript? PerlScript is Perl, but within an ASP environment—it’s a Perl implementation of the Microsoft Scripting Engine environment, which allows Perl within ASP pages.

ActivePerl can be downloaded for free from ActiveState’s Web site (see the “Online Resources” section). To try it for yourself, access the ActiveState Web site and find the ActivePerl product page.

The installation process requires very little user input. You’ll need to specify where you want to put the files, and be sure to select the option to install PerlScript.

Using PerlScript Within ASP Pages

By default in IIS, all ASP scripting is VBScript, but you can change this using one of three different techniques. First, you can change the default scripting language for the entire ASP application using the IIS Management Console, and accessing the Properties dialog box for the Virtual Directory or Site where the ASP application resides. Click on the Home Directory (or Virtual Directory) tab, and then click on the Configuration button on the page. From the dialog box that opens, select the App Options page. Change the default scripting language from “VBScript” to “PerlScript.”

A second technique is to specify the scripting language in the scripting block itself. With this technique you could actually choose more than one scripting language in the page:

<SCRIPT language="PerlScript" RUNAT="Server"> . . . </SCRIPT>

A third technique is to specify the scripting language directly at the beginning of the Web page. This is the approach we’ll use for examples. Add the following line as the first line of the ASP page:

<%@ LANGUAGE = PerlScript %>

All scripting blocks contained in the page are now handled as PerlScript.
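As a minimal sketch (not from the article), a page using this directive might look like the following:

<%@ LANGUAGE = PerlScript %>
<html>
<body>
<%
   # $Response is the built-in ASP Response object, available to the script automatically
   $Response->Write("Hello from PerlScript");
%>
</body>
</html>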

Accessing Built-In ASP Objects From PerlScript

To assist the developer, PerlScript provides access to several objects available only within the ASP environment. As mentioned earlier, these are the Application, Session, Response, Request, and Server objects.

The Application object is created when an ASP application first starts, and lasts for the lifetime of the application. The object has two COM collections, both of which can be accessed from script: the StaticObjects collection, with values set using the <OBJECT> tag within an application file called global.asa; and the Contents collection, which contains values set and retrieved at runtime. A COM collection is a group of similar objects treated as a whole, with associated features that let the developer access individual elements directly, or iterate through the entire collection.

In addition to the two collections, the Application object also has two methods, Lock and UnLock, which are used to protect the object against attempts by more than one application user at a time to change its values.

We’ll take a closer look at using the Lock and UnLock methods by setting a value in the Application’s collection in one ASP page, and then retrieving that same value from another page.

First, Listing 1 contains a page that sets a new value to Application Contents by first locking down the object, setting the value, and then unlocking the object. Notice that you don’t have to create the Application object—it’s created for you, and exists in the main namespace of the PerlScript within the ASP page (the same holds true for all of the ASP objects).

If you’ve worked with VBScript you’ve probably noticed that you have to use a different technique to set the value in the Contents collection with PerlScript. VBScript allows direct setting of collection values and properties using a shorthand technique, similar to the following:

Application.Contents("test") = val

PerlScript, on the other hand, doesn’t support this shorthand technique. Instead, you have to use the SetProperty method to set the Contents item:

$Application->Contents->SetProperty('Item', 'test', $val);

Additionally, you have to use SetProperty to set ASP built-in object properties with PerlScript, or you can use the Perl hash dereference to set (and access) the value:

my $codepage = $Session->{CodePage};

Listing 2 contains another ASP page that accesses the variable set in Listing 1, prints out the value, increments it, and resets it back to the Application object. It then accesses this value and prints it out one more time. Accessing this page from any number of browsers, and from any number of separate client sessions, increments the same Application item because all of the sessions that access the ASP application share the same Application object.
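Listings 1 and 2 aren’t reproduced here, but the pattern they follow might be sketched like this (the item name test follows the article; everything else is illustrative):

<%@ LANGUAGE = PerlScript %>
<%
   # lock the Application object, read the current value, increment it, and store it back
   $Application->Lock();
   my $val = $Application->Contents('test');
   $Application->Contents->SetProperty('Item', 'test', $val + 1);
   $Application->UnLock();

   $Response->Write("test is now " . $Application->Contents('test'));
%>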

In addition to the Application object, there is also a Session object, which persists for the lifetime of a specific user session.

This object is created when the ASP application is accessed for the first time by a specific user, and lasts until the session is terminated or times out, or the user disconnects in some way from the application. It, too, has a StaticObjects and a Contents collection, but unlike the Application object, you don’t lock or unlock the Session object when setting a value in Contents. But you do access the Contents collection in the exact same manner, both when setting a value:

$Session->Contents->SetProperty('Item', 'test', 0);

as well as when retrieving the value:

my $val = $Session->Contents('test');

Additional Choices

There are also other properties and methods available with Session, including the Timeout property, used to set the session’s timeout value, and the Abandon method, used to abandon the current session. Each session is given a unique identifier, SessionID, and this value can be accessed in a script. But use caution when accessing this value if you hope to identify a unique person—the value is only unique and meaningful for a specific session.

In support of internationalization, there are Session properties that control the character set used with the page (CodePage) and that specify the Locale Identifier (LCID). The Locale Identifier is an international abbreviation for system-defined locales, and controls such things as currency display (for instance, dollars versus pounds).

Listing 3 shows an ASP page that sets the Timeout property for the Session object, and accesses and prints out both the CodePage and the LCID values. Retrieving this ASP page in my own environment and with my own test setup, the value printed out for CodePage is 1252—the character mapping used with American English and many European languages. The value printed out for LCID is 2048—the identifier for the standard U.S. locale.
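A sketch along the lines of Listing 3 might look like this; the 30-minute timeout is an arbitrary value for illustration.

<%@ LANGUAGE = PerlScript %>
<%
   # properties can be set and read through the Perl hash dereference syntax
   $Session->{Timeout} = 30;              # time the session out after 30 minutes (illustrative value)

   my $codepage = $Session->{CodePage};   # 1252 in the author's test setup
   my $lcid     = $Session->{LCID};       # 2048 in the author's test setup

   $Response->Write("CodePage: $codepage, LCID: $lcid");
%>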

The Application and Session examples used a third ASP built-in object, the Response object. This object is responsible for all communication from the server application back to the client. This includes setting Web cookies on the client (with the Cookies collection), as well as controlling whether page contents are buffered before sending back to the client, or sent as they’re generated, through the use of the Buffer property:

$Response->{Buffer} = 1;

You can use the Buffer property in conjunction with the End, Flush, and Clear methods to control what is returned to the client. By setting Buffer to true (Perl value of 1), no page contents are returned until the ASP page is finished. If an error occurs in the page, calling Clear erases all buffered contents up to the point where the method was called. Calling the End method terminates the script processing at that point and returns the buffered content; calling the Flush method sends the buffered contents to the client immediately.

Listing 4 shows an ASP page with buffering enabled. In the code, the Clear method is called just after the first Response Write method, but before the second. The page that’s returned will then show only the results of the second Write method call.
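Listing 4 isn’t reproduced here; the behavior it demonstrates can be sketched like this:

<%@ LANGUAGE = PerlScript %>
<%
   # buffering must be turned on before any content is returned to the client
   $Response->{Buffer} = 1;

   $Response->Write("This line is buffered and then thrown away...");
   $Response->Clear();   # erase everything buffered so far

   $Response->Write("...so only this line shows up in the returned page.");
%>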

The Buffer property must be set before any HTML is returned to the client, and this restriction also applies to several other Response properties, such as the Charset property (alters the character setting within the page header); the ContentType property (alters the MIME type of the content being returned, such as “text/html”); the AddHeader method, which lets you specify any HTTP header value; and the Status property, which can be used to set the HTTP status to values such as “403 Forbidden” or “404 File not found”. For example:

$Response->{Status} = "403 Forbidden";

If buffering is enabled for a Web page, then properties such as Charset and ContentType, as well as the AddHeader method can be used anywhere within the ASP page.

You can redirect the Web page using the Redirect method call to specify a new URL. As with the other properties and methods just mentioned, Redirect must also occur before any HTML content:

$Response->Redirect("http://www.somesite.com");

In addition to manipulating HTTP headers, the Response object also generates output to the page using the Write method, as demonstrated in the previous examples. You can also return binary output to the client using BinaryWrite. You can override whether an ASP page is cached with proxy servers through the use of the CacheControl property, as well as set cache expiration for the page by setting the Expires or the ExpiresAbsolute properties:

$Response->{Expires} = 20; # expires in 20 minutes

You can test whether the client is still connected to the server with the IsClientConnected property.

Communication doesn’t flow just from the server to the client. The Request object handles all of the communication coming from the client. This object, as with Response, has several different methods, properties, and collections. The collections interest us the most.

You can read Web cookies using the Request Cookies collection. And it’s possible to set and get information about client certificates using the ClientCertificate collection.

You can also process information that’s been sent from a client page using an HTML form, or appended as a query string to the Web page’s URL:

<a href="somepage.asp?test=one&test2=two">Test</a>

The two collections that hold information passed to the ASP page from a form or a query string are: the QueryString collection, and the Form collection. Use QueryString when data is passed via the GET method, and use Form when data is passed via the POST method.

Regardless of which technique you use to send the name/value pairs, accessing the information is similar. Listing 5 shows an ASP page that processes an HTML form that has been POSTed.

The example ASP page pulls out the values of three text fields in the form: lastname, firstname, and email. It then uses these to generate a greeting page, including adding a hypertext link for the email address. A sketch along these lines follows.
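Listing 5 isn’t reproduced here, but the processing it performs might be sketched like this, assuming text fields named lastname, firstname, and email as described above:

<%@ LANGUAGE = PerlScript %>
<%
   # pull the POSTed field values from the Form collection
   my $firstname = $Request->Form('firstname')->item;
   my $lastname  = $Request->Form('lastname')->item;
   my $email     = $Request->Form('email')->item;

   # generate the greeting page, with a hypertext link for the email address
   $Response->Write("<h3>Hello, $firstname $lastname</h3>");
   $Response->Write("<p>Your email address is <a href=\"mailto:$email\">$email</a></p>");
%>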

If the form had been sent using the GET method (where the form element name/value pairs get concatenated onto the form processing page’s URL), then the values would be accessed from the QueryString collection instead:

my $firstname = $Request->QueryString('firstname')->item;

In addition to the Form and QueryString collections, the Request object also has a ServerVariables collection, containing information about the server and client environment. The ServerVariables collection is comparable to accessing %ENV in CGI.

You can access individual elements in the Server Variables collection by specifying the exact variable name when accessing the collection:

my $val = $Request->ServerVariables('PATH_TRANSLATED')->item;

Or you can iterate through the entire collection. To do this, you can use the Win32::OLE::Enum Perl module to help you with your efforts. The Enum class is one of the many modules installed with ActivePerl, and provides an enumerator created specifically to iterate through COM collections such as ServerVariables.

Listing 6 shows an ASP page that uses the Enum class to pull the elements of the ServerVariables collection into a Perl list. You can then use the Perl foreach statement to iterate through each ServerVariables element, printing out the element name and its associated value.
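Listing 6 isn’t reproduced here; a sketch of the approach might look like the following:

<%@ LANGUAGE = PerlScript %>
<%
   use Win32::OLE::Enum;

   # create an enumerator over the ServerVariables COM collection
   my $enum = Win32::OLE::Enum->new($Request->ServerVariables);
   my @vars = $enum->All();   # the members of the collection are the variable names

   foreach my $name (@vars) {
      my $value = $Request->ServerVariables($name)->item;
      $Response->Write("$name = $value<br />");
   }
%>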

If an HTML form contains a File input element—used to upload a file with the form—you can use the Request BinaryRead method to process the form’s contents. The TotalBytes property provides information about the total number of bytes being uploaded.

Break Out with COM

All of the examples up to this point have used objects that are built in to the ASP environment. You can also create COM-based components within your ASP pages using the built-in Server object. The Server object has one property, ScriptTimeout, which can be used to determine how long a script will process—a handy property if you want to make sure scripting processes don’t take more than a certain length of time.

The Server object also has a couple of methods that can be used to provide encoding, such as HTML encoding, where all HTML-relevant characters (like the angle bracket) get converted to their display equivalents. MapPath maps server paths relative to their machine locations. URLEncode maps URL-specific characters into values that are interpreted literally rather than procedurally:

my $strng = $Server->URLEncode("Hello World!");

The result of this method call is a string that looks like:

Hello+World%21

Although these methods and the one property are handy, Server is known generally as the object used to instantiate external ASP components, through the use of the CreateObject method. This method takes as its parameter a valid PROGID for the ASP component. A PROGID is a way of identifying a specific COM object, using a combination of object.component (sometimes with an associated version number):

simpleobj.smplcmpnt

As an example, I created a new Visual Basic ActiveX DLL, named the project simpleobj, and the associated class smplcmpnt. The component has one method, simpleTest, which takes two parameters and creates a return value from them, based on the data type of the second parameter. This component method, shown in Example 1, has a first parameter defined as a Visual Basic Long value (equivalent to a Perl integer), and a second parameter of type Variant, which means the parameter could be of any valid COM data type—Visual Basic functions are used to determine the data type of the value.

A new page uses the Server CreateObject method to create an instance of this ASP component, and tests are made of the component method. As shown in Listing 7, the first test passes two integers to the external VB component method. The component tests the second parameter as a Long value, adds the two parameters, and returns the sum.

The next test passes a string as the second parameter. The component tests this value, finds it is not a number, and concatenates the value onto the returned string.

The script for the final test creates a date variable using the Win32::OLE::Variant Perl module, included with ActivePerl. The standard localtime Perl function is used to create the actual date value, but if this value isn’t “wrapped” using the Variant module, the Visual Basic ASP component will receive the variable as a string parameter rather than as an actual date—PerlScript dates are treated as strings by ASP components. When the Visual Basic component receives the date as the second parameter, the component finds that it is not a number, and concatenates the value onto the string returned from the function. When displayed, the returned string looks similar to the following:

3/15/1905 100

I could have passed the date value directly instead of using Variant, but as I mentioned, the COM-based VB component sees the Perl date as a string rather than as a true date type. The Variant Perl module provides techniques to ensure that the data types we create in PerlScript are processed in specific ways in ASP components.
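Listing 7 isn’t reproduced here, but a sketch of the tests against the simpleobj.smplcmpnt component described above might look like this:

<%@ LANGUAGE = PerlScript %>
<%
   use Win32::OLE::Variant;

   # simpleobj.smplcmpnt is the hypothetical VB component described in the article
   my $obj = $Server->CreateObject("simpleobj.smplcmpnt");

   # second parameter is a number: the component adds the two values
   $Response->Write($obj->simpleTest(5, 10) . "<br />");

   # second parameter is a string: the component concatenates it onto the result
   $Response->Write($obj->simpleTest(5, "apples") . "<br />");

   # wrap the Perl date in a Variant so the component doesn't just see a plain string
   my $date = Variant(VT_DATE, scalar localtime);
   $Response->Write($obj->simpleTest(100, $date) . "<br />");
%>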

Summary

Perl is a mature language that has been used for many years for Web development. As such, there is both expertise with, and a preference for, using this language for future development with the newer Web development techniques such as ASP.

ActivePerl and PerlScript are the key tools for using Perl within the ASP environment. Perl can be used for ASP scripting through PerlScript, but the Perl developer also has full access to the objects necessary to work within the ASP environment: namely the ASP built-in objects such as the Response and Request objects.

Additional modules to assist Perl developers—such as Win32::OLE::Enum and Win32::OLE::Variant—are included with the ActivePerl installation, and help make PerlScript as fully functional within the ASP scripting environment as VBScript.

Best of all, with ActivePerl and PerlScript you can develop within an ASP environment and still have access to all that made Perl popular in the first place: pattern matching and regular expressions, the Perl built-in functions, and a vast library of free or low-cost Perl modules to use in your code. Interest in ASP is growing, and with PerlScript you can work with this newer Web technology and still program in your favorite programming language.