Categories
Technology

Today’s Unix: New all over again

Originally published at O’Reilly onlamp.com, recovered from the Wayback Machine.

It used to be that Unix was for the geeks, while the rest of the world used less command-intensive, and usually less powerful, operating systems such as Windows or the Mac OS. Even with the advent of native GUIs such as the X Window system, Unix was not for the uninitiated. If you didn’t understand that pwd returned the name of the working directory, ls listed the contents of a directory, and that you accessed help with man, then you needed to carefully remove your hands from the keyboard and back away slowly, trying not to touch anything as you moved.
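For the record, those commands really are that terse; a minimal session at any Unix shell prompt looks something like this:

```shell
pwd       # print the name of the working directory
ls /      # list the contents of a directory (here, the root directory)
# man ls  # read the online help for ls (press q to quit the pager)
```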

Well, the times they are a-changing. Today’s Unix is sexier, friendlier, and moving in completely different neighborhoods than yesterday’s Unix. Gone is the tough, geeks-only image, wrapped in discussions of kernels and cron jobs, conversations interspersed with esoteric terms such as awk, sed, daemons, and pipes. In its place is a kinder, gentler, more easily accessible Unix, wearing new clothes in soft undertones of unintimidating blue.

Still, the basic core of the old Unix remains, forming a hybrid of its older, powerful functionality that’s been integrated with modern conveniences. In many ways, today’s Unix, with its blend of old and new technologies, open source support, and bright shiny new interfaces, reminds me of that old wedding attire rhyme, “something old, something new, something borrowed, and something blue.”

Something Old

There is a commonality to all popular flavors of Unix — Solaris, Linux, FreeBSD, the new Mac OS X Darwin, and so on. Mac OS X may have a unique user interface, but if you bring up a terminal (access the Finder menu, select Go, then Applications, then Utilities), you’ll be able to enter Unix commands exactly as you would in command line mode in OpenBSD or HP-UX. In addition, the overall boss of the system is still root, though how root is managed can change, dramatically, from Unix box to Unix box, and among individual installations.

Each flavor of Unix is based on the same principles of shell access and kernel system control, and each comes with a minimum functionality that allows you to communicate with the operating system. For instance, you can with assurance go into any Unix box and type vi at the command line, and the vi editor will open. Being a longtime vi fan (successfully resisting any urge to move over to emacs over the years), I’m always glad to see my old friend, regardless of whether I’m accessing vi on my FreeBSD server, a Solaris box at work, the Linux dual boot on my Dell laptop, or within the Terminal window of my Titanium Powerbook.

Additionally, no matter your version of Unix, you’ll interact with the operating system through a shell; multiple users can share system resources because the operating system supports preemptive multitasking (the ability to run tasks seemingly simultaneously without clashes over resources); you can work with files, directories, and resources with common commands such as cd, ls, mkdir, grep, find, and so on; you’ll have access to a wide variety of open source and freely available utilities such as the previously mentioned vi; the smarts of the system live in the kernel; and root is still the superuser that can take everything down in one command.
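A quick session shows how far those common commands go; this is just a sketch, with a throwaway directory name:

```shell
mkdir scratch               # create a new directory
cd scratch                  # change into it
echo "hello, unix" > note.txt
ls                          # list the directory contents: note.txt
grep hello note.txt         # search the file: prints "hello, unix"
find . -name 'note.txt'     # locate the file by name: ./note.txt
cd .. && rm -r scratch      # clean up after ourselves
```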

However, after having just reassured you that there is little difference between old and new versions of Unix, I’ll now contradict myself and tell you that today’s Unix isn’t exactly the same as the old Unix, as this old dog has learned some new tricks.

Something New

One of the biggest differences I’ve found with the newer forms of Unix, or even more modern versions of old classics, is how much easier it is to do things. For instance, one of the most complicated and nontrivial tasks within the Unix world used to be, at least for me, installing software.

Not that long ago, after downloading and uncompressing the software you wanted, you’d most likely find that the package contained source code, which you’d then have to compile and install. Unfortunately, it seemed as if no two Unix installations were ever alike, so you’d have to work with the source code’s Makefile (basically an instruction file describing how to build the software), tweaking it, changing the libraries, the locations, the flags used, and so on, in order to get a successful build and install. The POSIX Unix standards effort helped with some of this, but for the most part, you just had to work the build and installation through, and unless you were really lucky, the process could take a frustrating amount of time.

(Really, it was no wonder Unix people became so clever working with basic Unix functionality — no one wanted to go through the hassles of installing new utilities and tools.)

Today, spending hours and days tweaking Makefiles is virtually a thing of the past — most flavors of Unix now come with tools that not only tweak the Makefile for you, but can download the software and build and install it, all in one command.

If you’re a Red Hat-style Linux user, you use RPM to manage your software; if you’re a Debian user, you’ll use dpkg. FreeBSD users use the ports collection to access and install software. Even Mac OS X has a similar system, called Fink. In addition, all Unix users can use three new utilities — autoconf, automake, and libtool — to analyze the environment and generate a Makefile that’s customized to the machine.
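The end result of the autotools is an ordinary Makefile tuned to your machine. As a toy sketch of what the generated file drives, here’s a trivial hand-written Makefile and the make run that reads it (the directory name and echoed message are made up for illustration):

```shell
mkdir -p build-demo && cd build-demo

# autoconf's job is to generate a (much larger) file like this one,
# customized to the local libraries, paths, and compiler flags
printf 'all:\n\t@echo "build step would run here"\n' > Makefile

make                      # reads the Makefile and runs the "all" rule
cd .. && rm -rf build-demo
```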

Software installation isn’t the only area of improvement to today’s Unix. Getting a software package, or anything else over the Internet, is easier with new tools and utilities, such as GNU’s wget utility, which can download a file over either the HTTP or FTP protocol. Though I’ll always remain true to vi, the more modern vim has lured users away from vi’s simple (but elegant) functionality, not to mention users of that other editor, emacs. For scripting, you have access to shiny new scripting/programming languages such as Python and Ruby, in addition to that old favorite, Perl. And very few people use the old C shell any more — most use the more modern bash (Bourne Again Shell), or newer variants.
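For instance, fetching a tarball with wget is a one-liner. The commands below are shown for illustration rather than run, and the URLs are hypothetical:

```shell
# download a file over HTTP
#   wget http://www.example.com/software/package.tar.gz
# the same command speaks FTP
#   wget ftp://ftp.example.com/pub/package.tar.gz
# and -c resumes an interrupted download where it left off
#   wget -c http://www.example.com/software/package.tar.gz
```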

No matter what new toy or tool you use, much of this simplified and friendlier Unix environment is due to the operating system’s implicit partnership with the open source community. In fact, this partnership is the main reason that Unix is such a thriving, vibrant operating system thirty years after its introduction.

Something Borrowed

Though Unix has received support from major corporations such as HP and Sun, it’s the open source developers, spending their time creating utilities and tools and working on more portable versions of Unix, who kept what was once a cryptic and difficult operating system around long enough for it to mature into the powerful, elegant, and user-friendly OS we have today. Unix owes a huge debt to the open source community for providing applications such as those mentioned in the last section.

Need a database for your FreeBSD box? You can download and use MySQL at no charge as long as your use is personal. Interested in serving up Web pages? Download and install the number one Web server, Apache. Once Apache is installed, you can develop with Java using Tomcat, or you can use an embedded scripting approach with PHP — again, downloadable, open source, free software. It’s true that many of these products also work on other operating systems, such as Windows. But the concepts behind each began with Unix.

One could fill books just trying to list all of the software that’s freely available, or available for a small price, and most of it is open source and runs on the majority of Unix platforms.

(If you have a spare day or two, you can access SourceForge and browse through all of the open source projects it manages. In addition, access FreeBSD software at the FreeBSD Web site; Mac OS X downloads at www.apple.com/downloads/macosx/; more on Linux at www.linux.org/; and general Unix utilities and tools at the GNU site.)

In the last decade, we’ve also seen a blend of traditional open source effort and corporate management, with releases of Linux such as Red Hat or Mandrake, and Apple’s Mac OS X, with its proprietary interface built on an open source Unix known as Darwin. It’s this combination of open source effort and corporate support and stability that’s taking Unix to the desktop and laptop machines of the world; a move that’s bringing our old friend out of the basement and giving it new purpose.

Something Blue

One of the first things I tried when I received my new Powerbook was to access the Terminal application and attempt to log in as root using the well-known su -l command. It was then that I discovered that Mac OS X was more than a smart GUI layered on to a Unix kernel — the integration between the two is much more extensive.

If you’ve accessed the Terminal in Mac OS X, then you’re most likely aware that the root user is not enabled by default. To enable root, you must open the Netinfo Manager application (from the Finder menu, access Go, then Applications, then Utilities), click on the Security menu, and choose “Authenticate” to confirm that you’re the computer’s administrator. Once authenticated, you can then enable or disable the root user.

Having to manually enable root is just one of the many twists to the integration of Unix (Darwin) with Apple’s sophisticated user interface, known as Aqua. The reason for the disabled root is security — if the traditional superuser is disabled, it’s much more difficult to crack into the Mac OS X kernel functionality and wreak havoc on the system. Since most Mac users will never access the command line, or ever need true root access, a decision was made to disable it by default.
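For the command-line minority, the practical alternative is sudo, which grants one-off superuser rights without ever enabling root. The commands below are shown for illustration, not run here:

```shell
# run a single command as the superuser; sudo prompts for the
# administrator's own password, not root's
#   sudo ls /var/root
# open a root shell when a longer session is needed
#   sudo -s
```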

You’ll bump into another twistie when you attempt to compile downloaded software, even downloaded software that makes use of the automated configuration tools. Chances are you’ll run into an error similar to the following:

configure: error: installation or configuration problem: C compiler cannot create executables

This less-than-helpful message occurs because you have to install the Developer Tools, which come on a separate disk from the Apple Mac OS X installation disk.
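A quick sanity check before running configure takes the mystery out of the error; this sketch just looks for a C compiler on the PATH:

```shell
# does the system have a working cc on the PATH?
if command -v cc >/dev/null 2>&1; then
    echo "C compiler found: $(command -v cc)"
else
    echo "no C compiler; install the Developer Tools first"
fi
```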

In spite of the differences you might encounter working through the Aqua user interface, one common connection between this new environment and more traditional Unix flavors is your ability to use X Window (X11)-based software within the Mac OS X environment, even though the two could be considered competitive GUIs.

I wrote this article using OpenOffice, an open source office application that’s freely available for Linux, Solaris, and Windows. Recently, the OpenOffice organization released a beta version of the application for Mac OS X. Rather than attempt to port OpenOffice directly into Aqua, the organization took an interim step and ported the source to Darwin with an X Window user interface. The next step, once the first port is successfully tested, will be to port to Apple’s Quartz graphical interface, and finally to Aqua.

To run OpenOffice, I needed to download and install an X Window system, and it just so happens there’s one available — XDarwin. Additionally, there’s an X11 window manager that works with XDarwin — OroborOSX. To install both, it was literally a matter of downloading the packages, uncompressing them with Stuffit, and double-clicking on each installation package — first XDarwin and then OroborOSX. No muss or fuss with paths or parameters or settings of any form (based on the default installation). Once the X Window environment was in place, I used the same procedure with OpenOffice. The total time to download and install all three packages was less than fifteen minutes, a shorter amount of time than it takes to install a certain other office application product. A person could get used to this.

Blue seems to be a lucky color with Unix, because Red Hat’s newest Linux installation, Red Hat Linux 8.0, now comes with a spiffy new GUI the company calls Bluecurve. Having installed Linux many times over the years, I was relieved when the installation program was able to detect and configure all of my peripherals, including my wireless mouse and keyboard. I was also very pleased when I was able to add and configure my wireless network card with just a few clicks of the mouse.

Of course, activating the wireless connection failed at first, and I’m having to do some research to find a solution, and I’ll most likely have to do some tweaking to get it to work. But, hey! It wouldn’t be Unix if all the challenges were removed, now would it?

Categories
Technology Weblogging

Comment and trackback spamming

The discussion continues on comment spamming and a couple of people have taken my initial quick fix and expanded on it nicely.

Jennifer from Scripty Goddess has taken the solution into the MT tmpl files, adding the hidden field to processing.tmpl.

Brad Choate came up with a fairly complex solution that, while not keeping a determined spammer out, would force the person to work for their spam.

Joni Electric has a good re-cap of the effort to date.

(Found through trackback, by the way.)

Categories
Diversity Technology Weblogging

Links at twenty paces

Recovered from the Wayback Machine.

Christine staged a Blog Debate, during which Ciscley commented about guys being reluctant to move to Movable Type because it’s popular. She wrote:

I think (I *know* in my personal blogging circle and I’m generalizing from there) that most of the people that are uncomfortable with the popularity of MT are guys. It’s like it’s a dirty blog word to every guy I know. They use phpWeblog (though I still have to design their layouts for them cause the interface only goes so far). They use geeklog. They’ve thought about pMachine. They’re willing to try anything and everything but MT.

Is it because so many women use and love MT? Is it because MT, if you don’t actually use it and know what a huge part of it Ben does, appears to be the creation of a woman? Is it taking something away from the all male tech industry to consider that a product inspired by or significantly designed by a woman is the best option out there?

Jonathon picked up on it, writing:

There are so many things to like about Movable Type—reliability, elegant interface, customizability, MySQL support, vibrant user community—but what could be more intriguing than Ciscley’s hypothesis of gendered MT use? Has Mena’s contribution influenced the software to the extent that it attracts a disproportionately high proportion of female users?

Christine picked up on both Ciscley’s and Jonathon’s comments, so it will be interesting to see if there is any form of debate on this.

A gender bias with Movable Type just isn’t something I’ve seen. I would imagine that there is a strong gender bias with the other weblogging tools that Ciscley has mentioned, but not with MT.

Any initial reluctance to adopt MT is based on the installation, which can be a hassle for non-techies. However, this seems to affect both men and women equally, and is really dependent on how comfortable the person is with Perl and CGI. Once installed and used, though, MT users can be fanatical in support, regardless of gender. I know — I’m a fanatical MT user.

(“Hello, my name is Burningbird, and I’m addicted to Movable Type.”)

Why do I like MT? Because it’s a lovely, lovely piece of software. Powerful enough for all my needs, with hooks that let me tweak when I wish, and now it has the MySQL backend, which, for a data person such as myself, is pure heaven, with little chocolate sprinkles on top.

Hmmm. Come to think of it, if Movable Type is an example of software resulting from a paired man/woman collaboration team, then I think it’s time for the software industry to look at its development practices.

(Notice how I didn’t once use “—ism”? I’m getting better. And Christine, I have Trackback enabled. Do I get a cookie? Sorry for the double ping, but MT went crazy — it pinged you three times, blo.gs, weblogs.com, and my mother. It also scritched my kitty underneath her neck, and washed the dishes in passing.)

Categories
Technology Weblogging

Comment spam quick fix

Recovered from the Wayback Machine.

Both Sam Ruby and Phil Ringnalda had good advice — don’t spend a lot of time developing a solution to the comment spam problem. Whatever I do within the form, it’s a relatively simple matter for a spammer to read any form value and duplicate it in his spam blast.

I appreciate both their help in gently pointing out that I was spinning my wheels (but I have to get practice for ice driving).

So, here’s a quick fix — it will keep out the lightweights, at least. It’s a start while other efforts are underway.

This approach requires modifying the following MT templates:

Individual data entry
Comment Listing Template
Comment Preview Template
Comment Error Page

You’ll be adding the following field, on the line before the </form> tag:

<input type="hidden" name="snoop" value="goaway" />

You can change both the name and the value field, as long as you’re consistent with the name throughout the templates and the code.

Next, open your mt-comments.cgi (or mt-comments.pl) file and add the following code just after the “use strict;” line:

use CGI qw(:standard);

# refuse comment posts that arrive without our secret form field
if ($ENV{'REQUEST_METHOD'} eq "POST") {
    my $data = param('snoop');
    die unless ($data);
}

Most everyone should have the CGI.pm Perl module installed. Make sure to change ‘snoop’ to whatever your little secret field is (let’s all use different fields, and make the spammer’s job a tiny bit harder).

That’s it.

What happens is that when you post a comment, the code checks for a form field of “snoop”. If it doesn’t find it, it dies. Nothing fancy at all. This will show in your error log or web log file as a premature end to the script. It doesn’t prevent others from using the application, and doesn’t crash anything.

Again, this isn’t fancy, but it’s a start. Holler if you have questions. If you’re uncomfortable modifying mt-comments, let me know and I’ll help you. If you have a better solution, or see problems with mine, please let me know.

Again — thanks to Phil and Sam for advice, help, suggestions.

Update:

Mark has put together a nice re-cap on the whole comment spamming thing. What I just created is a ‘club’. I’m going in for an interview tomorrow, and when they ask me what the last application I worked on was, I’ll answer “A club”.

Categories
Technology Weblogging

Comment spam problem continued

Recovered from the Wayback Machine.

In regards to the comment spam problem mentioned earlier, one idea kicked around was checking the http_referer to make sure that the comment post came from the same server as the form.

We talked about the possibility of empty http_referers — not all browsers send a referrer, and proxy servers can strip the referrer out. The solution would be to allow empty referrers in addition to referrers from the server. Unfortunately, though, allowing for empty http_referers will also let in the comment spammer.

Allowing empty referrers opens the door to the spammer because the comment spamming code would invoke my comment code directly, rather than through a link from an HTML page; in that case, http_referer would be empty.

I could become more restrictive and disallow empty referrers, but if I do, I won’t be letting some of you through (as you’ve been kind enough to let me know via email tonight).

Sam Ruby had some good ideas, such as putting hidden form fields into the comment forms and testing for these; this will be a next step. It means adding form fields to all templates related to comments, and then adding code to mt-comments.cgi. Doable, and many appreciations to Mr. Ruby for excellent ideas. (If you don’t know Sam, he works on some weird sounding things such as “Comanche” and “SOUP” — stuff like that.)

A really nifty and difficult-to-crack approach (IMO) would be to take the person’s login name and the comment id for each comment, and use these to create an encrypted value. Stuff this into an HTML form field. When the form is processed, test to see if the encrypted value checks out. If the person’s login name isn’t exposed, which it should NEVER be, it becomes a ‘key’ for the encryption, easily accessible to the MT program and the MT user, NOT to the spammer. And the different comment identifiers would ensure that the encrypted values change with each comment.
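A sketch of the idea in shell terms, using an MD5 hash in place of real encryption; the login name and comment id here are made up, and MT’s internals would supply the real ones:

```shell
login="mtadmin"       # never exposed to the page; known only server-side
comment_id="1042"     # changes with every comment

# hash the two together to get a per-comment token for the hidden field
token=$(printf '%s:%s' "$login" "$comment_id" | md5sum | awk '{print $1}')
echo "$token"

# when the form comes back, recompute the token the same way and
# reject the comment if the submitted value doesn't match
```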

Only problem with this solution is it would require cracking into the MT internal code.

Question: what do you think of this as a solution, and is it worth the time to do it?

(However, by now, Phil or someone else of like caliber will have found and coded a solution and have it halfway distributed throughout the world. I should just leave these little challenges to others — what do I know?)