Categories
Technology

Self-documenting technology

Danny Ayers points to a Jon Udell article about dynamic documentation managed by the folks who use a product, rather than relying on stuffy old material provided by the organization making the software. In it, Jon writes:

Collectively, we users know a lot more about products than vendors do. We eventually stumble across every undocumented feature or quirk. We like to maintain the health of the products we’ve bought and we’re happy to discuss how to do that with other users.

The problem is that vendors, for the most part, do a lousy job of encouraging and organizing those discussions. Here’s an experiment I’d like to see someone try: Start a Wikipedia page for your product. Populate it with basic factual information, point users there, then step back and let the garden grow. Intervene only to repair vandalism, make corrections, and contribute useful new facts.

I had to pause when I read the words …we users know a lot more about products than vendors do. I was reminded of finding information on how to convert my Nikon 995 camera to RAW format, based on the helpful advice of just such a user, only to find warnings at the Nikon site that doing so voids the warranty, because it could have Serious Consequences for the continued Usability of the Product.

I remembered reading a weblogger, brand spanking new to SQL, telling everyone how they could fix a problem in their weblog just by running a certain SQL command, and then frantically sending that person an email saying that if their readers did, there was a good chance they’d lose half their data. Then I was reminded that there is nothing more dangerous than a user who just knows they have the answer, and who also has absolutely no stake in whether you break your copy of the product or not.

Still, I have been helped numerous times by other users when I get into situations not documented in the product manual, and I can agree with Jon, and with Tim Bray that it’s much easier to interactively look for help than to read through static documentation when you run into problems.

Jon Udell suggests a new documentation strategy for technology vendors; rather than continuing to publish incomplete, out-of-date, poorly written manuals, they could just set up a per-product wiki and let the customer base fill it with problems, fixes, workarounds, and tips & tricks.

Which is probably why most company-provided formal documentation isn’t focused on problem resolution as much as problem prevention. For instance, all the interactive help in the world isn’t going to help a new user set up a Movable Type weblog if they have to go query for each stage of the installation process. That’s why a company like Six Apart provides a quite nice installation guide that covers 99% of the situations most people would run into. By providing step-by-step instructions, the majority of people are able to get their weblogs up and running without too many problems.

On the other hand, OsCommerce, a heavily used free, open source product for managing ecommerce sites, has little or no formal documentation, other than that provided by users in its forums. However, other sites have sprung up providing additional documentation, including a wiki, and multiple sites that provide, ta dah!, formally written, structured documentation for how to use the product.

Why the need for the latter? Because for the most part, OsCommerce is used by people who don’t have much technical background or experience, and they are very uncomfortable without structured documentation that they can follow, step by step, on how to actually use the product. Not troubleshoot, but actually use the app.

Still, Jon and Tim aren’t recommending that companies not provide this information — they’re saying provide this (for all those who can’t connect the dots through Google, I imagine), but then also offer areas where users can add documentation and help each other.

Jon mentioned a wiki, and Danny pointed to the WordPress wiki as an example of this type of documentation project. Once upon a time, I also thought that a wiki would be a good tool to use for open documentation efforts. However, that was before I tried to get several people–non-technical people–interested in providing information at a wiki I set up. They were willing, but many were also intimidated by the environment. It was then that I realized that a wiki requires not only a great deal of knowledge about how to edit the pages, but also familiarity with the culture. In other words, wiki is for those who are wiki primed.

In fact, this has become a problem out at Wikipedia — most of the editors are technical people, and therefore the skew of information tends to be towards technical subjects. The organization has taken to actively promoting non-tech topics in hopes of attracting enough contributors to these other topics to start achieving some balance in coverage.

But then, Wikipedia has the necessary, critical element to make this work — it has achieved enough momentum to be able to direct attention to obscure topics and know that there should be enough members of the audience with knowledge of this topic, and willingness to dive into what is a fairly structured culture, to provide at least a bare minimum coverage of the topic. Most wikis will not.

That’s why when Jon proposes that vendors provide a hands-off wiki for users, and then uses Wikipedia as an example of how well a wiki can work, I winced. Too many people point to the Wikipedia as a demonstration of how a wiki can work, without realizing that the Wikipedia is unique in its use, purpose, and community. In other words, if the only wiki we can point to as a demonstration of how wikis work is Wikipedia, then perhaps what we’re finding is that wikis don’t work. Or don’t work without a great deal of organization on the part of the wiki administrators, and an already existing community of willing contributors.

Of course, technology users can be heavily motivated to support the products they use, as we’ve seen with weblogging technology. There is nothing more loyal than a weblog tool user–unless it’s a Mac user. You take your life in your hands when you take a critical bite out of Apple.

Based on the assumption of interest on the part of tech users, let’s return to the WordPress wiki that Danny pointed out. Checking out the recent changes, we find that on January 6th, a user who calls himself GooSa added a bunch of spam pages. I imagine that the pages will be removed by the time you look at this, but I copied this person’s ‘user’ page entry:

Goo Sa is an evil, evil spammer. Plus he smells like eggs.

Other than that, there isn’t that much activity on this wiki, because it’s no longer the WordPress wiki. No, that’s now the Codex wiki. As you can see in recent changes at this site, there is a great deal of work being done, as well as less spam content. Of course, I don’t believe this is linked anywhere from the main WordPress site, so it could be that the spammers haven’t found it yet.

If they do, there seems to be enough organization to help keep the site clean of the obvious spam, but what about the not so obvious destructive actions? For instance, the malicious editor who adds in a helpful tidbit that could actually cause harm to the users, but only a very experienced technical person would be able to recognize that this causes harm? (Or the user who has just enough knowledge about a topic to make them scary as hell.)

If this tip was out at a forum or email list, the user might be (should be) wary enough of the tip to perhaps get it vetted first; but this is the ‘approved’ wiki for the product, which implies trust in the material contained. Does this mean, then, that the WordPress developers vet every bit of information in the wiki? If the developers are busy providing code for the product, this is unlikely.

Of course, the thing about wikis is that they are self-correcting. However, it can take time for a correction to take place, and in the meantime, I’m fielding emails from half a dozen WordPress users about why their comments have stopped working, or what happened to their data, all because they followed information at the ‘official’ WordPress wiki.

Now, the Wikipedia doesn’t have many of these problems, because there is a good, formal procedure in place, with enough people monitoring the site, to route around damage. However, as the administrators of the site warn on a fairly regular basis — believe what you read there at your own risk. A wiki that’s ‘authorized’ by a vendor to provide documentation for a product can’t afford to be this loose in what information gets released under its corporate umbrella.

Security and validity of the data aside, wikis foster a certain form of organization that may not be comfortable for all people. The information contained tends to float about in pieces, rather than flow smoothly, as more formal documentation does. Now, this might suit many of the more technical folks who want to know what’s going on with WordPress; I have a feeling, though, that those less technical folks who read it are going to feel cut adrift at times, as they read a discrete bit of information here, and one there, but without the experience to understand how the two pieces of information are related.

Of course, again, that’s where the organizers come in, by helping to move things about and point out gaps in the flow–but at what point does it become obvious that if the organizers had just spent their time writing the documentation themselves, it would have taken less time than continuously preventing harm to the material?

Issues of wiki aside, let’s return to Jon Udell’s request for a new way of managing community documentation. He talks about not being able to find information at the vendor site to solve his problem, so he searched on Google and in user forums, and found the answer he needed. He then says something new needs to be done to enable user access to community information.

Now, go back and re-read this last paragraph a couple of times.

As Danny points out, much of this user information is already available, but what might be missing is a way of accessing it:

But wait, those discussions with considerable information is already available – the end-user’s don’t really need any extra encouragement, they’re motivated as it is. But what’s lacking is the organisation of that information. Google is a very blunt instrument. Yes, a company Wiki could act as a focus for people, but they’re still be plenty of info on mailing lists and blogs that could be far more accessible.

But as Jon points out, whether intentionally or not, Google does work, and worked for him in this context. More, what he and Tim Bray and to a lesser extent Danny are looking for is more of the type of documentation that favors the geek, when it’s the geeks who don’t need the help — they know how to dig this information out, and how to vet its usefulness as compared to its harm.

What about the non-tech? The non-geek? The very documentation that Udell and Bray scathingly reject is the very documentation most of the non-techs need, and that’s well constructed, clearly written, stable, and vetted user documentation about how to use the tools. Throw in searchable user support forums for troubleshooting, Google and blogs and online interest sites, and Babs is your aunt. So when Tim writes:

Like many great ideas, it’s obvious once you think of it. I’m quite sure it’ll happen.

I have to scratch my head in confusion, because it seems to me that the mechanisms behind this ‘new idea’ are already in place, and have been for some time. Now if we could just convince the open source community and supporters like Jon and Tim and Danny that rather than spending time creating a new style of hammer, they need to provide nails and just start hammering away, we might be set.

Still, I don’t want to completely discount Jon’s wiki suggestion: it could be humorous to see what happens out at, say, a security wiki for Windows software. Might be better than a raree show, indeed.

Categories
Technology Weblogging

Close your trackbacks

Recovered from the Wayback Machine.

After a couple of test trackbacks yesterday, I knew that today most likely we would start seeing trackback spam, and so it has proved.

I would suggest that people turn off trackback capability if they’re concerned about receiving spam, until safeguards are put on this other rather huge, gaping hole into our sites.

Update

Matt also noted the trackback problems today, and mentioned that about the only viable approach to the problem is a content-specific solution. Which means blacklists based on words.

We’ve already seen that these aren’t particularly effective. They’ve blocked legitimate comments based on fractions of words triggering the filter; they don’t stop crapflooding; they add processing burden onto already overburdened systems; and they’re too easily manipulated into filtering legitimate domains.
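The fraction-of-a-word problem is easy to demonstrate. Here’s a minimal Python sketch of a naive substring blacklist of the kind these filters use (the blacklist entries are hypothetical, for illustration only):

```python
# Naive word-fragment blacklist, of the sort used by content-based spam filters.
BLACKLIST = ["cialis", "poker"]  # hypothetical filter entries

def is_spam(text):
    """Flag a comment if any blacklisted fragment appears anywhere in it."""
    lowered = text.lower()
    return any(fragment in lowered for fragment in BLACKLIST)

# A perfectly legitimate comment is blocked, because
# "specialist" happens to contain the fragment "cialis".
print(is_spam("She is a specialist in this field."))  # True: false positive
print(is_spam("Nice post, thanks!"))                  # False
```

This is exactly how a real comment gets eaten by a filter triggered on a fraction of a word.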

Now, we’re looking at doing the same with trackback.

What’s the best approach? Well, with trackbacks, you have a lot more ability to add intelligent safeguards, because no one trackbacks anonymously. One can check back at the site to make sure the trackbacked URL is legitimate, or send an email to the trackbacker confirming the trackback. If this doesn’t defeat the trackback spammers who actually build sites with the permalinks included, then just plain moderate trackbacks.
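The link-back check can be sketched simply: fetch the page at the trackback’s claimed URL and confirm it actually links to your post. A hedged Python sketch of just the verification step (fetching is left out; the page HTML and URLs here are illustrative, not any particular tool’s API):

```python
import re

def links_back(page_html, permalink):
    """Return True if the page's HTML contains a link to our permalink.

    A trackback whose source page never links to the post is a strong
    spam signal; a genuinely citing post carries the link.
    """
    # Collect all href values from the page and compare against the permalink.
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', page_html, re.IGNORECASE)
    return any(href.rstrip('/') == permalink.rstrip('/') for href in hrefs)

page = '<p>Good piece: <a href="http://example.com/post/42">read it</a></p>'
print(links_back(page, "http://example.com/post/42"))               # True
print(links_back("<p>No links here</p>", "http://example.com/post/42"))  # False
```

Spammers who build throwaway sites containing the permalink will pass this check, which is why moderation remains the backstop.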

Unlike comments, trackbacks are removed from the flow of conversation, so if one doesn’t show up immediately, no harm.

In addition, even the most popular post doesn’t receive more than 20-30 trackbacks, at most. It’s no burden to manage the posting of trackbacks manually. And of course, trackbacks, as with comments, should have throttles in place to prevent crapflooding.

Why do we have to make things more complicated than they have to be? The more moving parts, the more we’re at the mercy of the spammers.

Trackback moderation — simple, easy, works.

Categories
Technology

Coding on the hoof

Recovered from the Wayback Machine.

This last week I’ve been heavily into development, trying to get OsCommerce to agree to operate a certain way, in addition to doing some really strange things to the version of WordPress in use at Bb (changes which will then get ported to Wordform). In fact, you may see things break here and there, off and on, as I’ve decided to do all my coding ‘live’ — directly to my working weblogs.

This is contrary to traditional development practices. Normally the rule is to code in a separate space and then roll out a nicely polished, tested, finished, fairly stable working copy. However, I thought it might be interesting for non-technologists to see what an application looks like as it undergoes changes.

I sometimes think we, the techs, hide what goes on behind the scenes too much–fostering a myth that an application is solid-state, when really it’s bits and pieces stuck together. Hopefully, we manage to stick the bits together in a way that they actually do something useful, but that’s not always the case.

It’s frustrating for users to hit a bug in software, and when they do, they wonder how the bug could have been missed, and/or why the developer isn’t just “…putting out a quick fix”. What happens, though, with many bugs, is that trying to fix the code in one spot can break it in three other places, because the code really is bits and pieces, stuck together in a way that hopefully works, but in this case, doesn’t.

It then becomes equally frustrating for the developer to try to explain to the user that there are so many moving parts to an application of any size that there is no such thing as ‘bug-free’ code. In addition, the ‘quick fix’ the user asks for could take a month of developer time because it’s connected to half a dozen other bits of code, all of which need to be changed–so they shouldn’t hold their breath.

For the next month, as I work at creating all sorts of new goodies (for WordPress, Wordform, and other weblogging tools), you can watch me break, repair, break, and then repair again my own weblog installations; all the while comfortably knowing it’s my site, my weblogs, my code that’s falling apart, not yours. Sort of like you being an observer behind one-way glass, and me being the insane patient under treatment.

Speaking of coding on the hoof, you saw it here first: iProngs and prodcasting.

Categories
Technology

Comment goodies

Another package I’m working on is a set of files that will add all my comment functionality to a WordPress 1.22 installation. This includes the functionality developed by others–live preview (from Chris Davis) and spellchecking (incorporated from Cold Forged) — in addition to my own modifications, such as being able to edit a saved comment. I’m also adding the rich HTMLEdit functionality to the edit page for the saved comments.

I am unsure about some of the other functionality, though. For instance, I won’t be able to provide my individual moderated comments because this requires changes in the admin page. That kind of functionality you’ll get from Wordform when I’m finished with the pre-alpha-beta-newbie-baby 0.0001 release, by the end of this week, or beginning of next.

But I can provide my backend comment spam prevention, which, first of all, renames the backend comment posting file to something new; it then checks to see if more than so many comments have been posted in the last hour and last day (limits that can be modified to your own preferences). If so, throttles kick in and the comments are rejected. This will prevent the “10,000 comments, all at once” problem that has plagued Movable Type so severely.
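The throttle logic itself is simple. Here’s a hedged Python sketch of the idea–the limits and the function shape are illustrative, not the actual code shipped in the package:

```python
import time

HOURLY_LIMIT = 20   # hypothetical, configurable limits
DAILY_LIMIT = 100

def throttle_exceeded(comment_times, now=None):
    """Reject new comments when recent volume exceeds the limits.

    comment_times holds Unix timestamps of previously posted comments.
    """
    if now is None:
        now = time.time()
    last_hour = sum(1 for t in comment_times if now - t <= 3600)
    last_day = sum(1 for t in comment_times if now - t <= 86400)
    return last_hour >= HOURLY_LIMIT or last_day >= DAILY_LIMIT

# A crapflood: 50 comments in the last minute trips the hourly limit.
now = time.time()
flood = [now - i for i in range(50)]
print(throttle_exceeded(flood, now))       # True: reject
print(throttle_exceeded([now - 7200], now))  # False: one comment, two hours ago
```

The point is that a flood is cheap to detect with two counts, no word filtering required.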

In addition, I have functionality that will test the age of the post, and if it’s over so many days old (again configurable), it will put the next comment into moderation, and then close the post.
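The age test reduces to a single comparison. A minimal Python sketch, with an assumed 30-day cutoff standing in for the configurable value:

```python
from datetime import datetime, timedelta

COMMENT_AGE_LIMIT_DAYS = 30  # hypothetical cutoff; configurable in practice

def comment_policy(post_date, now):
    """Decide how to treat a new comment based on the post's age.

    Old posts rarely draw legitimate comments but attract spam, so past
    the cutoff the next comment is moderated and the post is closed.
    """
    if now - post_date > timedelta(days=COMMENT_AGE_LIMIT_DAYS):
        return "moderate-and-close"
    return "accept"

now = datetime(2005, 1, 10)
print(comment_policy(datetime(2004, 11, 1), now))  # moderate-and-close
print(comment_policy(datetime(2005, 1, 8), now))   # accept
```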

With both of these controls in place, I have very little problem with spam. Well, other than the new breed of spammer that is now running mock weblogs, with hidden links to porn sites and making individual comments, just like a real weblogger.

Would this backend protection be of use, and should I add it to the package?

The comments package is for WordPress 1.22 — the new Floating Clouds theme for 1.3 will have some of this included, automatically. It should be easily configurable, drop in and play, though it will require replacing your existing wp-comments.php page.

Categories
Technology

Packaged goodies

Per long overdue requests from several people, I’ve finished a WordPress 1.22 template featuring the ‘floating clouds’ design. Additionally, I created a separate package that just provides the files necessary to do the random background image. You can see the basic floating clouds design at the development site, at http://word122.forpoets.com. I’m not linking this directly, as I will be blowing this site away when I’ve finished all my development. You can copy the complete template, or just the background image portion.

To install the complete template, make a backup of your index.php and wp-comments.php files. Then download and unzip the gzip-tar file; no worries if you’re not into the Unix thing – Mac’s Stuffit and WinZip can handle the file format. Just make sure you save the file with Firefox, rather than having the browser open it directly – this can be troublesome at times.

From the material I’ve provided, copy the files index.php and wp-comments.php, in addition to the ‘look’ subdirectory, to your main WordPress 1.22 directory. You’ll also want to copy the plugin file, recent-comments.php, to your WordPress plugin directory (it should be wp-content/plugins off the main WordPress installation).

Once the files are in place, go into your admin, and activate the plugin, “Get Recent Comments”. This provides the functionality for getting recent comments in the sidebar, which is not provided in WordPress 1.22.

(If you haven’t upgraded to 1.22, you should do this first — there are security fixes in 1.22 you’ll need.)

After that, open the main index.php file in your browser, and you should have a site that looks like the one mentioned earlier. You can then add items to the sidebar, remove items, and modify the colors and look as you want.

If you’re just interested in the background image functionality, access this file instead and again unzip it, either to your server or your local PC. The file contains several images, which I’ve provided just as test images until you get the background switcher going. After that, you can replace the images with your own. Just make sure to update the photos.txt file to reference your images, not the ones provided.

For each page you want to use the dynamic background, add a link to the clouds.php file, using the following, but changing it for your domain and URL:

<link rel="stylesheet" media="screen" href="http://weblog.burningbird.net/look/clouds.php" type="text/css" />

This link should follow any other stylesheet you’re using for your site. When you next access your site, you should start to see the dynamic imaging take effect. If not, check to make sure that the photos.txt and images are in place, and the image files are named correctly in the file.

If you want the image to appear somewhere other than the upper left corner, adjust the CSS in the clouds.php file to whatever you prefer — lower right, upper middle, whatever. Totally up to you.
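For the curious, the idea behind a file like clouds.php can be sketched in a few lines: read the image names from photos.txt, pick one at random, and emit a CSS rule using it as the background. This Python version is illustrative only–the real file is PHP, and the URL and filenames here are placeholders:

```python
import random

def background_css(photo_list_text, base_url):
    """Pick a random image name from the photos.txt contents and emit
    a CSS rule positioning it as the page background."""
    photos = [line.strip() for line in photo_list_text.splitlines() if line.strip()]
    choice = random.choice(photos)
    return (
        "body {\n"
        f"  background-image: url({base_url}/{choice});\n"
        "  background-repeat: no-repeat;\n"
        "  background-position: top left;\n"
        "}"
    )

photos_txt = "clouds1.jpg\nclouds2.jpg\nclouds3.jpg\n"
print(background_css(photos_txt, "http://example.com/look"))
```

Changing where the image appears is just a matter of editing the background-position value, which is all the CSS adjustment mentioned above amounts to.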

I achieve the effect I do with my site by adding a vignette to a photograph, in addition to adding some transparency. When using Photoshop, this is achieved by using the Elliptical Marquee tool to create an oval selection on the photo, and then choosing Select and Feather to ‘smudge’ the edges. I cut the image and create a new one, with the appropriate background color. I copy the cut image from the original photo, and then adjust the transparency of the pasted image.

However, you don’t have to have Photoshop for this. There are several free and shareware applications that will allow you to add vignetting and transparency to a photo.

Stay tuned for a 1.3 theme based on Floating Clouds. Note that this design will validate as transitional XHTML, as long as the text in the posts is valid.