Categories
Burningbird Technology Web

A major site redesign

I’ve finished the re-organization of my web site, though a few odds and ends remain. I still have two major changes featuring SVG and RDFa that I need to incorporate, but the structure and web site designs are finished.

Thanks to Drupal’s non-aggressive use of .htaccess, I’ve been able to create a top-level Drupal installation to act as “feeder” to all of the sub-sites. I tried this once before with WordPress, but the .htaccess entries necessary for that CMS made it impossible to have the sub-sites, much less static pages in sub-directories.
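
For the curious, Drupal’s stock rewrite rules (roughly the following, from a Drupal 6 era .htaccess — check your own copy) only hand a request over to Drupal when no real file or directory matches, which is what leaves the sub-site installations and static sub-directories alone:

# Hand the request to Drupal's index.php only when no real file
# or directory matches the requested path.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]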

Rather than use Planet or Venus software to aggregate feed entries for all of my sites, I’m manually creating an excerpt describing a new entry, and posting it at Burningbird, with a link back to the full article. I also keep a listing of the last few months’ stories for each sub-site in the sidebar, in addition to a random display of images.

There is no longer any commenting directly on a story. One of the drawbacks of XHTML and an unforgiving browser such as Firefox is that a small error is enough to render a page useless. I incorporate Drupal modules to protect comments, but I also allow people to enter some markup. This combination handles most accidentally bad markup, but not all of it. And it doesn’t protect against those determined to inject invalid markup. The only way to eliminate all problems is to not allow any markup, which I find too restrictive.

Comments are, however, supported at the Burningbird main site. To allow for discussion on a story, I’ve embedded a link in every story that leads back to the topmost Burningbird entry, where people can comment. Now, in those infrequent times when a comment causes a problem with a page, the story is still accessible. And there is a single Comment RSS feed that now encompasses all site comments.

The approach may not be ideal, but commentary is now splintered across weblogs, Twitter, and whatnot anyway—what’s another link among friends?

I call my web site design “Silhouette” and will release it as a Drupal theme as soon as it’s fully tested. It’s a very simple two-column design, with the sidebar column either to the right (standard) or easily adjusted to fall to the left. It’s an accessible design, with only the top navigation bar coming between the top of the page and the first story. It is valid markup, as is, with the XHTML+RDFa Doctype, because I’ve embedded RDFa into the design. It is not valid, however, when you also add SVG silhouettes, as I do with all but the topmost site.

The design is also valid XHTML5, except for a hard-coded meta element that was added to Drupal because of security issues. I don’t serve the pages up as HTML5, though, because the RDFa Doctype triggers certain behaviors in RDFa tools. I’m also not using any of the new HTML5 structural elements.

The site design is plain, but it suits me, and that’s what matters. The content is legible, easy to locate, and easy to navigate, and that’s my second criterion. I will be adding some accessibility improvements over the next few months, but they won’t impact the overall design.

What differs between all of the sites is the header graphic and the SVG silhouettes, which I change to suit the topic or mood of each site. The silhouettes were a lot of fun, but they aren’t essential, and you won’t be able to see them if you use a browser that doesn’t support inline SVG. Which means you IE users will need to use another browser to see the images.

I also incorporate some newer CSS features, including subtle use of text-shadow with headers (to add richness to the stark use of black text on pastel graphics) and background-color: rgba() for semi-transparent backgrounds. The effects aren’t visible in browsers that don’t yet support these newer CSS styles, but the loss of the effect does not impact access to the material.

Now, for some implementation basics:

  • I manually reviewed all my old stories (from the last 8 years), and added 410 status codes for those I decided to permanently remove.*
  • For the older stories I kept, I fixed up the markup and links, and added them as new Drupal entries in the appropriate sub-site. I changed the dates to match the older entries, and then added a redirect from the old URL to the new (a sketch of both the redirects and the 410s follows this list).
  • By using one design for all of the sites, when I make a change for one, it’s a snap to make the change for all. The only thing that differs is the inline SVG in the page.tpl.php page, and the background.png image used for the header bar.
  • I use the same set of Drupal modules at all sub-sites, which again makes it very easy to make updates. I can update all seven of my Drupal sites (including my restricted-access book site) with a new Drupal release in less than ten minutes.
  • I use the Drupal Aggregator module to aggregate site entries in the Burningbird sidebar.
  • I manually created menu entries for the sub-site major topic entries in Burningbird. I also created views to display terms and stories by vocabulary, which I use in all of my sub-sites.
  • The site design incorporates a footer that expands the Primary navigation menu to show the secondary topic entries. I’ve also added back in a monthly archive, as well as recent-writings links, to enable easier access to site contents.
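
For those wondering how the 410s and redirects fit together, here’s a minimal PHP sketch. The file name and path map are hypothetical, not this site’s actual setup; a redirect module or .htaccess rules would do the same job:

<?php
// retired.php (hypothetical): route requests for old story URLs.
// Kept stories get a permanent redirect to their new Drupal home;
// removed stories get a 410 Gone. The map below is illustrative.
$moved = array(
  '/archives/2002/some-story.htm' => 'http://burningbird.net/node/123',
);
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
if (isset($moved[$path])) {
  header('Location: ' . $moved[$path], TRUE, 301); // 301 Moved Permanently
}
else {
  header($_SERVER['SERVER_PROTOCOL'] . ' 410 Gone'); // permanently removed
}
exit;
?>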

The expanded primary menu footer was simple, using Drupal’s API:


<?php
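// Build the full tree for the 'primary-links' menu, then render it as HTML.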
$tree = menu_tree_all_data('primary-links');
print menu_tree_output($tree);
?>

To implement the “Comment on this story” link for each story, I installed the Content Construction Kit (CCK) with the additional Link module, and expanded the story content type with the new “comment on this story” field. When I add an entry, I type in the URL for the comment post at Burningbird, which is automatically linked with the text “Comment on this story” as the title.
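
On the theme side, printing the field is nearly a one-liner. This is a sketch for a Drupal 6 node template; the field name field_comment_link is a placeholder, not necessarily what the field is actually called:

<?php
// node.tpl.php fragment: link back to the Burningbird comment post,
// assuming a CCK Link field named field_comment_link.
if (!empty($node->field_comment_link[0]['url'])) {
  print l('Comment on this story', $node->field_comment_link[0]['url']);
}
?>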

I manually manage the link from the Burningbird site to the sub-site writing, both because the text and circumstance of the link differ, and because the CCK field isn’t included as part of the feed. I may play around with automating this process, but I don’t plan on writing entries so frequently that I’ll find this workflow a burden.

The images were tricky. I have implemented both the piclens and mediaRSS Drupal modules, and if you access any of my image galleries with an application such as Cooliris, you’ll get that wonderful image-management capability. (I wish more people would use this functionality for their image libraries.)

I also display sub-site specific random images within the sub-site sidebars, but I wanted the additional capability to display random images from across all of the sites in the topmost Burningbird sidebar.

To get this cross-site functionality, I installed Gallery2 at http://burningbird.net/gallery2, and synced it with the images from all of my sub-sites. I then installed the Gallery2 Drupal module at Burningbird (which you can view directly) and used Gallery2 plug-ins to provide random images within the Drupal sidebar blocks.

Drupal prevented direct access from Gallery2 to the image directories, but it was a simple matter to copy the images and do a bulk upload. When I add a new image, I’ll pull it directly from the Drupal gallery page using Gallery2’s image-extraction functionality. Again, I don’t add so many images that I find this workflow onerous, but if others have implemented a different approach, I’d enjoy hearing about alternatives.

One problem that arose is that none of the Gallery2 themes is XHTML-compliant, because of HTML entity use. All I can say is: folks, please stop using &nbsp;. Use &#160; instead, if you’re really, really generating XHTML, and not just HTML pretending to be XHTML.
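
If you can’t fix the theme itself, one possible stopgap (a hypothetical sketch, not something Gallery2 provides) is to swap the named entities for their numeric equivalents before output:

<?php
// Hypothetical helper: replace named entities that bare XML parsers
// don't know with their numeric equivalents.
function fix_entities($output) {
  return str_replace(
    array('&nbsp;', '&copy;', '&mdash;'),
    array('&#160;', '&#169;', '&#8212;'),
    $output
  );
}
?>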

To fix the non-compliant XHTML problem, I copied my site design to a separate theme, and removed from this “Silhouette for HTML” theme the PHP that serves the page up as XHTML to XHTML-capable browsers. The Gallery2 Drupal modules allow you to specify a different theme for the Gallery2 pages, so I use the new HTMLated theme there and my XHTML-compliant theme for the rest of the site. Over time, I can probably add conditional tests to my main theme for the presence of Gallery blocks, but what I have is simple and works for now.
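
For reference, that kind of content negotiation looks roughly like this (a simplified sketch; a production check would also weigh the Accept header’s q values):

<?php
// Serve application/xhtml+xml only to browsers that say they accept it;
// everyone else (notably IE) gets plain text/html.
if (isset($_SERVER['HTTP_ACCEPT']) &&
    strpos($_SERVER['HTTP_ACCEPT'], 'application/xhtml+xml') !== FALSE) {
  header('Content-Type: application/xhtml+xml; charset=utf-8');
}
else {
  header('Content-Type: text/html; charset=utf-8');
}
?>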

Lastly, I redirected the old Planet/Venus-based feed locations to the Burningbird feed. You can still access full feeds from all of my sub-sites, and get full entries for all but the larger stories and books, but the entries at Burningbird will be excerpts, except for Burningbird-only posts. Speaking of which, all of my smaller status updates and general chit-chat will be made directly at Burningbird—I’m leaving the sub-sites for longer, more in-depth, “stand-alone” writings.

As I mentioned earlier, I still have some work with SVG and RDFa to finish before I’m completely done with the redesign. I also have some additional tweaks to make to the existing infrastructure. For instance, I have custom 404, 403, and 410 error pages, but Drupal overrides the 403 and 404 pages. You can redirect the error handling to specific pages, but only to pages within the Drupal system, not to static pages. However, I’m not too worried about this issue, as I’m finding that there’s typically a Drupal module for any problem, just waiting to be discovered.

I know I must come across as a Drupal fangirl in this writing, but after using the application for over a year, and especially after this site redesign, I have found that no other piece of software matches my needs so well as Drupal. It’s not perfect software—there is no such thing as perfect software—but it works for me.

* This process convinced me to switch fully from Firefox to Safari. It was so much simpler to fix pages with XHTML errors using Safari than with Firefox’s overly aggressive XHTML error handling.

Categories
Web

Cite not link

I do have considerable sympathy for Thomas Crampton [1], who discovered that all of his stories at the International Herald Tribune have been pulled from the web because of a merger with the New York Times.

So, what did the NY Times do to merge these sites?

They killed the IHT and erased the archives.

1- Every one of the links ever made to IHT stories now points back to the generic NY Times global front page.

2- Even when I go to the NY Times global page, I cannot find my articles. In other words, my entire journalistic career at the IHT – from war zones to SARS wards – has been erased.

At the same time, though, I don’t have as much sympathy for Wikipedia losing its links to the same stories, as detailed by Crampton [2] in a second posting.

The issue: Wikipedia – one of the highest traffic websites on the Internet – makes reference to a large number of IHT stories, but those links are now all dead. They need to delete them all and find new references or use another solution.

As I wrote in comments at Teleread:

I do have sympathy, I know I would be frustrated if my stories disappeared from the web, but at the same time, there is a certain level of karma about all of this.

How many times have webloggers chortled at the closure of another newspaper? How many times have webloggers gloated about how we will win over Big Media?

The thing is, when Big Media is gone, who will we quote? Who will we link? Where will the underlying credibility for our stories be found?

Isn’t this exactly what webloggers have wanted all along?

I have sympathy for a writer losing his work, though I assume he kept copies of his writings. If they can’t be found in hard copies of the newspaper, then I’m assuming the paper is releasing its copyright on the items, and that Mr. Crampton will be able to re-publish them on his own. That’s the agreement I have with O’Reilly: when it no longer actively publishes one of my works, the copyright is returned to me. In addition, with some of the books, we have a mutual agreement that when the book is no longer published, the work will be released into the public domain.

I don’t have sympathy for Wikipedia, though, because the way many citations are made at the site doesn’t follow Wikipedia’s citation policy. Links are a lazy form of citation. The relevant passage should be quoted in the Wikipedia article, matched with a citation listing the author, title of the work, publication, and publication date—not a quick link to an external site over which Wikipedia has no control.

I’m currently fixing one of my stories, Tyson Valley, a Lone Elk, and the Bomb, because the original material was moved without redirection. As I fix the article, I’m making copies of all of the material for my own reference. Saving the web page is no different from making a photocopy of an article in the days before the web.

In addition, I will be adding a formal citation for the source, as well as the link, so if the article moves again, whoever reads my story will know how to search for the article’s new location. At a minimum, they’ll know where the article was originally found.

I’m also repackaging the public domain writing and images for serving at my site, again with a text citation expressing appreciation to the site that originally published the images.

By using this approach, the stories I consider “timeless”, in whatever sense that word can have in this ephemeral environment, will not require my constant intervention.

Authors posting to Wikipedia should be doing the same, and this policy should be enforced: provide a direct quote of relevant material (allowed under Fair Use), and provide a formal citation, in addition to the link. Or perhaps, instead of the link. Because when the newspapers disappear, they’ll have no reason to keep the old archives. No reason at all. And then, where will Wikipedia be?

[1] Crampton, Thomas, “Reporter to NY Times Publisher: You Erased My Career”, thomascrampton.com. May 8, 2009.
[2] Crampton, Thomas, “Wikipedia Grappling with Deletion of IHT.com”, thomascrampton.com. May 8, 2009.

Categories
Social Media Web

My abbreviated self

I discovered that a URL has to be less than 30 characters, or Twitter automatically creates a Tinyurl version of the URL. This happens even if the entire message is less than 140 characters.

There’s no way I can create URLs that are less than 30 characters and still maintain my subdomain designations. Therefore I’m not going to try, and will most likely be removing any short-URL stuff here. With all the recent “one million followers” foo-flah, including the breathless claim that one person achieving one million Twitter followers is equivalent, in scientific importance, to space flight and landing a man on the moon, I would just as soon stick with stodgy old weblogging.

Weblogging: where no one really knows how many people are following you, most people don’t care, we can actually communicate complete thoughts, and we can do what we want with our URLs.


From today’s WhatWG IRC:

hsivonen: I can imagine all sorts of blog posts about evil HTML5 raining on the rev=canonical backpattery parade

svl: Mostly (from what I’ve seen) it’s been “let’s all use this en-masse, so html5 will be forced to include this”.

Of all the items in contention with the HTML5 working group, the use of rev=canonical is not high on my list. Why? Because there’s no real argument for its use, there are a lot of good arguments against its use, and it’s just as easy to use something else.

This all came about because Twitter was built first and designed later. One of the difficulties in keeping a message to 140 characters is that URLs can take 140 characters, and more. Yet there is no URL-shortening mechanism built into Twitter. Not only that, but Twitter itself uses another, third-party service: tinyurl.com.

Now, all of a sudden, people are in a dead cold panic about using a service that may go away, leaving link rot in Twitter archives. I hate to break it to the folks so worried, but it will probably be a cold day in hell before anyone digs into Twitter archives. Most of us can’t keep up with the stream of tweets we get today, much less worry about yesterday’s or last week’s.

But there are a lot of other problems associated with using a third-party service. Problems such as the recent Twitter follies, otherwise known as Twitter Been Hacked, which ended up being a not particularly fun Easter egg this weekend. When you click on a Tinyurl link, you don’t know what you’re going to get, where you’re going, or, worse, what will happen to you when you get there. Even Kierkegaard would have a problem with this leap of faith.

There’s also an issue with search-engine link credit, not to mention that with everyone using different URL-shortening services, you can’t tell whether someone has referenced one of your posts in Twitter or not. This didn’t use to be a problem, but since everyone does most of their linking in Twitter now, it gets mighty quiet in these here parts. You might think, sigh, no one likes what you’re doing, only to find out that a bunch of people have come to your party, but the party’s been moved to a different address.

So I think we can agree that third-party URL services may not be the best of ideas. I, personally, would like us to provide our own URL shorteners. Not only would we get the search-engine credit, it would encourage use of the same URL in Twitter, which might help us find the party we lost. Plus, wouldn’t you rather click a link that has burningbird.net in it than one that has dfse.com? Implementing our own short URLs should be simple in this day and age of content management systems. All we need to do is agree on a form.

Agree? Did someone say, agree?

As I wrote earlier, I’ve heard too many good arguments against rev=canonical, including the fact that it’s too easy to make a typo and write rev=canonical when we mean rel=canonical, and vice versa. In addition, rel is in HTML5, rev is not, and I’m not going to hammer a stake in the ground over rel/rev. I’m keeping my stakes for things that are important to me.

Note to HTML5 WG: she has a hammer. And stakes.

As for what attribute value to use with rel, whether it’s shortlink or shorturl or just plain short, I don’t care. I took about five minutes to implement shortlink in this space. I implemented shortlink because it’s the option currently listed on the rel attribute wiki page; it would only take about a minute to change to shorturl. I even added the short link to the bottom of my posts, where it can be copied manually and pasted into a Twitter post, if you’re so inclined. See, I don’t have to wait for anyone’s approval; I am empowered by Happy Typing Fingers.
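
For anyone wanting to do the same, emitting the link element from a Drupal 6 template or module can be as simple as this sketch (the /b/ short-path pattern is a made-up example, not this site’s actual scheme):

<?php
// Add a rel="shortlink" link element to the page head for the current
// node. The /b/[nid] short path is a hypothetical pattern.
$short = 'http://burningbird.net/b/' . $node->nid;
drupal_set_html_head('<link rel="shortlink" href="' . $short . '" />');
?>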

Regardless of what we do, I agree with Aristotle: this is way too much effort over something that should be easy to decide and quick to implement, leaving us time to worry about the things that are important in HTML5. Things such as SVG, RDFa, and accessibility.


And that’s my 4424-character take on tiny URLs.


Another reason tiny URLs are getting attention is the evil new DiggBar. Goodness gracious, people, why on earth do you use crap like this?

Categories
Web

When can you use…now if you choose

Continuing the theme of moving forward in web design…

  • Several people have linked to or otherwise noted Alexis Deveria’s excellent When Can I use… application. You can select from various options, including specification, browser, date, and so on, and you’ll get recommendations about what you can and cannot use. I tried it by selecting all browsers but IE, specifications in recommendation status, and functionality currently implemented or to be implemented in the very near future. The application recommended SVG, MathML, and serving web pages up as application/xhtml+xml. When I added candidate recommendations, all of the functionality I currently use was listed.
  • Robert Nyman says Stop developing for Internet Explorer 6.0, which echoes our effort to generate an IE6 End of Life campaign last year. Robert is hearing about the same concerns I heard, along the lines of “But the customer wants…”. This is a far cry from the designer community of a decade ago, which asked for, nay demanded, adherence to web standards. Some would say it is a sign of the times, but as my readings on the Great Depression have shown, it is exactly during times like these that great changes come about.

    If we extrapolated the continuing active support for IE6 to other industries, our cars would only get 5 miles to the gallon, our music would only come from stores on flat discs, books would only be available on paper, and we’d all still be developing CGI applications in Perl.

  • Smashing Magazine has a nice writeup on PHP IDEs, including comparisons. I must admit to being old-fashioned and still using vi/vim. Vi rules.
  • Michael Bernstein has an ambitious plan to publish a new web app every Wednesday. I’m currently playing with LinQR. I’m not sure about creating a new web application every week, but I am thinking of creating some form of scheduled output, to add structure to my life.
  • Speaking of structure, I received a suggestion this week to try the CMS Joomla, and am thinking of starting another subdomain for that purpose. I’m finding, though, that supporting multiple CMS applications is becoming an increasingly complex challenge. For example, though WordPress, Drupal, and Joomla are all PHP-based, they have significantly different template systems and extension frameworks. I’m having the devil of a time wrapping my mind around the WordPress way of doing things as compared to Drupal’s. Then there are the upgrades: I just finished upgrades for Drupal and its modules, and now WordPress is at 2.7.1.

    What I think I’ll create is a shell script that backs up all of my sites, databases and files; downloads the latest Drupal, WordPress, and Joomla (if I do try the application), plus whatever other applications I use; and then upgrades each, even if the software hasn’t changed (see the sketch after this list). Then once a week I could do a blanket run at my entire site. There shouldn’t be broken bits, but if there are, well, then I’ll have a better idea of the robustness of the applications. Running an upgrade on a site with the same version of software currently installed should result in no change to the application.

    Since today is Charles Darwin’s birthday, call the approach CMS natural selection. No, not survival of the fittest, which really isn’t an evolutionary concept. My script process will naturally select for extinction those applications that fail.
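
The backup half of such a script could as easily be PHP as bash; here’s a hypothetical CLI sketch, with site paths, database names, and backup locations all placeholders:

<?php
// cms-backup.php (hypothetical): archive each site's files and dump its
// database before the blanket upgrade run.
$sites = array(
  'burningbird' => array('path' => '/var/www/burningbird', 'db' => 'bb_drupal'),
  'realtech'    => array('path' => '/var/www/realtech',    'db' => 'rt_drupal'),
);
$stamp = date('Ymd');
foreach ($sites as $name => $site) {
  // Archive the site's files.
  shell_exec("tar -czf /backups/{$name}-{$stamp}.tar.gz " . escapeshellarg($site['path']));
  // Dump the database (assumes credentials in ~/.my.cnf).
  shell_exec("mysqldump {$site['db']} > /backups/{$name}-{$stamp}.sql");
}
?>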

Speaking of which, Happy Birthday, Charles; Happy Birthday, Abe.

Categories
SVG Web

Gracefully upgrading

I am reminded in comments of Steve Champeon’s progressive enhancement, which I actually did cover in my book, “Adding Ajax”. My apologies to Steve for seeming to subvert his subversion of “all browsers look the same”. I tend to think of Steve’s progressive enhancement in terms of JavaScript, but it is focused on design, too. And, I am embarrassed to admit, I forgot about the concept when I started to write up what I’ve done with my site designs. Blame it on enthusiasm, or advancing age—take your pick.

However, if the concept is so popular with web designers, I have to wonder why, every time I mention the use of SVG in web design, I’m met with “Oh, but not every browser supports SVG”. Perhaps IE has become, over time, a handy excuse for not trying something new.

Regardless, the idea of starting plain and upgrading gracefully did originate with Steve.