Documents Political Web

Eclectically yours #1

Once Google Reader bit the dust I made my move to Feedly, and I’m quite happy with the change. I especially like the search feature incorporated in the Pro version of Feedly. Since I follow several court cases, and the only “notification” the federal PACER system provides is an RSS feed of every court docket entry, being able to search on key terms ensures I don’t miss a filing.

Speaking of Feedly…

Food Safety News reports that a coalition of consumer groups interested in food safety is gunning for two amendments to the House Farm Bill. The one I’m most interested in is the infamous Steve King amendment titled the “Protect Interstate Commerce Act”. This amendment would start a race to the bottom when it comes to animal welfare, food quality, and food safety laws. The King amendment would essentially allow one state’s agricultural law to override another state’s more restrictive law. In other words, King wants to force Iowa’s crappy agricultural laws onto the rest of the country.

It’s one of the worst amendments attached to any bill in modern times, from a man who is infamous for bad legislation focused on supporting his big agribusiness contributors and little else. What’s surprising is how many Tea Party Congressional members voted for the amendment, as these supposedly “states’ rights” types are voting for a bill that undermines states’ rights.

Remember pink slime? There’s a hearing in December related to a motion to dismiss by ABC News and the other defendants. The story contains a link to a copy of the motion to dismiss, but I couldn’t find one for the memorandum, which is the interesting part. However, I’m assuming it’s similar (if not identical) to the one filed with a similar motion in the federal court. Food Liability Law Blog provided a copy of this document. BPI’s response at the time was to refer to its memorandum in support of its motion to remand back to the South Dakota state court.

The pink slime case started in South Dakota, moved to the federal court system, and then back to the state court. I hate it when a court case gets moved back to a state court, because most states don’t have an easily accessible document system. PACER is pricey, but at least you can easily access most documents.

Speaking of documents, California’s effort to get a case management system online has failed, and now the tech companies are circling, like vultures over a particularly juicy carcass, competing for new contracts to build a system.

They are scrambling for a mother lode of multimillion-dollar contracts for software and licensing, vast additional sums for upkeep, and the right to set up a toll booth on Court Road for 38 million people.

I’m all for private contracting of court systems, though I think the states would do better to share expertise with each other when it comes to implementation. My biggest concern, though, is system privatization: hiring companies to run the systems, as well as develop them.

Privatization of court systems is, in my opinion, wrong, wrong, wrong. Not only does privatization add to the expense of an already outrageously expensive legal system, it also inhibits easy access to the documents. Instead of paying a fee such as ten cents a document page, as you do with PACER, it may cost you several dollars to access even the smallest document.

Still, some court document access is better than nothing, which is what you have with most state courts.

Documents Web

Harvard Business School: it will cost you to link to us

Discovered via Facebook, Harvard Business School’s extraordinarily parsimonious attempts to milk every last penny out of its material:

No one ever charges people for the act of curating and directing attention. That is our job. It is our mantra. But that is precisely what HBSP are doing. To be sure, HBR looks like any other magazine and is already paid for under an institutional license by the UofT library. What HBSP are charging for are the links to those articles made by academics in the ordinary course of teaching. While our Dean was clear that he wanted us to continue to treat ideas as ideas and give the best to our students, he also wanted us now to be aware that it was costing money each time we did so. I don’t know how much it costs but my guess is that it is $5 or so per article per student. So if I have a class of 40 students and in the process of being thorough, add 10 HBR articles to my reading list or class webpage or, if the students do the same on bulletin boards for the class, we have to pay $2,000 for the privilege. I want to adhere to norms but that is enough to cause me to think twice.

Note, this isn’t direct access to the article content; this is just linking to, or citing, the article.

I guess Harvard must be hard up for cash.

Specs Web

Google’s Ta Da moments

Recovered from the Wayback Machine.

Henri Bergius wrote a piece on Google’s seeming desire to replace all web components, except HTML. Among the “new” technologies:

  • SPDY, to replace HTTP
  • Microdata, to replace a decade’s worth of semantic work with RDF and microformats
  • WebP, a new image format
  • WebM, a new video format
  • And now Dart, to replace JavaScript

However, I wouldn’t leave HTML out. The only editor for HTML5 is a Google employee, Ian Hickson, who has been working with other folks, including another Google employee, to break pieces off the HTML5 specification, take them to WHATWG space, and completely re-write them in isolation. Then, when the pieces are re-written, the editors don’t seem to want to bring them back to the W3C. (Or they have to ask Google Legal whether they can do so, completely ignoring the fact that as a W3C member, Google pledged to work with others.)

I’ve been battling this effort with the Editing API, and just recently the same thing happened with the section on dynamic markup insertion.

It’s not that people are unhappy about these non-HTML components being pulled out of the HTML5 specification. The problem is that, rather than work with the members of the HTML WG and the W3C, Google has been encouraging people to act unilaterally, aided and abetted by the HTML5 editor.

What’s ironic is that the concepts behind the Editing API and the dynamic markup insertion sections, which include innerHTML among other things, actually originated with Microsoft. I’ve been waiting for Microsoft to go, “Hold on, partner!” Apple already has. (And again).

Google has become all that is arrogant conceit. It believes it can do anything better than anyone else. It has dropped any pretense of seemingly wanting to work with others, and pretends its work is open, as long as it “gives” it all away when it’s finished.

The internet and the web were created so that people could connect; so that those who were separated physically could still work together. The roots of the web are based in openness and cooperation, not unilateral decisions that demonstrate little tolerance and no empathy. I’d rather use an imperfect technology created by a team of varied and interested people than a “perfect” work created in isolation and dumped on the world in some grand “Ta Da!” moment.

An imperfect technology can be perfected, but you can’t fix hubris.

Specs Technology Web

Why read about it when you can play?

Earlier today I got into a friendly discussion and debate on Twitter about a new web site called W3Fools. The site bills itself as a “W3Schools intervention”, and its purpose is to wake developers up to the fact that W3Schools tutorials can, and do, have errors.

The problem with a site like W3Fools, I said (using shorter words, of course, since this was Twitter), is that it focuses too much on the negative aspects of W3Schools without providing a viable alternative.

But, they said, W3Fools does provide links to other sites that provide information on HTML, CSS, or JavaScript. And, I was also told, the reason W3Schools shows up first in search results is its uncanny use of SEO.


It may be true that W3Schools makes excellent use of SEO, and it may be equally true that W3Schools commits egregious and painful errors. However, neither of these accounts for what W3Schools is doing right. If you don’t acknowledge what the site does well, you’re not going to make much headway in turning people off the site, no matter how many cleverly named sites you create.

For instance, one of the superior information sites recommended by W3Fools is the Mozilla Developer Center, or MDC as it is affectionately known. Now, I’m a big fan of MDC. I use it all the time, especially when I want to get a better idea of what Firefox supports. But look at the work you have to put in to learn about a new HTML5 element, such as hgroup:

  1. Go to main page
  2. Click on HTML5 link
  3. Search through the topics until you see one that’s titled “Sections and outlines in HTML5”, which you know you want because it mentions hgroup
  4. Have a neuron fire and realize that you can just click directly on hgroup
  5. Go to the hgroup page, past the disclaimer about what version of Firefox supports the element, looking for an example of usage
  6. Realize there is no example of how to use hgroup
  7. Go to the original Sections and Outlines in HTML5 link
  8. Go past some stuff about elephants, looking for example
  9. Go past some bullets about why all this new sectioning stuff is cool, looking for an example
  10. Break down and use your in-page search to find hgroup
  11. Finally find an example of how to use hgroup
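For what it’s worth, the example being hunted for is tiny. At the time, hgroup simply grouped a heading with its subtitle so that only the primary heading figured in the document outline. A minimal sketch, with my own sample text:

```html
<!-- hgroup, as specified in HTML5 at the time: groups a heading
     with its subtitle; only the h1 appears in the outline -->
<hgroup>
  <h1>Burningbird</h1>
  <h2>Eclectically yours</h2>
</hgroup>
```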

As compared to W3Schools:

  1. Go to main page
  2. Click on Learn HTML5 link
  3. Click on New Elements link
  4. Start to scroll down when you realize the new elements are listed along the left side
  5. Click on hgroup
  6. Look at example

One thing W3Schools does well is provide a clean, simple to navigate interface that makes it very easy to find exactly what you need with a minimum of scrolling or searching.

Returning to our comparison between W3Schools and MDC, we then search for information on SQL. Oh, wait a sec: there isn’t anything on SQL at the Mozilla site. That’s because Mozilla is primarily a browser company and is only interested in documenting browser stuff.

So then our intrepid explorer must find another site, this one providing information on SQL. And if they want to learn more about PHP, they have to find yet another site. To learn about ASP? Another site, and so on.

What W3Schools also provides is one stop shopping for the web developer. Once you’ve become familiar with the interface, and once the site has proved helpful, you’re more likely to return when you need additional information. Let’s face it: wouldn’t you rather use one site than dozens?

Screenshot of W3Schools page showing many of the topics

Let’s say, though, that you need information on CSS3. Well, you know that MDC covers CSS, so you return to the MDC site, and you click on the link that’s labeled “CSS”, and you look for something that says CSS3.

What do you mean there isn’t anything that says CSS3? What do you mean that transitions are CSS3—how am I, a CSS3 neophyte, supposed to know this?

Returning to W3Schools, I click the link in the main page that is labeled CSS3. Oh look, in the page that opens, there’s a sidebar link that’s labeled “CSS3 transitions”. And when I click that link, a page opens that provides an immediate example of using CSS3 transitions that I can try, as well as an easy to read table of browser support.
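The appeal makes sense when you see how little code such an example needs. A minimal sketch of a CSS3 transition (the selector and values here are my own, not W3Schools’):

```css
/* Fade a button's background over half a second on hover */
.demo-button {
  background-color: #ffd;
  -webkit-transition: background-color 0.5s ease; /* vendor prefixes of the era */
  -moz-transition: background-color 0.5s ease;
  -o-transition: background-color 0.5s ease;
  transition: background-color 0.5s ease;
}
.demo-button:hover {
  background-color: #fa0;
}
```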

Screenshot of W3Schools CSS3 transitions page

W3Schools doesn’t throw a lot of text before the examples, primarily because we learn web material best by example. Remember that entire generations of web developers grew up with “View Source” as our primary learning tool.

But so far, I’ve only compared W3Schools to MDC. There are other useful sites that the W3Fools site approves. So I try the “Google: HTML, CSS, and JavaScript from the ground up” web page. When it opens, I click the link labeled CSS…

And I get a video about using CSS.

A video.

Remember in junior high or high school, when your science teacher would bring out the projector and you knew you were going to get a video? Do you remember that feeling that came over you? How you kind of relaxed, because you knew the teacher wasn’t going to ask you any questions, and you didn’t have to write any notes, or even really pay attention?

I bet some of you even fell asleep during the video.

Videos are good for specific types of demonstrations—when something is complex, with many different steps, and the order of the steps and other factors have to be just so.

When it comes to CSS, HTML, and so many other web technologies, though, video is about the most passive and non-interactive learning experience there is. More importantly, if the video doesn’t have captioning, and most don’t, you’re also leaving part of your audience behind.

Now let’s return to the W3Schools site, this time looking at one of the CSS selector tutorials. The first thing you notice is that right below the example there’s a button, labeled “Try it Yourself”.

W3Schools screenshot showing the Try It button

Why read about it, when you can play?

One of the more annoying aspects of trying to learn about a specific HTML element, or a bit of CSS, is that you have to create an entire web page just to try it out. What W3Schools provides is that all important, absolutely essential, one button click to Try it out.

I’m not defending W3Schools. The site has played off the W3C title, though that doesn’t have a lot of meaning nowadays. More importantly, some of the material has errors and the site is resistant to correcting any of these errors, and this is unconscionable.

But you aren’t going to dent the popularity of the site without at least understanding why it is so popular. The W3Schools site is not popular because of SEO, and it’s not popular because of the W3 part of the name.

The W3Schools web site is so popular because it is so usable.

Burningbird Technology Web

A major site redesign

I’ve finished the re-organization of my web site, though I have odds and ends to finish up. I still have two major changes featuring SVG and RDFa that I need to incorporate, but the structure and web site designs are finished.

Thanks to Drupal’s non-aggressive use of .htaccess, I’ve been able to create a top-level Drupal installation to act as “feeder” to all of the sub-sites. I tried this once before with WordPress, but the .htaccess entries necessary for that CMS made it impossible to have the sub-sites, much less static pages in sub-directories.

Rather than use Planet or Venus software to aggregate feed entries for all of my sites, I’m manually creating an excerpt describing each new entry and posting it at Burningbird, with a link back to the full article. I also keep a listing of the last few months’ stories for each sub-site in the sidebar, in addition to a random display of images.

There is no longer any commenting directly on a story. One of the drawbacks of XHTML and an unforgiving browser such as Firefox is that a small error is enough to render the page useless. I incorporate Drupal modules to protect comments, but I also allow people to enter some markup. This combination handles most of the accidentally bad markup, but not all. And it doesn’t protect against those determined to inject invalid markup. The only way to eliminate all problems is to not allow any markup, which I find too restrictive.

Comments are, however, supported at the Burningbird main site. To allow for discussion on a story, I’ve embedded a link in every story that leads back to the topmost Burningbird entry, where people can comment. Now, in those infrequent times when a comment causes a problem with a page, the story is still accessible. And there is a single Comment RSS feed that now encompasses all site comments.

The approach may not be ideal, but commentary is now splintered across weblogs, Twitter, and what not anyway; what’s another link among friends?

I call my web site design “Silhouette” and will release it as a Drupal theme as soon as it’s fully tested. It’s a very simple two-column design, with the sidebar column either to the right (standard) or easily adjusted to fall to the left. It’s an accessible design, with only the top navigation bar coming between the top of the page and the first story. It is valid markup, as is, with the XHTML+RDFa Doctype, because I’ve embedded RDFa into the design. It is not valid, however, when you also add SVG silhouettes, as I do with all but the topmost site.

The design is also valid XHTML 5.0, except for a hard-coded meta element that was added to Drupal because of security issues. I don’t serve the pages up as HTML5, though, because the RDFa Doctype triggers certain behaviors in RDFa tools. I’m also not using any of the new HTML5 structural elements.

The site design is plain, but it suits me, and that’s what matters. The content is legible and easy to locate and navigate, and that’s my second criterion. I will be adding some accessibility improvements in the next few months, but they won’t impact the overall design.

What differs between the sites is the header graphic and the SVG silhouettes, which I change to suit the topic or mood of the site. The silhouettes were a lot of fun, but they aren’t essential, and you won’t be able to see them if you use a browser that doesn’t support inline SVG. Which means you IE users will need to use another browser to see the images.

I also incorporate some new CSS features, including subtle use of text-shadow with headers (to add richness to the stark use of black text on pastel graphics) and background-color: rgba() for semi-transparent backgrounds. The effects are not viewable in browsers that don’t yet support these newer CSS styles, but the loss of functionality does not impact access to the material.
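As a sketch of both effects (the selectors and values here are illustrative, not Silhouette’s actual rules):

```css
/* A subtle shadow lifts black header text off the pastel graphic */
h2.story-title {
  text-shadow: 0 1px 2px rgba(0, 0, 0, 0.4);
}

/* Semi-transparent sidebar background; browsers without rgba()
   support simply keep the solid fallback color */
#sidebar {
  background-color: #eee;                     /* fallback */
  background-color: rgba(255, 255, 255, 0.6);
}
```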

Now, for some implementation basics:

  • *I manually reviewed all my old stories (from the last 8 years), and added 410 status codes for those I decided to permanently remove.
  • For the older stories I kept, I fixed up the markup and links, and added them as new Drupal entries in the appropriate sub-site. I changed the dates to match the older entries, and then added a redirect between the old URL and the new.
  • By using one design for all of the sites, when I make a change for one, it’s a snap to make the change for all. The only thing that differs is the inline SVG in the page.tpl.php page, and the background.png image used for the header bar.
  • I use the same set of Drupal modules at all sub-sites, which again makes it very easy to make updates. I can update all of my 7 Drupal sites (including my restricted access book site), with a new Drupal release in less than ten minutes.
  • I use the Drupal Aggregator module to aggregate site entries in the Burningbird sidebar.
  • I manually created menu entries for the sub-site major topic entries in Burningbird. I also created views to display terms and stories by vocabulary, which I use in all of my sub-sites.
  • The site design incorporates a footer that expands the Primary navigation menu to show the secondary topic entries. I’ve also added back in a monthly archive, as well as recent writings links, to enable easier access of site contents.
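The 410s and redirects in the first two items only need a few mod_alias lines in .htaccess. A sketch with made-up paths (the real URLs differ):

```apache
# Old stories I chose to permanently remove return 410 Gone
Redirect gone /weblog/2003/05/some-old-story.htm

# Kept stories redirect permanently to their new Drupal homes
Redirect permanent /weblog/2004/01/kept-story.htm http://burningbird.net/subsite/node/123
```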

The expanded primary menu footer was simple, using Drupal’s API:

// Build the full tree for the primary links menu, then render
// it as themed HTML (Drupal menu API).
$tree = menu_tree_all_data('primary-links');
print menu_tree_output($tree);

To implement the “Comment on this story” link for each story, I installed the Content Construction Kit (CCK), with the additional link module, and expanded the story content type to add the new “comment on this story” field. When I add the entry, I type in the URL for the comment post at Burningbird, which automatically gets linked in with the text “Comment on this story” as the title.

I manually manage the link from the Burningbird site to the sub-site writing, both because the text and circumstance of the link differs, and the CCK field isn’t included as part of the feed. I may play around with automating this process, but I don’t plan on writing entries so frequently that I find this workflow to be a burden.

The images were tricky. I have implemented both the piclens and mediaRSS Drupal Modules, and if you access any of my image galleries with an application such as Cooliris, you’ll get that wonderful image management capability. (I wish more people would use this functionality for their image libraries.)

I also display sub-site specific random images within the sub-site sidebars, but I wanted the additional capability to display random images from across all of the sites in the topmost Burningbird sidebar.

To get this cross-site functionality, I installed Gallery2 and synced it with the images from all of my sub-sites. I then installed the Gallery2 Drupal module at Burningbird (which you can view directly) and used Gallery2 plug-ins to provide random images within the Drupal sidebar blocks.

Drupal prevented direct access from Gallery2 to the image directories, but it was a simple matter to just copy the images and do a bulk upload. When I add a new image, I’ll just pull the image directly from the Drupal Gallery page using Gallery2’s image extraction functionality. Again, I don’t add so many images that I find this workflow to be onerous, but if others have implemented a different approach, I’d enjoy hearing of alternatives.

One problem that arose is that none of the Gallery2 themes is XHTML compliant because of HTML entity use. All I can say is: folks, please stop using &nbsp;. Use &#160; instead, if you’re really, really generating XHTML, not just HTML pretending to be XHTML.
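The distinction is easy to demonstrate with any strict XML parser. A quick check using Python’s standard library (my own illustration, nothing to do with Gallery2 itself): &#160; is a numeric character reference, which every XML parser understands, while &nbsp; is an entity defined only by the (X)HTML DTD.

```python
import xml.etree.ElementTree as ET

# A numeric character reference works in any XML document,
# with or without a DTD:
ET.fromstring("<td>&#160;</td>")  # parses fine

# &nbsp; is defined by the (X)HTML DTD, not by XML itself, so a
# plain XML parser rejects it as an undefined entity:
try:
    ET.fromstring("<td>&nbsp;</td>")
except ET.ParseError as err:
    print("rejected:", err)
```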

To fix the non-compliant XHTML problem, I copied a version of my site to a separate theme, and just removed the PHP that serves the page up as XHTML for XHTML-capable browsers from this “Silhouette for HTML” theme. The Gallery2 Drupal modules allow you to specify a different theme for the Gallery2 pages, and I use the new HTMLated theme for the Gallery2 pages. I use my XHTML compliant theme for the rest of the site. Over time, I can probably add conditional tests to my main theme to test for the presence of Gallery blocks, but what I have is simple and works for now.

Lastly, I redirected the old Planet/Venus based feed locations to the Burningbird feed. You can still access full feeds from all of my sub-sites, and get full entries for all but the larger stories and books, but the entries at Burningbird will be excerpts, except for Burningbird-only posts. Speaking of which, all of my smaller status updates, and general chit-chat will be made directly at Burningbird—I’m leaving the sub-sites for longer, more in-depth, and “stand alone” writings.

As I mentioned earlier, I still have some work with SVG and RDFa to finish before I’m completely done with the redesign. I also have some additional tweaks to make to the existing infrastructure. For instance, I have custom 404, 403, and 410 error pages, but Drupal overrides the 403 and 404 pages. You can redirect the error handling to specific pages, but not to static pages, only to pages within the Drupal system. However, I’m not too worried about this issue, as I’m finding that there’s typically a Drupal module for any problem, just waiting to be discovered.

I know I must come across as a Drupal fangirl in this writing, but after using the application for over a year, and especially after this site redesign, I have found that no other piece of software matches my needs so well as Drupal. It’s not perfect software—there is no such thing as perfect software—but it works for me.

* This process convinced me to switch fully from Firefox to Safari. It was so much simpler to fix pages with XHTML errors using Safari than with Firefox’s overly aggressive XHTML error handling.