Categories
Burningbird

Having one’s cake

Recovered from the Wayback Machine.

I’ve now mapped out a plan for moving forward on the organization of my site, including which tools to use, where, and even some preliminary designs. I’ve also played around more with incorporating SVG into a site design, as well as trying out some of the newer CSS3 design attributes. I’m finding out that one can have one’s cake and eat it, too.

For instance, you can use SVG for a site design, and the site doesn’t have to look either plain or ugly with IE–just different. If you’re comfortable with different, this isn’t a bad way to move forward with the more advanced browsers, such as Firefox/Gecko, Opera, and Safari/WebKit (the Big Three), while still accounting for a more primitive browser like IE.

Right now, today, at Realtech I have an experimental design up called “World War”, featuring a photo from an air show as well as three different SVG images. Only the photo shows in IE, but rather than leave a completely white page, I added a background color and repeating background pattern, both of which are overlaid by the SVG ‘background’ image that the Big Three can see.
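
The fallback itself is plain CSS; a minimal sketch, with an illustrative color, file name, and path rather than the actual Realtech values:

body {
  /* IE sees only this color and tiled pattern */
  background-color: #445566;
  background-image: url(/images/pattern.png);
  background-repeat: repeat;
}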

This is where it gets a little tricky. The SVG element supports both a width and a height attribute. If you specify the width and height in the element as SVG attributes, not in the CSS style attribute, Internet Explorer ignores both, which means the SVG element takes up no page space in IE.

However, the Big Three understand that width and height are supported attributes for SVG container elements, like the svg element itself. All three support setting the width and height directly on the svg element. Not only that, but both Safari and Opera get a bit snitty if you don’t use these attributes and instead set the width and height using CSS only.
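
As a sketch, the pattern looks something like this, with illustrative dimensions and fill; the width and height live on the svg element itself, not in a style attribute:

<svg xmlns="http://www.w3.org/2000/svg" width="900" height="300">
  <rect width="100%" height="100%" fill="#8899aa" />
</svg>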

The end result of this machination is that the Big Three see the SVG images and override the background image and background color. True, they still load the background image, but since it’s so tiny, it’s not a significant load on the server or client. Best of all: no conditional references have to be used in HTML, CSS, or JavaScript. If IE were ever to support SVG, the browser would then process the SVG just like the Big Three.

I continued this concept into using some CSS3 attributes. CSS 2.1 provides the meat of web page design, but CSS3 is the dessert, and what’s a good meal without dessert?

I use the rgba color function when setting the background color for both my sidebar and my article title bars. The rgba function takes four parameters: three values, each in a range from 0 to 255, for the red, green, and blue channels, respectively, and a fourth, in a range from 0 to 1, for the alpha channel. The alpha channel is what controls the transparency. Using the rgba function allows us to create semi-transparent backgrounds.

I could use one of the opacity settings instead, including the CSS3 opacity attribute, as well as the older moz-opacity and IE filter approaches. However, the opacity settings affect the opacity of the element on which they’re set and any child elements. Using the rgba function for the background-color creates a semi-transparent background for the element on which it is set, but has no impact on the child elements. (For more on opacity and rgba, see A brief introduction to Opacity and RGBA.)
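
The difference is easy to see side by side (values illustrative):

/* fades the element and everything inside it */
.panel { opacity: 0.8; }

/* fades only the element's own background; child text stays fully opaque */
.panel { background-color: rgba(255,255,255,0.8); }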

What about a gracefully degrading design? For user agents that don’t support rgba, what I’ve found is that we can specify a background color using non-rgba functionality:

.sidebar {
  background-color: #fff;
  background-color: rgba(255,255,255,0.8);
}

Either the agent will pick up the non-rgba background color, or it won’t pick up any background color at all. In the latter case, the browser recognizes a supported CSS attribute (background-color) but not the value (rgba), so it flushes the previously set background color without applying the new one.

(I believe the former behavior is correct, and the latter incorrect. If you have any input on this, please leave a note in comments.)

Combined, these two background-color settings result in the following: the sidebar and the inner panel background are both semi-transparent in Safari and Firefox, which support rgba; Opera doesn’t currently support rgba, but picks up the earlier, solid white background color; IE doesn’t pick up any background color, and both items are transparent.

Another CSS3 attribute I use that gracefully degrades is the new text-shadow attribute. With text-shadow, I can add shadow to text, such as the title in the page header. If the browser supports the text-shadow attribute, the shadow displays; otherwise, no shadow.

The text-shadow attribute takes four parameters: the color of the shadow; the x offset of the shadow relative to the original text; the y offset; and the radius of the applied blur. I currently have the following text-shadow setting on my main title:

text-shadow: #333 2px 2px 4px;

This CSS setting creates a dark gray shadow, offset 2 pixels to the right and bottom of my current text, with a blur radius of 4 pixels–a relatively soft shadow. The shadow shows in Opera and Safari, though not in Firefox or IE. As long as no dependency is placed on the shadow (i.e. text the same color as the background, depending on the shadow to make the text show), the look degrades gracefully in browsers that don’t currently support text-shadow.

Best of all, when the text-shadow attribute is eventually supported by a browser, the shadow displays without any further intervention or modification of the page design. All you have to do is accept that a page will look different in different browsers. Not “bad”, different. If you’re willing to live with “different”, you can have a lot of fun now with new design elements.

Categories
Burningbird Specs

IE8, XHTML, and what am I going to do with my site?

Recovered from the Wayback Machine.

I thought it interesting, and even odd, how few people have remarked on the fact that Ray Ozzie began the opening keynote of a conference aimed specifically at developers by talking about ads.

My source for things geek, Planet Intertwingly, has had very few entries devoted to IE8. I imagine people either don’t care or are trying things out. Or perhaps they’re at ETech or on their way to SxSW. What a way to filter your audience: schedule the conferences at the same time. What sad irony that Ozzie next spoke of the Yahoo deal, as Yahoo itself was launching its latest, greatest tech initiative, which was then overshadowed by Microsoft’s rolling out of the IE8 public beta.

Not to be outdone, Apple has something out today, probably about its SDK. All we’re missing is something from Adobe, but it preferred to dance alone.

To return to IE8: one doesn’t have to tax one’s imagination to read the purpose behind the ‘advances’ in IE8. All of the new functionality is focused on Microsoft’s new “cloud” agenda, including client-side data storage, support for working offline, and back-button navigation. According to the “readiness” document I linked yesterday:

Internet Explorer 8 provides a simplified yet powerful programming model for AJAX development that spans browser, webpage, and server interaction. As a result, it is easier for you to build webpages that have much better end-user experiences, are more functional, and have better performance. APIs are based on the W3C HTML 5.0 or Web Applications Working Group standards. Enhancements or novel intellectual property for AJAX will be made available for standardization before the Internet Explorer 8 release.

The thing is, HTML5 is most definitely a work in progress. What Microsoft has done is cherry-pick what it wanted, implement it, throw in its own stuff, and then gloss it all over by either attaching its own bizarre “open source” license or tossing the non-critical bits into the public domain.

The proprietary bits aside, it is typical for vendors to start implementing standards before they’re finalized, as a test and a validation. Just as typically, though, the other members of the standards group are usually aware of such plans. I am curious to hear what other members of the HTML5 working group think of IE8 and the HTML5 bits.

As for me, it’s not hard to see that I’m unhappy. I have a choice now: do I continue to serve this site using the XHTML MIME type, in which case it will never be accessible in IE (because I now believe Microsoft will never support the XHTML MIME type), or do I “break” my site by adding back content negotiation?
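
For anyone who hasn’t done it, content negotiation here just means checking the Accept header and serving application/xhtml+xml only to browsers that claim to handle it. A minimal PHP sketch of the idea, not my actual code:

<?php
$accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
if (strpos($accept, 'application/xhtml+xml') !== false) {
    header('Content-Type: application/xhtml+xml; charset=utf-8');
} else {
    // IE, and anything else that doesn't advertise XHTML support
    header('Content-Type: text/html; charset=utf-8');
}
?>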

I wrote previously that I had a plan I was going to implement if Microsoft didn’t support XHTML in IE8. In the back of my mind, I really thought the company would. By not doing so, the company is saying that, for all its talk about standards and openness, it will implement only those standards that support its own agenda, and no others. While I expected this attitude, I didn’t expect Microsoft to be so obvious about it.

I really didn’t expect Microsoft to blow off XHTML, and now that it has, I have some work to do on my sites to follow through on my fallback plan. I’m not doing anything earth shattering, or probably all that interesting to most folks (since, seemingly, standards take a back seat to ads for today’s new web developer). I’m just dealing with the situation.

I’m also investigating Drupal, as a content tool–either alone or perhaps with WordPress. I’ve been interested in Drupal since I started looking through the site and the code base. I became more interested when Maki mentioned the SVG Toolkit for Drupal, and Elaine talked about how improved it is. Then Ian Davis at Nodalities mentioned Drupal’s RDF and semantic web commitment yesterday, and that’s all she wrote for me.

The Drupal folks seem more committed to supporting standards, all standards, than the WordPress folk. And when I read something about Drupal, I read about the technology; I don’t read about ads or mergers. This focus on technology appeals to me right now.

Categories
Copyright Web Writing

Something for nothing

Recovered from the Wayback Machine.

I like Andrew Orlowski, though he offered me a writing job once and then yanked it. I don’t always agree with him, and I don’t always agree with how he phrases some of his material, but he typically has a good point.

Take the recent Nine Inch Nails album release: several songs for free, and the rest of the album for $5.00. What happens? It’s immediately dumped on Pirate Bay. Bandwidth issues aside, as Radiohead found out, people won’t pay.

The anti-copyright crowd kicked at the music business, because it was complacent, wasteful and reactionary, and no digital download services were available. Then they kicked at DRM-locked music, because DRM was there. Then DRM died, and they’d indiscriminately kick at the music business – indie or major – simply because there was a middleman. But now, with no middleman, they just kick the creator directly. They can’t stop kicking. These zombies are unstoppable. Are they incurable, too?

This goes beyond copyright. Too many people expect immediate access to anything on the Net, or anything that could possibly be put on the Net. They want something for nothing. This isn’t free speech, this isn’t Free the Mouse, this isn’t anything to do with not stifling creativity: people assume a privilege for themselves that, frankly, they don’t deserve. Their cry is “gimme gimme gimme”; they exist in a state of selfishness deep enough to bring down the band. And by their selfishness, they’ll probably screw things up for the rest of us. After all, DRM doesn’t exist so that you can’t copy a song onto your iPod.

Excuse me, while I go put my DRM locked movie into the DVD player.

Categories
Burningbird

Feed problems

The biggest mistake I ever made was to install WordPress at the top level. The second was to use “smart” URLs.

My site was restricted this morning for exceeding its bandwidth limit, something that shouldn’t have happened. When I checked my stats, one site, proxyit.com, was hammering my bandwidth. Checking the recent visitors list, I found that this domain was grabbing my feed every minute; except it was grabbing the Burningbird feed, which was then redirected to the new combined feed at http://burningbird.net/feeds/atom.xml.

This feed, created by the aggregator Venus, hadn’t changed, but with the redirect it was coming up fresh and sparkly new. Now, that doesn’t excuse the fact that this site was accessing it every minute, but I’m not sure that my twisted, convoluted feed redirects weren’t at least partially responsible. To make matters worse, I used an inline SVG object yesterday, which shouldn’t tax bandwidth limits overmuch…unless your feed is being hammered.

(Not to mention that using SVG inline absolutely killed my entry at Planet RDF…)

Of course, when I redirected my Burningbird main feed, /feed/, to atom.xml, the redirect caught all other variations of /feed as well, including /feed/atom, /feed/rdf, and so on. Not just for Burningbird, but for all sub-domains, too. So I added more redirects, which attempted to bypass WordPress’s programmatic management of URLs. Eventually I had so many redirects in my sites to get feeds to serve correctly that I wasn’t sure who was getting what. So I’ve removed all of them.
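
The kind of rule that caused the trouble looked something like this (a hypothetical .htaccess sketch, not my exact rules):

# catches /feed, /feed/atom, /feed/rdf, and so on
RedirectMatch 301 ^/feed(/.*)?$ http://burningbird.net/feeds/atom.xml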

One of my site changes is to remove WordPress from the top level. I’m replacing it with a page generated by Venus that combines all the feeds from the WordPress installations in the sub-domains. Each sub-domain gets its own WordPress installation. Some will get the full installation, and others will get my new semi-forked version, which I’ve named Curmudgeon WordPress. Curmudgeon WordPress is a WordPress installation that has had all the reader-interactive bits, such as pingbacks, registration, XML-RPC, and comments, along with their associated includes and admin functions, removed.

When I get all this finished, there will be no more RDF feeds and no more RSS feeds. You get one Atom feed for each WordPress installation, and one overall feed generated by Venus, name and location TBD, generated once per day.
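
Venus drives all of this from a simple INI file; a minimal sketch of the sort of configuration involved (names, paths, and feed URLs are illustrative):

[Planet]
name = Burningbird
link = http://burningbird.net/
cache_directory = cache
output_dir = feeds

[http://realtech.burningbird.net/feed/atom/]
name = RealTech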

In the meantime, feeds may be a problem. My bandwidth may be exceeded. Yada yada, you know the rest.

Categories
Technology Writing

Tasks, transcripts, and semantics

Recovered from the Wayback Machine.

I’m spending the rest of the week creating plug-ins that will XHTMLate WordPress. I’m not sure how far I can get with plug-ins, but the end result could be both interesting and useful. I still feel that XHTMLating WordPress is as much philosophy as it is code. I can’t seem to communicate this clearly, though, so I’m dropping the subject and just focusing on code.
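
To give a flavor of the plug-in approach, here is a minimal sketch that assumes WordPress’s standard filter API; the actual transformations are still up in the air:

<?php
/*
Plugin Name: XHTMLate (sketch)
*/
// One example fix-up: self-close br tags so the markup stays well-formed XHTML.
function xhtmlate_content($content) {
    return str_replace('<br>', '<br />', $content);
}
add_filter('the_content', 'xhtmlate_content');
?>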

I also have a design for my “Painting the Web” book web site, and need to create a lovely SVG paintbrush, as part of the design. Since my artistic skills are more along the lines of telling a program to draw a line from A to B, the effort may take some time. However, the medium I’m using (SVG) is compatible with my skillset, so perhaps the effort will be trivial and the result good. Better yet, I’ll be able to find a paintbrush at Wikipedia to use.

I did want to point out an interview that Paul Miller of Talis had with Tim Berners-Lee. Unlike most other podcasts, this interview has both a written transcript and published show notes. I really wish more video and audio podcasters would spend the time transcribing shows into text, as well as providing more in-depth information about the show than posting a video window and telling people, “Hey! Cool Stuff!” In the meantime, I’m going to watch this podcast via my Apple TV, since the Talis series is also listed at iTunes. I wonder if it’s in HD? (Later: oops! It’s not in video. Darn. I was looking forward to seeing Sir Tim in HD.)

In the write-up on the interview, Miller wrote:

We talked for a fascinating hour during which we ranged from past to future, from technology to policy. We covered specifications such as RDF and SPARQL, and we talked about the pressing need for more accessible texts to explain the Semantic Web to mainstream business.

My book, “Practical RDF”, is out of date, and my editor and I have been talking about a new edition. However, a new edition would not be focused entirely on RDF, and probably wouldn’t cover certain aspects of RDF, in order to be a bit more comprehensive. RDF doesn’t function alone in the world, and a book that covers semantic web technologies needs to cover not only RDF but also all the complementary technologies, in addition to the new tools, data initiatives, and companies.

Now is actually a rather exciting time to be creating a new edition of a book on semantic web technologies. I remember that when I wrote “Practical RDF”, which was published before the final release of the RDF specification, I had to stretch a bit to find tools and technologies focused on RDF and/or the semantic web. Now the semantic web is hip, and the challenge is less about finding good material and more about ensuring the book isn’t too big and doesn’t cover too much.

I don’t think the new edition will have the same title, but we’ll be keeping the “Practical” in it in some way; maybe something along the lines of “Practical Semantic Web”. I am nothing if not a practical person, and the “practical” component of the title will also be the overall theme for the book. However, even with this constraint, I visualize a book bursting at the seams.

We’re also planning a new edition of Learning JavaScript. Unfortunately, the first edition was on a bit of a fast track, and I made mistakes in the book; more than I’d like to see in any of my books. I’ve made corrections via errata, but it will be nice to create a new, updated version.

I’m also helping out with a new edition of a third book, but this would be more along the lines of contributing commentary on organization and some chapters than being sole author.