Categories
Social Media Technology

The Tweet stuff: When it absolutely positively has to get there

If we’ve learned one thing from this week’s massive attack against the very fabric of our social connectivity, it’s that clouds don’t make the best stuff on which to build.

Twitter, in particular, has shown how very vulnerable it is—a vulnerability we share as we become more dependent on Twitter for most of our communication with each other. Oddly enough, I needed to contact someone about a business opportunity just as the Twitter universe began to crumble, but all I had was her Twitter name—I couldn’t find her email address. Since Twitter was down, I couldn’t connect with her for hours.

Of course, a massive DDoS attack isn’t all that common, but Twitter still hasn’t recovered from this one. As I’ve been playing with new Twitter accounts this week, I’ve found varying degrees of responsiveness across the accounts, probably based on how busy they are, possibly on how many followers a person has. None of the accounts would allow me to change most profile information, including the design. As you can see with my new integrated @shelleypowers Twitter account, I haven’t been able to change the picture, or to delete or add a new background image. I’ve had varying success just posting a new message.

I have never liked centralized systems, though I understand their appeal and worth. It always seems, though, that just when you start to depend on the centralized service something happens to it.

Yahoo is now out of the search engine business, and with its new business partnership with Microsoft, its side applications like delicious are now vulnerable. I’ve managed to replace delicious with Scuttle, though I no longer have the social aspect of delicious. However, my Scuttle implementation does an excellent job with bookmarks, which is what I needed.

Then NewsGator sent an email around this last week telling all of us that our NewsGator feed aggregator is being replaced by Google Reader. I don’t like Google Reader. More importantly, I really don’t want to give Google yet more information about me. So, I replaced my NewsGator/NetNewsWire installation with a Gregarius implementation. It took me some time to get used to the new user interface, and I’ve had to password protect the installation, but I’m not dependent on a centralized feed aggregator, which can, and did, go away.

Twitter, though. I was not a big Twitter fan at first, but I can see the benefits of the application, especially if you want to point out an article or something else to folks, and have it quickly, virally spread, in a nice swine flu-like manner. It’s fun to have a giggle with folks, too. But the darn thing is centralized; not just centralized, but vulnerable and centralized, which gives one pause.

I have an identi.ca account, too, but most folks are on Twitter. You can integrate the two by linking your identi.ca account to Twitter, as I have. Still, identi.ca is also centralized, just located in a different slice of the internet pie.

I finally bit the bullet this week, and installed my own copy of Laconica, which is the microblogging software used for identi.ca. There were a couple of glitches, not the least of which were two very minor programming typos in the install program (yes, I have reported these to the developers, and they should be fixed soon). However, the application is actually quite easy to use. I’ve had fun playing around with a new theme.

Just like identi.ca, you can connect an individual Laconica account to Twitter, but doing so would cut my identi.ca account out of the picture. Beyond the identi.ca issue, I also want to be able to display a list of links in my burningbird.net sidebar, with expanded URLs, so folks get search engine mojo. I could aggregate tweets, but then you end up with shortened URLs, not expanded URLs, when you go from tweet to sidebar. Besides, a sidebar link and a tweet are not the same thing, with the same structure.

I finally created my own tweet workflow, as I like to call it.

  • First, I installed Laconica, created a single user account (at this time), and then disabled registration. I don’t want to run a Twitter alternative.
  • Next, I found software, RSSdent, which takes an RSS feed and submits the items as tweets to identi.ca/Laconica. I modified the application to submit the body of each feed item without a link back to the feed. The reason I don’t want the URL is that the feed I’m syndicating is from my newly created Laconica installation; the bodies of the items will have the links that matter. Since I didn’t need any URL shortening (that happens earlier in the process), I was able to trim much of the code, leaving a nice, simple little application.
  • I set up a cron job so that items posted to my individual Laconica account will get posted to my identi.ca account every hour.
  • I connected my identi.ca account to my main Twitter account, at @shelleypowers. Now, when an item is posted in my identi.ca account, it gets posted to Twitter.
  • I can individually post in my Laconica account, but I also want to capture the links for my main Drupal installation, at Burningbird. There is a Drupal Twitter module, which works with identi.ca (by using identi.ca/api as the alternative URL), or a Laconica account (in my case, using laconica.burningbird.net/api/). The only problem, though, is that this module is meant to post a status update reflecting a new weblog post, not an interesting link. It gives you options to post the title, post link, and/or author, but not the body.
  • To work around the problem, I created a new content type, linkstory, with a custom content field (via CCK) that contains the link of interest and its link text. When I create a new linkstory, the body contains the tweet text and the expanded or shortened URL (depending on how long the URL is), but the CCK field contains the expanded link and the text I want for the link text.
  • I then created a view to display the content field text and URL, but not the body, or title of the posting.
  • I copied the Twitter module and did a small tweak (tweak, twitter, tweet — my head just blowed up) so that it outputs the body of the post when I provide the !body label.
  • When a new linkstory is posted, the full link and link text get put into the sidebar, while the post body, containing the possibly shortened URL and any message, gets posted to my Laconica account.
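The cron piece of the list above is just a single crontab entry. A sketch of what it might look like, with a made-up path to the modified rssdent script:

```shell
# Once an hour, run the modified rssdent script, which reads the
# Laconica installation's RSS feed and posts new items to identi.ca.
# (The path and log location are illustrative, not the actual setup.)
0 * * * * php /home/shelley/bin/rssdent.php >> /home/shelley/logs/rssdent.log 2>&1
```

The redirect to a log file isn’t required, but it makes it much easier to see why a post silently failed to make it over to identi.ca.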

The full workflow is: create a new linkstory or regular post in Drupal, which gets posted to my Laconica account via my modified Twitter module. Once an hour, these postings are picked up by rssdent, and posted to identi.ca. Posting on identi.ca automatically posts to my Twitter account.

If Twitter goes offline, the posts still get made to identi.ca. If identi.ca goes offline, the post is still made to my Laconica account, and the fully expanded URL for the link is posted to my main web site. My rssdent application keeps trying, once an hour, to post to identi.ca, and hence to Twitter. My modification to the Twitter Drupal module was an addition, so I can tweet posts and links, alike.

It sounds like a lot of work, but it was only about a day’s fun playing around. I plan on submitting my small Twitter Drupal module tweak as a patch, and hopefully it will be accepted; it only adds new functionality, at the cost of one line of code. I’ll check in my fork of rssdent, but I need to figure out how GitHub works first. The Laconica installation didn’t require any modification, once I made the code corrections. These corrections should be incorporated into the original application, hopefully soon.
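To give a sense of how small the module tweak is, here is a sketch of the idea behind the !body label. This is not the actual module code; the variable names and the token-replacement array are my own assumptions about how a Drupal status-format string would be filled in:

```php
<?php
// Hypothetical sketch: the Twitter module builds the status message by
// replacing tokens in a format string. The tweak adds one more token,
// !body, so the post body itself can be sent as the status text.
$replacements = array(
  '!title' => $node->title,
  '!url'   => $short_url,
  // The added line: let the format string pull in the post body.
  '!body'  => $node->body,
);
$status = strtr($format, $replacements);
?>
```

With that in place, a format string of just “!body” sends the tweet text and link exactly as written in the linkstory body.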

Now, this isn’t spamming. Everything gets posted to one place, though if people are subscribed to my Twitter and identi.ca accounts (or even my Laconica account), they’ll get an echo effect. This is just me grabbing hold of a little independence, while still partying with the communes. Setting my Bird free.

update I’m still getting familiar with the Twitter/Laconica API, but I received a message via my identi.ca account from csarven about remote subscriptions. I can subscribe to identi.ca folks, as well as folks on other Laconica sites, using the REST API. For a Laconica site, attach “?action=remoteSubscribe” to the URL, and you’ll get a page where you enter the nickname of the person you want to subscribe to (at that site), and your remote profile, such as http://laconica.burningbird.net/shelleypowers. Or, if you’re not logged into the system, just clicking the subscribe button next to the person’s avatar will open the remote subscription page automatically.

Once you enter the remote subscription request, you’re taken back to your own site, where you have to accept the request. This prevents spamming. Once accepted, when you access your Home location, postings from your remote friends will show up alongside postings from your local friends. You can also reply to the individual.
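As a concrete (and entirely made-up) example, the remote-subscribe form on some other Laconica site would be reached with a URL along these lines:

```text
http://laconica.example.com/?action=remoteSubscribe

The form then asks for two things:
  nickname     - the person at that site you want to follow
  profile URL  - your account on your own site, e.g.
                 http://laconica.burningbird.net/shelleypowers
```

The host and nickname here are placeholders; only the “?action=remoteSubscribe” query string and the two form fields come from the actual flow.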

This functionality is also available for Twitter, built-in, but on my system, trying to use it caused errors. This is a known bug and a fix is currently being developed.

This is truly distributed, and decentralized, connectivity. You can’t take a system like this down, any more than you can take all email down, or all weblogs down. Way of the future.

Now, I must find out what other goodies are in the API…

Categories
Technology

Embedded fonts with font-face

I’m experimenting with my first attempt at using embedded fonts here at RealTech. I’m using the Gentium Basic TrueType font, which I downloaded from Font Squirrel. Since Internet Explorer doesn’t support TrueType fonts with font-face, I had to use the ttf2eot application to convert the TrueType files into EOT, which is what Microsoft supports. Edward O’Connor has a nice writeup on how to use ttf2eot. I downloaded my copy of the utility, since I couldn’t get the MacPorts version to install. You can also use Microsoft’s WEFT utility on Windows.

(update There is also a Gentium Basic font-face kit that contains everything you need, including EOT files and a stylesheet.)
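The conversion itself is a one-liner per font file. A sketch, assuming a ttf2eot binary on your path (check the tool’s own help for the exact invocation on your build; some versions read from standard input instead):

```shell
# Convert each TrueType face to EOT for Internet Explorer.
ttf2eot GenBasR.ttf  > GenBasR.eot   # regular
ttf2eot GenBasB.ttf  > GenBasB.eot   # bold
ttf2eot GenBasI.ttf  > GenBasI.eot   # italic
ttf2eot GenBasBI.ttf > GenBasBI.eot  # bold italic
```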

Once I had both versions of all the font files uploaded to my server, I added the CSS for the font-face rules. You can add these rules as a separate file, or include them in your stylesheet, whatever rings your bell:

/* For IE */

@font-face {
        font-family: 'Gentium Basic';
        src: url('GenBasR.eot');
}

/* For Other Browsers */

@font-face {
        font-family: 'Gentium Basic';
        src: local('Gentium Basic Regular'),
             local('GentiumBasic-Regular'),
             url('GenBasR.ttf') format('truetype');
}

@font-face {
        font-family: 'Gentium Basic';
        src: local('Gentium Basic Italic'),
             local('GentiumBasic-Italic'),
             url('GenBasI.ttf') format('truetype');
        font-style: italic;
}

@font-face {
        font-family: 'Gentium Basic';
        src: local('Gentium Basic Bold'),
             local('GentiumBasic-Bold'),
             url('GenBasB.ttf') format('truetype');
        font-weight: bold;
}

@font-face {
        font-family: 'Gentium Basic';
        src: local('Gentium Basic Bold Italic'),
             local('GentiumBasic-BoldItalic'),
             url('GenBasBI.ttf') format('truetype');
        font-weight: bold;
        font-style: italic;
}

Notice that there are separate files for the bold and italic fonts, as well as the ‘normal’ font. All are included, and all are given the same font-face alias, in this case “Gentium Basic”. It’s up to the user agent to determine which font to use in which circumstance (normal weight versus bold, normal text versus italics). Once the fonts are defined, they’re used wherever you would use standard fonts:

#rap {
  width:960px;
  margin: 0 auto;
  position: relative;
  z-index: 2;
  background-color: #fff;
  color: #444;
  font: 0.9em/1.5em "Gentium Basic", Georgia,serif;
  font-style: normal;
}

I provide fallback fonts for browsers that don’t support font-face.

So, how are these font rules working in various browsers?

Browsers that don’t support font-face yet, such as Opera 9.64, pick the fallback web safe font, as they should.

Safari 4 supports font-face, though the page can show up oddly until the fonts are downloaded to the person’s machine. If you’ve accessed a web site and no text shows, but underlines are appearing for the links, chances are the site is using font-face. The “blank” text doesn’t last long, though, depending on how fast your connection is, and how fast the web server serves up the font.

Once the font is loaded, Safari uses the fonts as described in the CSS3 specification.

Firefox has support for font-face beginning with 3.5. It doesn’t have the download hiccup that Safari seems to have, because it uses the default web font until the font is downloaded, and then redraws the text. (From what I can see, other browsers such as Opera 10.0 beta 2 do the same.)

Using Browsershots and my own PC, I’ve found that Internet Explorer also makes use of the fonts, in the EOT format I’ve provided. However, it doesn’t support the font style and weight specification.

Chrome is using WebKit, which would lead one to think that it supports font-face, but I’ve not seen web fonts work with Chrome. Just because Chrome uses the WebKit engine doesn’t mean the Chromium graphics engine (Chrome’s own) supports the same functionality that Safari’s graphics engine does.

Opera 10.0 has been a problem for me: all of the fonts show as italic in Opera 10 beta 2. I don’t think there’s a problem with my CSS, so I’ve filed a bug with Opera. When the fonts are installed on the desktop, Opera 10.0 seems to work. If not, you get italics.

update Thanks to Philippe, I’ve updated the stylesheet. IE does not support the font-weight and font-style settings, so you only specify the one EOT file, for the basic font. In addition, the local setting in the stylesheet provides a local file name for the font, so the browser can use a locally installed copy rather than having to download the font. Opera 10 does work when the font is local, but not when downloaded.

Also thanks to Baldur, whose use of Gentium Basic inspired me to try it at my site.

Update

Bruce Lawson added a comment that Opera is aware of the problem, and working on a solution.

I received an email from Stefan Hausmann, who wrote:

@font-face has been disabled in Chrome [1] because of “security concerns”. They announced webfonts support months ago, then disabled them by default, but failed to communicate that to the general public. They’re working on reenabling them by default [2].

In the meantime, you can use the --enable-remote-fonts command line switch. It’s still buggy on Chromium 4.0.202.0 where I tested it. Sometimes web fonts don’t render at all, sometimes single words don’t render (as if they were transparent) or only when selected. Sometimes selecting text with web fonts crashes the browser.

[1] http://code.google.com/p/chromium/issues/detail?id=9633 [2] http://code.google.com/p/chromium/issues/detail?id=17818

Categories
HTML5 SVG Technology

Separating Canvas out of HTML5

The HTML 5 specification is too large, that’s a given. Too large, and too diverse. With the merge of the DOM into the specification, as well as an attempt to cover two different serializations, not to mention the microdata section, it’s difficult to describe the HTML 5 spec as an “improvement on HTML 4”, which is what the HTML WG’s charter specifies. Kitchen sink comes to mind, and kitchen sinks don’t make good specifications.

One simplification of HTML 5 I would make is to remove the Canvas section from the specification. Instead, I would reduce the Canvas section to coverage of the syntax for the Canvas element, similar to what’s happening with MathML and SVG, and move the guts of the object to a separate specification. Or don’t move the Canvas object to a separate specification at all; just don’t leave the object in HTML 5.

I read through the results of the three votes associated with Canvas, or should I say, the “immediate mode graphics API”. Two of the votes had to do with the WG charter and creating a tutorial about the Canvas element, and one was specifically about splitting the Canvas element out.

The vote was overwhelmingly against splitting the element out, and the group also declined to update the charter to reflect the fact that including the Canvas element is outside of the group’s current charter. Frankly, this was undisciplined, and at that point in time, the W3C Director should have stepped in to remind the group what the charter is, and the importance of adhering to it.

Looking again at the vote about not splitting the Canvas object into a separate specification, you can see immediately that few people were really enthusiastic about keeping the Canvas element in the HTML 5 specification. However, they were even less enthusiastic about doing the work necessary to split the Canvas element into a new specification, and forming a group to support the new spec. Being uninterested in starting a new working group does not make for a compelling argument for keeping Canvas in HTML 5.

Now we’re seeing problems arise from that bad decision. There have been numerous recent discussions about Canvas and accessibility, and it isn’t difficult to see that work on Canvas accessibility needs to continue, probably for a significant period of time; possibly long enough to impact the timeline for the Last Call for HTML 5.

In addition, there is a very real concern that the same governments that mandate against JavaScript because of accessibility will also mandate against Canvas for the same reason, because Canvas is dependent on JavaScript. Yet the Canvas element is integrated into the HTML 5 specification. The end result could be a slower roll out of HTML 5, perhaps even a reluctance to adopt HTML 5. I hesitate to say there may be a ban against HTML 5, but there is that possibility, too, slight as the risk is.

Most importantly for the folks who like the Canvas object, it’s now tied to the schedule of the HTML 5 specification. This means that if we want to expand the Canvas object at some later point, we have to do so in conjunction with a new version of HTML. This tie-in makes absolutely no sense. When you consider the increasing capabilities being built into Flash and Silverlight, Canvas also needs room to grow. Now, the HTML WG has effectively boxed it in, limited its future expansion, and probably hastened its future obsolescence. Of course, we still have SVG, which is not integrated tightly into the HTML 5 specification, and can continue to grow and expand. Good for SVG. However, I happen to believe that it’s healthy to have both graphics capabilities, but only if both have room to grow.

It wasn’t up to the HTML WG to insist that the Canvas element be included in either the HTML 5 specification or some other formal working group’s specification. The group can’t just grab things in, willy-nilly, like crows grab a piece of tinsel because it sparkles in the sun. Oh! Me want! Me want! If people are interested in the object, they’ll work to help standardize its use. If they aren’t, then it will continue as it has in the past, based on informal agreement among four of the top five browser developers. At least then it won’t get stuck, permanently embedded in an HTML specification.

Categories
W3C XHTML/HTML

XHTML2 is dead

XHTML2 news on Twitter

I have mixed feelings on this news.

On the one hand, I think it’s a good idea to focus on one X/HTML path.

On the other, I’ve been a part of the HTML WG for a little while now, and I don’t feel entirely happy, or comfortable with many of the decisions for X/HTML5, or for the fact that it is, for all intents and purposes, authored by one person. One person who works for Google, a company that can be aggressively competitive.

Categories
Burningbird Technology Web

A major site redesign

I’ve finished the re-organization of my web site, though I have odds and ends to finish up. I still have two major changes featuring SVG and RDFa that I need to incorporate, but the structure and web site designs are finished.

Thanks to Drupal’s non-aggressive use of .htaccess, I’ve been able to create a top-level Drupal installation to act as “feeder” to all of the sub-sites. I tried this once before with WordPress, but the .htaccess entries necessary for that CMS made it impossible to have the sub-sites, much less static pages in sub-directories.
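Drupal’s clean-URL rules are the reason this works: they only step in when the request doesn’t match a real file or directory, so sub-site installations and static pages pass through untouched. The relevant portion of Drupal’s stock .htaccess looks roughly like this:

```apache
# Hand the request to Drupal only if it isn't an existing file or
# directory, leaving sub-sites and static pages alone.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
```

WordPress’s rules, by contrast, are written to capture far more of the URL space, which is what made the sub-site arrangement impossible there.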

Rather than use Planet or Venus software to aggregate feed entries for all of my sites, I’m manually creating an excerpt describing each new entry and posting it at Burningbird, with a link back to the full article. I also keep a listing of the last few months’ stories for each sub-site in the sidebar, in addition to a random display of images.

There is no longer any commenting directly on a story. One of the drawbacks of XHTML and an unforgiving browser such as Firefox is that a small error is enough to render the page useless. I incorporate Drupal modules to protect comments, but I also allow people to enter some markup. This combination handles most accidentally bad markup, but not all, and it doesn’t protect against those determined to inject invalid markup. The only way to eliminate all problems is to not allow any markup, which I find too restrictive.

Comments are, however, supported at the Burningbird main site. To allow for discussion on a story, I’ve embedded a link in every story that leads back to the topmost Burningbird entry, where people can comment. Now, in those infrequent times when a comment causes a problem with a page, the story is still accessible. And there is a single Comment RSS feed that now encompasses all site comments.

The approach may not be ideal, but commentary is now splintered across weblog, twitter, and what not anyway—what’s another link among friends?

I call my web site design “Silhouette” and will release it as a Drupal theme as soon as it’s fully tested. It’s a very simple two-column design, with the sidebar column either to the right (standard) or easily adjusted to fall to the left. It’s an accessible design, with only the top navigation bar coming between the top of the page and the first story. It is valid markup as-is, with the XHTML+RDFa Doctype, because I’ve embedded RDFa into the design. It is not valid, however, when you also add SVG silhouettes, as I do with all but the topmost site.

The design is also valid XHTML 5.0, except for a hard coded meta element that was added to Drupal because of security issues. I don’t serve the pages up as HTML 5, though, because the RDFa Doctype triggers certain behaviors in RDFa tools. I’m also not using any of the new HTML 5 structural elements.

The site design is plain, but it suits me, and that’s what matters. The content is legible and easy to locate and navigate, and that’s my second criterion. I will be adding some accessibility improvements in the next few months, but they won’t impact the overall design.

What differs between the sites is the header graphic and the SVG silhouettes, which I change to suit the topic or mood of each site. The silhouettes were a lot of fun, but they aren’t essential, and you won’t be able to see them if you use a browser that doesn’t support SVG inline. Which means you IE users will need to use another browser to see the images.

I also incorporate some new CSS features, including some subtle use of text-shadows with headers (to add richness to the stark use of black text on pastel graphics) and background-color: rgba functionality for semi-transparent backgrounds. The effects are not viewable by browsers that don’t yet support these newer CSS styles, but loss of functionality does not impact access to the material.

Now, for some implementation basics:

  • I manually reviewed all my old stories (from the last 8 years)*, and added 410 status codes for those I decided to permanently remove.
  • For the older stories I kept, I fixed up the markup and links, and added them as new Drupal entries in the appropriate sub-site. I changed the dates to match the older entries, and then added a redirect between the old URL and the new.
  • By using one design for all of the sites, when I make a change for one, it’s a snap to make the change for all. The only thing that differs is the inline SVG in the page.tpl.php page, and the background.png image used for the header bar.
  • I use the same set of Drupal modules at all sub-sites, which again makes it very easy to make updates. I can update all of my 7 Drupal sites (including my restricted access book site), with a new Drupal release in less than ten minutes.
  • I use the Drupal Aggregator module to aggregate site entries in the Burningbird sidebar.
  • I manually created menu entries for the sub-site major topic entries in Burningbird. I also created views to display terms and stories by vocabulary, which I use in all of my sub-sites.
  • The site design incorporates a footer that expands the Primary navigation menu to show the secondary topic entries. I’ve also added back in a monthly archive, as well as recent writings links, to enable easier access of site contents.
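Returning to the first item in that list: one common way to serve the 410s is with Apache’s mod_alias, a line per removed story. I’m not claiming this is exactly how it was done here (a Drupal module could handle it just as well); the paths below are made up:

```apache
# Mark permanently removed stories as 410 Gone.
Redirect gone /weblog/2003/06/some-old-story
Redirect gone /weblog/2004/01/another-dead-end
```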

The expanded primary menu footer was simple, using Drupal’s API:


<?php
$tree = menu_tree_all_data('primary-links');
print menu_tree_output($tree);
?>

To implement the “Comment on this story” link for each story, I installed the Content Construction Kit (CCK), with the additional link module, and expanded the story content type to add the new “comment on this story” field. When I add the entry, I type in the URL for the comment post at Burningbird, which automatically gets linked in with the text “Comment on this story” as the title.

I manually manage the link from the Burningbird site to the sub-site writing, both because the text and circumstance of the link differs, and the CCK field isn’t included as part of the feed. I may play around with automating this process, but I don’t plan on writing entries so frequently that I find this workflow to be a burden.

The images were tricky. I have implemented both the piclens and mediaRSS Drupal Modules, and if you access any of my image galleries with an application such as Cooliris, you’ll get that wonderful image management capability. (I wish more people would use this functionality for their image libraries.)

I also display sub-site specific random images within the sub-site sidebars, but I wanted the additional capability to display random images from across all of the sites in the topmost Burningbird sidebar.

To get this cross-site functionality, I installed Gallery2 at http://burningbird.net/gallery2, and synced it with the images from all of my sub-sites. I then installed the Gallery2 Drupal module at Burningbird (which you can view directly) and used Gallery2 plug-ins to provide random images within the Drupal sidebar blocks.

Drupal prevented direct access from Gallery2 to the image directories, but it was a simple matter to just copy the images and do a bulk upload. When I add a new image, I’ll just pull the image directly from the Drupal Gallery page using Gallery2’s image extraction functionality. Again, I don’t add so many images that I find this workflow to be onerous, but if others have implemented a different approach, I’d enjoy hearing of alternatives.

One problem that arose is that none of the Gallery2 themes is XHTML compliant, because of HTML entity use. All I can say is: folks, please stop using &nbsp;. Use &#160; instead, if you’re really, really generating XHTML, not just HTML pretending to be XHTML.
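Until the themes are fixed upstream, a hypothetical quick fix is a sed pass over the offending templates, swapping the HTML-only named entity for its numeric equivalent, which is valid in XHTML as well:

```shell
# &nbsp; is an HTML-only named entity; &#160; is the numeric form.
# Demonstrated on a string here; run with sed -i over *.tpl files to
# rewrite templates in place.
echo 'Menu&nbsp;item' | sed 's/&nbsp;/\&#160;/g'
# prints: Menu&#160;item
```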

To fix the non-compliant XHTML problem, I copied my site design to a separate “Silhouette for HTML” theme, and just removed the PHP that serves the page up as XHTML to XHTML-capable browsers. The Gallery2 Drupal modules allow you to specify a different theme for the Gallery2 pages, so I use the new HTMLated theme for the Gallery2 pages, and my XHTML-compliant theme for the rest of the site. Over time, I can probably add conditional tests to my main theme to check for the presence of Gallery blocks, but what I have is simple and works for now.

Lastly, I redirected the old Planet/Venus based feed locations to the Burningbird feed. You can still access full feeds from all of my sub-sites, and get full entries for all but the larger stories and books, but the entries at Burningbird will be excerpts, except for Burningbird-only posts. Speaking of which, all of my smaller status updates, and general chit-chat will be made directly at Burningbird—I’m leaving the sub-sites for longer, more in-depth, and “stand alone” writings.

As I mentioned earlier, I still have some work with SVG and RDFa to finish before I’m completely done with the redesign. I also have some additional tweaks to make to the existing infrastructure. For instance, I have custom 404, 403, and 410 error pages, but Drupal overrides the 403 and 404 pages. You can redirect the error handling to specific pages, but only to pages within the Drupal system, not to static pages. However, I’m not too worried about this issue, as I’m finding that there’s typically a Drupal module for any problem, just waiting to be discovered.

I know I must come across as a Drupal fangirl in this writing, but after using the application for over a year, and especially after this site redesign, I have found that no other piece of software matches my needs so well as Drupal. It’s not perfect software—there is no such thing as perfect software—but it works for me.

* This process convinced me to switch fully from using Firefox to using Safari. It was so much simpler to fix pages with XHTML errors using Safari than with Firefox’s overly aggressive XHTML error handling.