Categories
Web

Find your exit points

The first time I stayed in a hotel was when I was 12, when my brother and I met my father for a holiday in Hawaii. We’d stayed in motels before–this was the era of auto vacations–but never in a multi-story hotel, where you accessed your room using an elevator.

When we got to our room, my Dad took us out into the hallway and pointed out the Exit sign. He told us that if there was ever a fire, we should not use the elevator. Instead, we should look for the Exit signs and follow them out of the building.

Since that one trip, whenever I stay in a hotel I briefly pause at the door and locate the nearest exit before entering my room.

That trip was also the first time I flew on a plane. It was wonderful–scary and exciting. When the stewardess talked about what to do in case of a crash landing, I paid attention. To this day, I still pay attention–not because I don’t know what to do (butt, meet lips), but because it’s rude to ignore this poor soul who has to go through the motions. Shades of fatalism aside, I do check to see where the closest exit is when I find my seat. Old habits are hard to break.

My check for the exit bleeds over into my use of web services. No matter how clever a service, I never use it if it doesn’t have an exit strategy.

Recently, I took a closer look at the possibility of using Feedburner to serve up my feeds. Now that I’ve moved my photos offsite to Amazon’s S3 service, the feeds are my biggest use of bandwidth. Under my new austerity program of minimizing resource use, Feedburner is attractive: let it serve up the feeds, with its much more efficient use of bandwidth.

My first thought, though, was: what’s the exit strategy? After all, it’s easy for me to redirect my feeds (all but the RSS 1.0) to Feedburner: I can adjust my .htaccess file to redirect traffic for all requests that don’t come from the Feedburner web bot. But what happens if I decide to bail on Feedburner?
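The mechanics, for anyone curious, amount to a couple of lines of mod_rewrite. A sketch only–the feed filename and FeedBurner feed URL below are placeholders, not my actual settings:

```apache
RewriteEngine On
# Send everyone except the FeedBurner bot itself off to the FeedBurner copy.
RewriteCond %{HTTP_USER_AGENT} !FeedBurner
RewriteRule ^index\.rdf$ http://feeds.feedburner.com/example-feed [R=302,L]
```

Note the temporary (302) redirect: deleting those lines is the whole exit.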

This question was asked of the Feedburner staff last year, and the organization responded with an exit plan. It’s a month-long process during which you can redirect from Feedburner back to whatever feed URI you want. At the end of that time, all aggregators should have the updated feed URI–all without people having to manually edit their feed subscriptions.

As such, I’m trying the service out to see how it goes. I know that if I decide I don’t like it, I can bail. If the worst-case scenario happens, with Feedburner going belly up, people know where to find my weblog and will have to manually edit their feeds. That’s also an exit, albeit more like jumping out a window than walking down the stairs.

When I used Flickr, the API was what sold me on the service more than anything. When I decided to stop using Flickr, the first thing I did was use an existing application to export a dump of all the original images, to ensure I had a copy of each. If I wanted to, I could also export the metadata and comments. I then ran an application to make an image capture of all the photos I had linked in my web pages, saving the photos locally under the image names that Flickr generated.

I then wrote a program that converted all Flickr URIs, as well as other photo URIs, to a single local URI: http://burningbird.net/photos/. This is redirected to Amazon S3 using .htaccess. If I decide to stop using Amazon, the exit strategy is very simple: run an API call to pull down the images into one location, stop redirecting to that service, and either host the images locally or redirect to another storage service.
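The redirect half of this is equally small. A sketch, with a hypothetical bucket name standing in for my real one:

```apache
RewriteEngine On
# One local photo URI, bounced to wherever the images actually live today.
RewriteRule ^photos/(.+)$ http://s3.amazonaws.com/example-bucket/$1 [R=302,L]
```

Swapping storage services means changing one line; hosting the photos locally again means deleting it.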

I use Bloglines, but I can easily export my subscriptions as OPML. Though it leaves much to be desired as a markup vocabulary, OPML is becoming ubiquitous as a way of managing feed subscriptions. I can then use this file to import my subscriptions into Newsgator, or even into a desktop tool like NetNewsWire.
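For those who haven’t looked inside one, an exported subscription list is just an XML file of outline elements. A minimal example, with hypothetical feed URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.1">
  <head>
    <title>My Subscriptions</title>
  </head>
  <body>
    <outline title="Example Weblog" type="rss"
             xmlUrl="http://example.com/index.xml"
             htmlUrl="http://example.com/" />
  </body>
</opml>
```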

I won’t use a hosted web service like Typepad or weblogs.com. It’s too easy for them to decide that you’re ‘violating’ the terms of service, and the next thing you know, all your weblog entries are gone. I saw this with wordpress.com in the recent events that caused so much discussion; in fact, I would strongly recommend against using wordpress.com because of this–the service is too easily influenced by public opinion.

I don’t use either my Yahoo or Gmail mail accounts. Regardless of whether I can get a copy of my email locally, if I decide to stop using either account I have no way of ‘redirecting’ email addresses from either of these to the email address I want to use. (Or if there is a way, I’m not aware of it.) Getting a copy of my data is not an exit strategy–it’s an export strategy. An exit strategy is one where you can blow off the service and not suffer long-term consequences. A ‘bad’ email address is definitely a long-term consequence*.

Instead, I have a domain, burningbird.net, which I use for everything. I will always maintain this domain. My email address, listed in the sidebar, will always be good.

There was a lot of discussion about Yahoo Pipes recently. Pipes is an interesting innovation, and an excellent use of the Canvas object–my hat’s off to the creators for their UI design. However, the service has one major drawback: it’s a hosted solution. If you want to ‘export’ your Pipe, you can’t. There’s no way to generate, say, a PHP application from a Pipe–one that makes the same web service requests but can be run locally. No matter how good and interesting the service is, there’s currently no exit strategy.

Anytime you find yourself saying, or even thinking, how ‘dependent’ you are on a service, you should immediately look for the exit strategy. If there isn’t one, decrease your dependency. The web is an ephemeral beast; the path of least resistance is 404, not 200. All pages will end someday. The same can be said for services.

Where are you vulnerable? What’s your exit strategy?

*An option for email is to use a local email address, and forward all email to Yahoo or Gmail.

Categories
Technology Web

Wither away Ajax?

Dare Obasanjo has rung the death knell for Ajax, but I disagree with him on several counts.

He writes:

Most people who’ve done significant AJAX development will admit that the development story is a mess. I personally don’t mind the the Javascript language but I’m appalled that the most state of the art development process I’ve found is to use Emacs to edit my code, Firebug to debug in Firefox and attaching Visual Studio to the Internet Explorer processes to debug in IE. This seems like a joke when compared to developing Java apps in Eclipse or .NET applications in Visual Studio. Given how hypercompetitive the “Web 2.0” world is, I doubt that this state of affairs will last much longer.

There is an Eclipse plugin for JavaScript development (actually, more than one), and that’s only one of many JS development tools. I tend to use text editors and Firebug because I’ve found both to be sufficient for my needs. All you have to do is search on “edit JavaScript” to pull up a host of free, open, and/or commercial tools to simplify JavaScript editing. This is in addition to the graphical tools for working with SVG and Canvas.

As for Firebug, I think that Dare sells this application way too short. With Firebug I can not only inspect a web page’s contents, pause a script and inspect programming constructs, drill into the CSS and markup, investigate what’s taking so long for the page to load, and review responses from Ajax calls; I can also see the document object model at a glance, and how all these little pieces pull together. I’ve worked with a host of desktop tools: none has ever let me drill into every aspect of an application as deeply as Firebug lets me drill into a simple web page.

Dare also writes:

There is too much pressure on Web companies to improve their productivity and stand out in a world full of derivative YouTube/MySpace/Flickr knock offs.

Yes, but why did all of these become so popular? Because they’re all accessible using something that everyone has: a web browser. There’s nothing to install, other than the ubiquitous Flash plug-in. There’s nothing proprietary about most of the technology used. These applications work with most browsers, except perhaps the older Internet Explorers or Netscape 4.x. Even then, most work on operating systems and browsers whose makers dropped support for them years ago.

In fact, Ajax, rather than being a technology that’s heading out the door, could actually be one of the few open doors left for the people who have not been able to buy that new Macbook Pro, or dual-processor Dell machine. Microsoft, Sun, Apple, Adobe–any one of these companies would leave the less affluent in the dust in a heartbeat. The web page is the great equalizer on the internet.

I do somewhat agree with Dare, in that desktop development systems that incorporate Ajax-like technologies, such as JavaScript, will grow. I imagine that Flash/Flex, OpenLaszlo, and WPF/E will all gain a following and do well. But their health is not negatively correlated with the health of Ajax; one does not gain only at the expense of the other.

Ajax isn’t just a name or a set of technologies: it’s a way of pulling as much functionality out of a web page as possible. Desktop-style applications such as Google’s office killers get a lot of the publicity, but the real power behind ‘Ajax’ is the little things: comment live preview, Flickr’s in-place edits, or WordPress’ expandable form elements. It’s deleting a row and not having to re-find your place as the page reloads. It’s zooming in on a picture, mapping out a route on a map, live updating of unread feeds, and a host of other ways of Doing Things Better.
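To show how little machinery one of these ‘little things’ needs: a comment live preview is, at heart, a single function that escapes the commenter’s text and echoes it back into the page as they type. A minimal sketch–the element ids in the comment at the end are hypothetical, and real weblog software would do fancier formatting:

```javascript
// Escape markup so a commenter's <b> shows up as text, not as HTML.
function escapeHtml(text) {
  return text.replace(/&/g, "&amp;")
             .replace(/</g, "&lt;")
             .replace(/>/g, "&gt;");
}

// Turn raw comment text into the preview HTML: escape it, then wrap
// blank-line-delimited blocks in paragraph tags.
function renderPreview(text) {
  return text.split(/\n{2,}/)
             .filter(function (block) { return block.trim().length > 0; })
             .map(function (block) {
               return "<p>" + escapeHtml(block.trim()) + "</p>";
             })
             .join("\n");
}

// In the page, wire it to the comment textarea (ids are hypothetical):
// document.getElementById("comment").onkeyup = function () {
//   document.getElementById("preview").innerHTML = renderPreview(this.value);
// };
```

Everything else–in-place edits, expandable forms–is a variation on the same small pattern: intercept an event, touch the DOM, skip the page reload.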

If there’s anything to worry about with Ajax, it’s that accessibility and the validity of web page contents are sometimes sacrificed for ‘cool effects’. That, and the hype. By hyping Ajax as a ‘thing’, it becomes easier to dismiss that ‘thing’ in favor of other ‘things’. But the concepts, effects, libraries, tools, and techniques go beyond being just ‘a thing’–this is simply the way web development will be done in the future. You can no more say that its day is done than you can say that the hyperlink is old and therefore passé.

Dare also mentions Java being the most used language today. Frankly, I doubt that it is. I would say that JavaScript is the most used language, followed closely by PHP. Java is most likely the most used for corporate development, but I don’t think it can compete with all the folks running simple little PHP applications just to upload their photos. Or to post thoughts online, like this one.

For every person using a WebLogic or WebSphere application, there are ten thousand webloggers using WordPress. For every consumer of EJBs, there are a thousand people using Gallery. For every web site that runs off Tomcat, there are a million, or more, running PHP, Perl, Python, and Ruby. Google and Yahoo big users of Java? MSN the same with .NET? None of them would be anything without the millions, billions of simple web pages the rest of us produce.

Now is when things are really starting to get good. More web services, more semantics, more agreement among the browser developers, advances in the technology–better graphics, better JavaScript, better CSS and markup, and interesting twisty new ways of bringing it together…give it up? Not likely.

Of course, I’m finishing up a book on Ajax, so it’s natural I’ll say these things. But I didn’t write this book for the Mega-tool developers, the Kool Kids, or those who seemingly want to replace Photoshop or Office with online tools. I certainly didn’t write it to support the la-la land that is Silicon Valley, or the megapolis that is Microsoft. I wrote the book for the webloggers and the Gallery users; the folks running an online store, a corporate site, or an online publication; those digging after knowledge, and the knowledge providers; those who come to the web to teach and those who come to learn. Saying Ajax is ‘going away’ makes as much sense as saying all of these are going away.

Categories
Web

Nofollow

I guess now it’s OK to be against nofollow. Well, thank goodness our opinions have been validated.

Categories
Web

Old Skool

Recovered from the Wayback Machine. 

Lifted my head long enough from Adding Ajax to see a fooflah about Flickr’s newest announcement.

Flickr said a long time ago that there would come a time when you couldn’t have a login separate from a Yahoo account. Today the group announced that it’s no longer viable to maintain separate login systems, and folks will have until March 15th to create a Yahoo identification and port their accounts to it. I must admit to some amazement at the anger this has generated. It was a given that this was going to happen. It makes no sense to have two completely different sign-on systems.

Ken from Digital Common Sense writes:

I don’t like this. I have multiple Yahoo IDs. They are disposable, in part, because Yahoo is disposable. My loyalty to Yahoo is non-existent. Their email sucks. The IM client is a bloated pig given years of creeping featurism with continual incorporation of crap the doesn’t work and users don’t want. In short, I’m not a Yahooligan.

I can understand that Ken doesn’t like Yahoo, but Yahoo did buy Flickr. In fact, chances are that if they hadn’t, Flickr would have fallen under the weight of the demands on the system. Not many companies have the infrastructure to handle the access that sites like Flickr, or Yahoo for that matter, demand. True, there are bigger photo sites, but the larger ones are focused on the photos themselves, which are a static, easy-to-serve-and-maintain commodity. Flickr is a community site with enormous CPU, complex data storage, and bandwidth needs.

If Flickr was asking something that wasn’t reasonable, I could understand the push-back, but not wanting to maintain separate sign-on and identity systems makes perfect sense–I wondered at them keeping these separate for so long.

Still, if folks aren’t comfortable with a Yahoo ID, they should consider dropping this account. There are other social photo systems, such as Zooomr, though I agree with Anil Dash that …using these sort of opportunities to promote a competing business… is not cool. Predictable, but not cool.

Other concerns are that you can now have only 3000 contacts, and no more than 75 tags per photo. Wow, what a hardship. One person has 19,000 contacts, and as soon as he mentioned this in the thread, those folks who were among his contacts asked to be taken off. There’s a new thing in social systems called the contact junkie, who craves contacts as others would crave the next heroin fix. I suppose that Flickr making these folks go cold turkey is cruel, but I can’t see a system being maintained just for the less than 1/10 of one percent who are connection addicts.

It’s interesting how some people seem to think there’s an ulterior motive for all of this, because, according to these folks, putting limits on data structures is never necessary for enhancing the performance or robustness of a system. Before you ask: no, none of the people making statements like this have a clue about how systems are built.

There are legitimate concerns about this move, not the least of which is that it is difficult to find a meaningful Yahoo identifier. Luckily mine, P2PSmoke, is one I’ve had since P2P was the hot thing, way before all this social software stuff. When folks talk about ‘old skool’, I have a meaningful Yahoo account–can’t get more ‘old skool’ than that.

(Question: why can’t cool people spell words correctly? This trend to add ‘cuteness’ to words should die a sudden and irreversible death.)

The point is moot, though, because if these forms of social environment are meant to equalize participants, then separating the ‘old skool’ identities from the Yahoo identities of the newer folks is just another way of creating a false sense of elitism. With this switch, some of that is being swept away, and I wonder how much of the fooflah is because of this very thing.

It’s fascinating to read the threads–all those folks feeling betrayed because, according to one person in one of the threads:

Stewart, why are the oldskool members being treated like second class citizens on this issue, we are the community that made Yahoo buy you guys. Nothing like alienating your core supporters and cheerleaders. We are the bloggers, podcasters, videobloggers, and photographers that made the community. Your alienating the most vocal people on the internet. It’s going to be a shit storm of bad press for yahoo and Flickr tomorrow from the blogosphere, I promise you that.

All I can say is: When the frog farts in a pond in the forest, the cat in the city doesn’t smell it.

As I said earlier, there are legitimate concerns about this move: what are the Terms of Service differences between having a Flickr account and a Yahoo one? People have had problems with Yahoo sign-ons and other technologies, and the merge doesn’t sound like it’s well crafted: what kind of support is available to help folks with this move? What additional constraints will this move impose on folks, other than having to have a separate account? Can people use the same email addresses? Not to mention that it is really tough to find a unique user name with Yahoo: how about a Flickr-specific namespace for identities?

In a way, Flickr’s sign-on merge into Yahoo may actually have a reverse effect: because Flickr’s customers tend not to be as geeky as Yahoo’s customers (yes, I know they’re the same, bear with me), this move might actually lead to a trend of improving the overall Yahoo customer service interface. Or not, and people will quit both Yahoo and Flickr.

I have my old, old Yahoo account, which I’m now using with a Flickr free account specifically for development purposes. Flickr still has one of the better open web services, made better with the new ‘machine’ tags concept. I don’t post photos much anymore, and certainly not at a ‘social’ site, so perhaps my lack of concern doesn’t reflect the concerns of others. I am sympathetic to those who are concerned about issues of privacy, or who have had problems with Yahoo’s technology, or with the photo merge. I have no sympathy, though, for those who seem to be more concerned about losing their ‘old skool’ status, or worse, using this as an opportunity to shill for another company within the threads set up for the discussions.
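A machine tag, for those who haven’t seen one, is just an ordinary tag written in namespace:predicate=value form, which the API can then query on its parts rather than as an opaque string. Some illustrative (made-up) values:

```
geo:lat=38.62
geo:lon=-90.19
upcoming:event=81334
```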

Bottom line, though, is that I’ve never known Flickr to pull their punches, and if they say this is going to happen, this is going to happen. That’s one of the things I’ve always admired about Flickr: lack of smarmy marketing. What you see, is what you get.

SmugMug is offering 50% off for Flickr jumpees. There you go, Thomas Hawk.

Categories
Web

Wikipedia and nofollow

Recovered from the Wayback Machine.

That bastard Google warp-around, nofollow, rears its ugly little head again, this time with Wikipedia. Jimmy Wales, chief Pedian, has issued a proclamation that Wikipedia’s outgoing links will now be labeled with ‘nofollow’, as a measure to prevent link spam.

seomoz.org seems to think this is a good thing:

What will be interesting to watch is how it really affects Wikipedia’s spam problem. From my perspective, there may be slightly less of an incentive for spammers to hit Wikipedia pages in the short term, but no less value to serious marketers seeking to boost traffic and authority by creating relevant Wikipedia links.

Philipp Lenssen is far less sanguine, writing:

What happens as a consequence, in my opinion, is that Wikipedia gets valuable backlinks from all over the web, in huge quantity, and of huge importance – normal links, not “nofollow” links; this is what makes Wikipedia rank so well – but as of now, they’re not giving any of this back. The problem of Wikipedia link spam is real, but the solution to this spam problem may introduce an even bigger problem: Wikipedia has become a website that takes from the communities but doesn’t give back, skewing web etiquette as well as tools that work on this etiquette (like search engines, which analyze the web’s link structure). That’s why I find Wikipedia’s move very disappointing.

Nick Carr agrees writing:

Although the no-follow move is certainly understandable from a spam-fighting perspective, it turns Wikipedia into something of a black hole on the Net. It sucks up vast quantities of link energy but never releases any.

Seth Finkelstein notices something else: WIKIPEDIA IS NOT AN ANARCHY! THERE IS SOMEBODY IN CHARGE!

The rel=”nofollow” is a web extension I despise, and nothing in the time since it was first released–primarily to combat weblog comment spam–has caused me to change my mind. As soon as we saw it, we knew the potential for misuse existed, and people have lived down to my expectations since: using it to ‘punish’ web sites or people by withholding search engine ranking.
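For reference, the extension amounts to nothing more than an attribute value on the link itself (the URL here is hypothetical):

```html
<!-- Search engines are asked to ignore this link when calculating rank -->
<a href="http://example.com/some-page" rel="nofollow">some link</a>
```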

Even when we feel justified in its use, such as withholding link juice from a ‘bad’ site (like the one recently Google bombed that had misleading facts about Martin Luther King), we’re breaking the web as we know it. There should be no ‘good’ or ‘bad’ about an item showing up on a search list: if one site is talked about and linked to more than another, regardless of the crap it contains, it’s a more topically relevant site. Not authoritative, not ‘good’, not ‘bad’, not definitive: topically relevant.

(Of course, if it is higher ranked because of Google bombing of its own, that’s a different story, but that’s not always the case.)

To return to the issue of Wikipedia and search engine ranking, personally I think one solution to this conundrum would be to remove Wikipedia from the search results. Two reasons for this:

First, Wikipedia is ubiquitous. If you’ve been on the web for even a few months, you know about it, and chances are when you’re searching on a topic, you know to go directly to Wikipedia to see what it has. If you’ve been on the web long enough, you also know to be skeptical of what you find, because you can’t trust the veracity of the material on Wikipedia. I imagine that schools also provide their own “Thou shalt not quote Wikipedia” rule for budding young essayists.

Reason one leads to reason two: folks new to this search thing who end up on Wikipedia could get the impression that they’ve landed on a top-down, authority-driven site, and they may put more trust in the data than they should. After all, if they’re not that familiar with search engines, they certainly aren’t familiar with a wiki.

Instead of in-page search result entries, Google, Yahoo, MSN–any search engine–should just provide a sidebar link to the relevant Wikipedia entry, with a note and a disclaimer about Wikipedia being a user-driven data source, and about how one should not assume the site has the definitive answer on any topic. Perhaps a link to a “What is Wikipedia?” FAQ would be a good idea.

Once sidebarred, don’t include Wikipedia in any search mechanism, period. Don’t ‘read’ its pages for links, and discard any links to its pages.

Wikipedia is now one of those rare sources on the web that has a golden door. In other words, it doesn’t need an entry point through a search engine for people to ‘discover’ it. If anything, its appearance in search engine results is a distraction. It would be like Google linking to Yahoo’s search results for a term, or Yahoo linking to Google’s: yeah, we all know they’re there, but show me something new or different.

More importantly, Wikipedia needs a “Search Engine General’s” warning sticker attached to it before a person clicks that link. If it continues to dominate search results, we may eventually get to the point where all knowledge flows from one source, and everyone, including the Wikipedia folks, knows that this is bad.

This also solves the problem of Wikipedia being a black hole, as well as the giving and taking of page rank: just remove it completely from the equation, and the issue is moot.

I think Wikipedia is the first non-search-engine internet source that truly doesn’t need search engines to be discovered. As such, a little sidebar entry for the newbies, properly annotated with a quiet little “there be dragons here” warning, would eliminate the spam problem while not adding to a heightened sense of distrust of Wikipedia’s actions.

One other thing worth noting is seomoz.org’s note about a link in Wikipedia enhancing one’s authority: again, putting a relevant link to Wikipedia in the search engine sidebars, with a link to a “What is Wikipedia?” FAQ page as well as the dragon warning, will help to ‘lighten’ some of the authority attached to having a link in Wikipedia. Regardless, I defer to Philipp’s assertion that Wikipedia is self-healing: if a link really isn’t all that authoritative, it will be expunged.