Categories
Standards Technology

What do you want from digital identity?

Recovered from the Wayback Machine.

I removed the last paragraph from my last posting. It added nothing to the discussion and was unnecessarily snarky. Still, doing so doesn’t affect the message threaded throughout the post: that I’m not supportive of universal (read that ‘federated’) digital identities.*

I don’t believe there is a system that can’t be cracked. What I do believe is that there is a tradeoff between the willingness to spend time and energy in cracking a system, and how universally it’s used. One overall, agreed on universal digital identity system that every major financial, economic, government player has bought into seems to me to be a mighty big target. It’s not so much that it represents a widely used identity infrastructure; it’s that behind the infrastructure is some very tasty data.

Additionally, I’m not sure that there is demand for this type of overall identity. In the midst of these discussions, Johannes Ernst posed the question: why do we want digital identity? Is it for seamless enterprise-wide access? Is it to facilitate commerce? Is it to eliminate the existing highly fractured state of security, with implementations that range from heavily robust to wide open?

I personally favor the concept of ‘single sign-on’, where I can use the same name and password at different sites, without having to re-input my contact information, and without having to remember different connection information for each. Even then, I would most likely only use something like this with sites where the cost of exposure of the data is minimal. Though it would be tempting to store my credit card on my machine, and have a remote system handshake with my local computer to exchange the information without my involvement, I don’t find having to re-input the data with each purchase to be an overwhelming burden. Certainly not to the point of storing this information on my machine, whether it is my dual Windows/Linux machine or my Mac.

Work on enhancing the security of our data exchanges is a good thing; but the farther from my machine I can store sensitive data, the happier I’ll be. In this discussion, rather than focus on separating the specification of a security infrastructure from the implementation, I’d rather discuss separating the storage of the data from the transport.

(Of course, some companies require that you store your credit card information on their machines, but I don’t know how something like InfoCards would eliminate this, unless part of the architecture also provides an ‘on-demand’ request for the card information from the site back to us.)

To answer his own question, Johannes sees digital identity as a way of empowering people:

So let me tell you what excites me about Digital Identity: it is the transformational power that Digital Identity can bring — assuming it is done right — to empower individuals and groups in ways that are highly desirable but impossible without. Or, in plain language: the new products and features that only can be built with Digital Identity and will be built as soon as we have it. And we will never look back.

I thought, at first, that this had to do with authenticity, with establishing that we’re who we say we are. However, from the examples Johannes lists, this doesn’t seem to be the case. He points to examples such as Marc Canter’s digital lifestyle aggregation, where all of our digital devices work out how to integrate themselves; and Johannes’ own company’s software, which “…is aware of the user’s immediate situation, and proactively supports them in that situation, instead of being just able to offer a bunch of remote websites that are very clueless about the user and thus not very helpful.”

I don’t know that we need digital identity for the latter. I have extensions added to my browser that let me know when a site has RDF/XML I can examine, or a syndication feed I can link to. I’d rather passively put easily discoverable information out on a site using established ‘hooks’, and then use generic discovery tools to find this data elsewhere, than build something into these tools reflecting my identity.

I don’t think the power of the internet is based on the concept that eventually, everyone will know your name. I think it’s based on the fact that everyone doesn’t know your name.

*There goes my planetary status on Planet Identity, I imagine.

Categories
Specs

Google doesn’t REST

Thanks to Sam Ruby for the heads-up on a potentially nasty problem with Google’s new Web Accelerator and badly designed REST applications. He linked to two sites that go into the details. The short version: users of a specific web service were finding that they were losing data, and after investigation, the service discovered that the Web Accelerator was the culprit.

The Web Accelerator is one of Google’s newest releases, and is supposed to help with the server-side backlogs that can occur when you’re accessing a site over a faster DSL or cable connection. How it works is that when you navigate to a page, it ‘pre-fetches’ information by following all of the links on that page and, it would seem, caching the results.

All dandy (if confusing: we asked for this?) except for one thing: since this little deskside bot operates on a page under your name and authority at whatever site you’re visiting, if you’re using a web application that has links that do things such as delete a web page or make some other form of update, the bot is just as happy clicking those links as not. Even if there is a JavaScript alert that asks, “Are you really sure you want to do that?”, it manages these behind the scenes. Before you even know what’s happened, your data is gone.
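The failure mode is easy to sketch. Imagine an accelerator that naively follows every link it can see on a page, using the user’s own session; the markup and URL pattern below are hypothetical, but the behavior matches what the affected service reported:

```python
import re

def prefetch(page_html, fetch):
    """Naively follow every link on a page, the way a prefetching
    accelerator might, under the user's own authority."""
    for url in re.findall(r'href="([^"]+)"', page_html):
        fetch(url)  # a link like /entries/delete?id=3 gets 'clicked' too

# A page whose links include a destructive GET action:
page = '<a href="/entries/3">View</a> <a href="/entries/delete?id=3">Delete</a>'
visited = []
prefetch(page, visited.append)
# Both links were fetched, including the delete.
```

Nothing in the markup tells the bot that one of those links destroys data; to a prefetcher, a link is a link.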

Leaving aside that perhaps this won’t rival Google Maps for being handy, this bot does prove out a problem that people like Sam have been pointing out for some time: we’re using REST incorrectly, and because of this, we’re going to get bit in the butt some day.

Well, there’s a rottweiler hanging off some asses now, and it has “Google” on its name tag. (Uh, no metaformat pun intended.)

REST is an extremely simple style of web application architecture, and is what I’m using for my metadata layer in Wordform. Before I started implementing it, I researched what is needed for an application to be RESTful, and the primary constraint is knowing when to use GET and when not to use GET. Really, really knowing when not to use GET.

You’re familiar with GET operations. In simplest form, when you search in a search engine, or access a weblog entry and there’s a URL with a bunch of parameters attached, that’s a GET request being made to the server. In this case, the parameters are passed as part of the URL.
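A search is the classic case: the parameters ride along in the URL itself. A minimal sketch, using Python’s standard library (the site and parameter names are made up):

```python
from urllib.parse import urlencode

# Build a GET request URL: the parameters are visible in the URL itself.
params = {"q": "digital identity", "page": 2}
url = "http://example.com/search?" + urlencode(params)
# url is now "http://example.com/search?q=digital+identity&page=2"
```

Run the same URL a hundred times and the data on the server is unchanged; that is what makes it safe for anything, human or bot, to follow.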

Lots of applications have been using GET not only to fetch information, but also to create or remove resources or make updates. What they should be doing is using methods other than GET, because this HTTP request type is only supposed to be used for operations that don’t have side effects. In other words, you can invoke the same request again and again, and nothing will happen to the data with each iteration. Because of this, it’s also an operation that usually isn’t heavily protected, other than perhaps a login being required to access the page or service. To the non-tech, it’s a link.

For operations that change data, we should be using POST, PUT, and DELETE. These requests are different in that all three have side effects: a POST is used to create a resource, a PUT to update (or replace) it, and a DELETE to remove it.
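One way to picture the division of labor is as a dispatch table over a resource store, where GET is the only method that leaves the data untouched. This is a toy sketch, not any particular framework’s API:

```python
resources = {}

def handle(method, resource_id, body=None):
    """Map the four HTTP methods onto a resource store.
    GET is the only method with no side effects."""
    if method == "GET":
        return resources.get(resource_id)   # read-only: safe to repeat
    if method == "POST":
        resources[resource_id] = body       # create
    elif method == "PUT":
        resources[resource_id] = body       # update / replace
    elif method == "DELETE":
        resources.pop(resource_id, None)    # remove

handle("POST", "entry1", "first draft")
handle("PUT", "entry1", "final text")
handle("GET", "entry1")     # returns "final text"; repeatable without harm
handle("DELETE", "entry1")
handle("GET", "entry1")     # returns None
```

A prefetching bot that only ever issues GETs can hammer this store all day and never change it; the damage comes when a destructive action is hung off a GET instead.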

These types of operations are associated with a certain sequence of events: you click some kind of Submit button, and usually another page or alert box opens that asks if you’re really sure you want to do this; you don’t see the parameters, and you don’t necessarily even see the service the request went to before a response page is shown. They are not links, and they don’t have the same global accessibility that a link has.

More importantly, potentially destructive agents such as Google’s Web Accelerator can’t do damage when you use the four methods correctly.

Today I tried to run my Flickr metadata update application on my new weblog post, and the Flickr API is not responding. Since Flickr is not using REST correctly (it’s using GET operations for events that have side effects), I am assuming that the web service is offline while the folks there work on this. I haven’t been able to find an update anywhere on this, though, so this is my assumption only. Since I’m only using Flickr for fetches, hopefully this won’t mean I have to change my code.

As for my metadata layer: it’s not as open as some of the applications that have used all GETs, but it isn’t fully RESTful either, which is why it won’t be released until it is, not because Google has released such a potentially harmful application. To be honest, though, my own pride should demand that if I’m going to use a specific protocol, I use it correctly.

Bottom line: Do not use Google Web Accelerator unless you know all web service applications you use are fully RESTful. If you do, you’ll most likely be unhappy as you watch your data disappear.

Two excellent articles on how REST works: Joe Gregorio’s How to Create a REST Protocol and Dare Obasanjo’s Misunderstanding REST: A look at the Bloglines, del.icio.us and Flickr APIs.

Oh, and don’t miss Phil’s Launch the Nuclear Weapon.

Categories
Social Media Specs

I broke Nofollow

I’m still trying to write something on Technorati Tags. What’s slowing me up is that there’s been such a great deal of interesting writing on the topic that I keep wanting to add to what I write. And, well, the weather warmed up to the 60s again today, and who am I to reject an excuse for a nice walk? Plus, I also watched Japanese Story tonight, so there went yet more opportunity to write to this weblog.

Thin excuses for sloth and neglect aside, it is interesting that a formerly obscure and rarely used (X)HTML attribute, rel, has been featured in two major technology rollouts this week: Technorati Tags and the new Google “nofollow” approach to dealing with comment spam. Well, as long as they don’t bring back blink.

Speaking of the new spam buster: after much thought, I’ve decided not to add support for rel=”nofollow” to my weblogs. I agree with Phil, and believe that, if anything, there’s going to be an increase in comment spam, as spammers look to make up whatever page rank is lost from this effort. And they’re not going to test whether this is implemented; why should they?
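For what it’s worth, the change itself is trivial, which is part of why it spread so fast. A commenting system might tack the attribute onto comment links with something like this; a naive sketch only, since real code would merge with existing rel values and use a proper HTML parser rather than a regular expression:

```python
import re

def add_nofollow(html):
    # Add rel="nofollow" to any anchor that doesn't already carry a rel attribute.
    return re.sub(r'<a\b(?![^>]*\brel=)', '<a rel="nofollow"', html)

add_nofollow('<a href="http://example.com/">spam?</a>')
# -> '<a rel="nofollow" href="http://example.com/">spam?</a>'
```

The point of the attribute is that search engines agree to skip such links when computing rank; the markup change itself does nothing unless everyone honors it.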

But I am particularly disturbed by the conversations at Scoble’s weblog with regard to ‘withholding’ page rank. Here’s a man who, for one reason or another, has been linked to by many people, and now ranks highly because of it: in Google, Technorati, and other sites. I imagine that among those who link, there were many who disagreed with him at one time or another, but they’re going to link anyway. Why? Because they’re not thinking of Google and ‘juice’ and the withholding or granting of page rank when they write their response. They’re focusing on what Scoble said and how they felt about it, and they’re providing the link and the writing to their readers so that those readers can form their own opinion. Probably the last thing they’re thinking about is the impact of the link on Scoble’s rank.

Phil hit it right on the head when he talked about nofollow’s impact: not its impact on the spammers, but its impact on us:

But, again, it’s not so much the effects I’m interested in as the effects on us. Will comments wither where the owner shows that he finds you no more trustworthy than a Texas Hold’em purveyor, or will they blossom again without the competition from spammers? Will we do the right thing, and try to find something to link to in a post by someone new who leaves a comment we deem not worthy of a real link, or will new bloggers find it that much harder to gain any traction?

That Phil, he always goes right to the heart of the technology. But blinking lime green? That’s cruel.

No, no. I don’t know about anyone else, but I’ve spent too much time worrying about Google and pageranks and comment spammers. A few additions to my software, and comment spam hasn’t been much of a problem, not anymore. I spend less than a minute a day cleaning out the spam that’s collected in my moderated queue. It’s become routine, like clearing the lint out of the dryer after I finish drying my clothes.

Of course, if I, and others like me, don’t implement “nofollow”, we are, in effect, breaking it. The only way this can be effective as a spam prevention technique is if everyone uses the modification. I suppose that eventually we could fall into “nofollow” and “no-nofollow” camps, with those of us in the latter added to new whitelists, and every link to our weblogs annotated with “nofollow”, as a form of community pressure.

Maybe obscurity isn’t such a bad thing, though; look what all that page rank power does to people. But I do feel bad for those of you who looked to this as a solution to comment spam. What can I say but…

Categories
Specs Weblogging

The other shoe on nofollow

Recovered from the Wayback Machine.

I expected this reason to use nofollow to show up eventually, but in a few weeks at least, not on the first day. Scoble is happy about the other reason for nofollow: being able to link to something in your writing without giving ‘google juice’ to the linked.

Now, he says, I can link to something I dislike and use the power of my link in order to punish the linked, but it won’t push them into a higher search result status.

Dave Winer started this, in a way. He would drop sly hints about what people had said and done, leaving you knowing that an interesting conversation was going on elsewhere, but hearing only one side of it. When you’d ask him for a link so you could see other viewpoints, he would reply that “…he didn’t want to give the other party Google juice.” Now I imagine that *he’ll link with impunity, other than the fact that Technorati and Blogdex still follow the links. For now, of course. I imagine that within a week, Technorati will stop counting links with nofollow implemented. Blogdex will soon follow, I’m sure.

Is this so bad? In a way, yes, it is. It’s an abuse of the purpose of the attribute, which was agreed on to discourage comment spammers. More than that, though, it’s an abuse of the core nature of this environment, where our criticism of another party, such as a weblogger, came with the price of a link. Now even that price is gone.

*or not

Categories
Specs

Too late solutions

Recovered from the Wayback Machine.

Flash at 6: Google calls Dave Winer. Ooo. The suspense.

Per Sam Ruby:

Robert Sayre: I noticed that the links in his comment form have an interesting rel attribute.

Implemented. Prediction: that wouldn’t solve the problem.

I agree with Sam; this isn’t going to solve the problem. Gas station cash registers have signs saying that the attendants keep only $20.00 in cash on hand, but the stations still get robbed.

Still, I remember something like this being discussed before.

Waiting for more. I love surprises.