Brave New World

What is going to be the future of connectivity? What is the Brave New World of the Internet going to be?

Is it going to be a system of services linked together through one centralized (but benevolent) agency? Need a service? Want to sell a service? Check into the Agency, the Agency will take care of you. Oh, by the way, you need to add this to your machine. And you need to give us this information.

And you need to understand that we know what’s best for you…and you have no choice, anyway, do you?

Or is it going to be a brave new world of content publishing and subscription?

You sitting at home passively on your machine hooked up as a dying man is hooked up to a heart machine, each beat a pulse from the great wire, delivering you all the information fit to print, at least fit enough to survive the filters.

You sit and add your own beat, with perhaps an accompaniment of a pat on the head, job well done. Why seek? Why search?

Now, just put that finger on that mouse and click those checkboxes and yes, we’ll take care of you because we know what’s best for you…and you have no choice, anyway, do you?

Put your mouth to the nipple and prepare to be fed.

A brave new world.

Connecting to the void you send tendrils out seeking others of like mind, or not, occasionally bumping into something new or unexpected in your search.

Two paths open for every path that closes, and the only locked door you find is standing alone with no walls around it. You laugh into the void as you walk past the door, continuing on your journey of discovery.

Blogging as Journalism and other modern myths

I’m not sure if webloggers buy into the whole “weblogging as a new and better form of Journalism” idea because they truly see themselves in this light, or because they seek some form of justification for all the time they spend weblogging.

People can call themselves whatever they want in their weblogs; their space, their place. However, when they start taking themselves seriously and think of themselves as pioneering personal Journalists in a brave new World Media, then I beg leave to differ. Weblogging is not a replacement for mainstream media. Weblogging is not a replacement for traditional news sources. Weblogging is not capital ‘J’ Journalism.

While it’s true that webloggers can be first at a story, being first doesn’t make a person a Journalist; it just makes them lucky. In some cases, it makes them unlucky.

Webloggers can also provide a personal perspective of an event, background color if you will; supplying nuances the dry recital of fact doesn’t provide. But webloggers don’t have access to the resources that make up a story, that form what we call “news”.

Ultimately the difference between webloggers and Journalists is that Journalists have an obligation to provide the facts, all the facts. To assist them in their effort, they’re given access to resources and information most of us do not have. And with this access comes a responsibility.

In our weblogs, we hold to our own moral code of what we consider responsible writing; we can say what we think and feel, issuing compliment or slander with impunity and disregard for consequences.

The Journalist, though, is held not only to their own code, but to their editor’s, their publication’s, their peers’, the code of the law, and, ultimately, their readers’ codes. And if they slander without fact, they risk loss of respect, at best, and a lawsuit at worst. If they tell only half the story, they are condemned and censured when the full truth is told.

Tuesday, in an article titled Blogosphere: the emerging Media Ecosystem, John Hiler wrote:

Because of these limited resources, many have charged Traditional Media with a consistent bias that fails to reflect the diversity of opinions and ideas. About half the email I get on this subject claims that bias is a Liberal one, while the other half claims it’s a decidedly Conservative one. Either way, there is a strong sense from some readers that Media organizations have a mixed record when it comes to accurately and fairly reporting the News.

Many people are looking to weblogs to help address this media bias.

Using weblogging to address media bias. I almost fell over laughing when I read this. But I sobered as Hiler entered into a discussion about the impact webloggers such as Glenn Reynolds and Meryl Yourish had on the recent clash between pro-Palestinian/pro-Israel protestors at SFSU (summarized at another weblog).

Hiler congratulates Reynolds and Meryl and others for bringing this breaking news to the attention of the mainstream media, to Journalism:

As Meryl and others broke the story, other mainstream outlets followed the story across the Breaking News – Analysis – Op-Ed continuum.

Hiler also quotes Reynolds:

As Glenn explained, “Sometimes a story will streak across the Blogosphere like a prairie fire. Weblogs can be the dry grass, helping to spread the story.” But interestingly, some stories don’t make the leap from weblogs to mass media articles precisely because they’ve been so widely blogged. As he put it, “Journalists will sometimes drop a story idea because they’ve already been so well covered in weblogs.”

Weblogging: a thousand points of news.

If the concept of noble weblogger as Journalist is true, then I’m curious as to why there isn’t weblogger follow-up to the SFSU story. For instance, why is there no weblogger coverage of the fact that the college referred students to the DA for prosecution for hate crimes? After all, this is news, too.

In fact, Big Media – that same biased Big Media – printed the story, as seen in:

SF Gate

The PIXPage

A SFSU news release

Mercury News

SFSU’s web site created to address the issue, including a summary of the events

However, when I looked for this story in weblogs such as Meryl’s and Glenn Reynolds’, I didn’t find one mention of this information. Why was this?

Is it because recent facts have emerged, such as the fact that both pro-Palestinian and pro-Israeli students have been referred to the DA for hate crimes? Is it because there were pro-Palestinian people working to control members of their protest, trying to keep the demonstration peaceful?

Is it because in this fight, no one was entirely on the side of angels, and no one was entirely dancing with the devil?

Weblogger as Journalist. Yeah. Right.

It’s time we put the story of Weblogger as Journalist on the shelf next to stories of Bigfoot and Ogopogo and the other great myths of our time.

Defining P2P

In P2P, a peer both provides and consumes services. A group of peers can then provide and consume services to and from each other without dependence on any one server. With this understanding, there’s an assumption that this consumption and distribution occurs when the peer is connected.
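
To make that dual role concrete, here is a minimal sketch in Python of a node that both registers services for other peers to call and consumes services from them. The Peer class, its method names, and the echo and time services are all invented for illustration; no particular P2P toolkit is implied.

```python
# A minimal illustration of a node that both provides and consumes services.
# Names (Peer, provide, call) are hypothetical, not from any real toolkit.

class Peer:
    def __init__(self, name):
        self.name = name
        self.services = {}          # service name -> handler function

    def provide(self, service, handler):
        """Register a service this peer offers to others."""
        self.services[service] = handler

    def handle_request(self, service, payload):
        """Called when another peer consumes one of our services."""
        handler = self.services.get(service)
        if handler is None:
            raise LookupError(f"{self.name} does not provide {service!r}")
        return handler(payload)

    def call(self, other, service, payload):
        """Consume a service on another peer; only works while both are connected."""
        return other.handle_request(service, payload)

# Two peers, each serving and consuming, with no server in between.
alice, bob = Peer("alice"), Peer("bob")
alice.provide("echo", lambda msg: f"alice echoes: {msg}")
bob.provide("time", lambda _: "bob says it's noon")

print(bob.call(alice, "echo", "hello"))   # bob consumes alice's service
print(alice.call(bob, "time", None))      # alice consumes bob's service
```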

Within some P2P enabled applications, the communication may be cached or queued when the peer is not connected. I know this is the way Groove works.

Within Freenet, any one of the nodes within the network can consume or supply files. But if a peer is not connected, it’s not part of the network, it isn’t a participant and files are consumed and supplied through other participants. Either you’re a peer, or you’re not. Again, the assumption of 24-hour access is not a factor.

Some systems support a hybrid cloud whereby service requests are cached at a remote location (usually hidden from the peer), waiting for the other peer to connect. When the other peer connects, the communication is concluded. The results of the service call can then be communicated back to the originating peer, or cached itself if the originating peer is offline.
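
A rough sketch of that hybrid cloud idea, under the assumption of a single relay that queues messages addressed to offline peers and flushes them on reconnect. This is not a description of how Groove or any other product actually implements it; the Relay class and the message format are made up.

```python
from collections import defaultdict

# Hypothetical relay ("cloud") that caches requests for offline peers.
class Relay:
    def __init__(self):
        self.online = {}                      # peer id -> callable inbox
        self.pending = defaultdict(list)      # peer id -> queued messages

    def connect(self, peer_id, inbox):
        """Peer comes online; flush anything queued while it was away."""
        self.online[peer_id] = inbox
        for msg in self.pending.pop(peer_id, []):
            inbox(msg)

    def disconnect(self, peer_id):
        self.online.pop(peer_id, None)

    def send(self, peer_id, msg):
        """Deliver immediately if the peer is connected, otherwise cache."""
        if peer_id in self.online:
            self.online[peer_id](msg)
        else:
            self.pending[peer_id].append(msg)

relay = Relay()
relay.send("bob", "request: latest draft")            # bob is offline, so it's cached
relay.connect("bob", lambda m: print("bob got:", m))  # delivery happens on reconnect
```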

In a true P2P system, any one of the peers within the network could act as a cloud (intermediary) for other peers. Within a hybrid system, such as Groove, the system itself might provide these types of intermediary services.

As for firewall issues, most P2P tools can work from within firewalls, or be made to work within firewalls.

Next Generation P2P?

John Robb at Userland has defined a set of constraints for what he considers to be the next generation of P2P. I appreciate that he’s put Userland architecture interests online — it generates conversation. However, I am concerned about the interpretation of “P2P”, for what is, essentially, a lightweight server system.

Requirement one: The ability for individual users to create subnets where authorization is required before use is enabled.

It’s interesting that people talk about sub-nets and authorization. For true P2P security, the same rules of trust and security must be established with all peers, sub-net participants or not. Rather than create new authentication and security for each individual sub-net, the same security mechanisms and trust definitions must apply to all P2P nodes. Otherwise, any one P2P node that’s on a wire with physical access to the secure sub-net is a point of vulnerability. And I guarantee that there will be one node that’s connected to the Internet, making all nodes insecure.

However, applying security measures across all possible P2P nodes is going to be a burden on a system — security takes bandwidth. And that’s not the biggest issue — security within P2P nodes implies control. Most forms of authentication and authorization are based on these functions being provided by a central server.

As we’ve seen recently with Morpheus, central points of entry make a P2P system vulnerable.

If this issue is straight user signon and authorization to access services, then you’re not talking about P2P — you’re talking about a more traditional server/client application. A true P2P system must have a way for each peer to establish a secure connection and determine identity and accessibility without reliance on any specific server.
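
One way (certainly not the only way) a peer could establish identity without any specific server is a self-certifying ID: the peer’s identifier is a hash of its own public key, and ownership is proved by signing a random challenge from the other peer. The sketch below assumes the PyNaCl library for signatures; the peer_id helper and the sixteen-character ID length are arbitrary choices for illustration.

```python
import hashlib, os
from nacl.signing import SigningKey, VerifyKey

def peer_id(verify_key: VerifyKey) -> str:
    # Self-certifying ID: a hash of the public key, so no registry has to vouch for it.
    return hashlib.sha256(verify_key.encode()).hexdigest()[:16]

# Peer A generates its keypair once; the public (verify) key travels with the ID.
a_signing = SigningKey.generate()
a_verify = a_signing.verify_key
a_id = peer_id(a_verify)

# Peer B challenges A: sign this random nonce to prove you hold the key behind the ID.
nonce = os.urandom(32)
signed = a_signing.sign(nonce)

# B checks that the key hashes to the claimed ID and that the signature verifies.
claimed_key = VerifyKey(a_verify.encode())
assert peer_id(claimed_key) == a_id
claimed_key.verify(signed)   # raises nacl.exceptions.BadSignatureError on forgery
print("peer", a_id, "proved its identity with no server involved")
```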

Yeah. “Gack” is right.

Requirement two: The ability to publish structured content such as a complete web site or web app to a multi-million person network without flooding the publisher’s PC.

I know where this one is going, and I’m sorry, but this is based on a flawed vision: pushing content out to an individual client rather than having the client connect to a centralized source. In addition, this isn’t really a requirement for P2P, but a specific application’s functional need. It’s important to keep the two separate as we discuss the requirement in more detail.

At its simplest, published content is nothing more than files, and any P2P file system will work, including Freenet and Gnutella. But in reality, with published content we’re talking about structure as well as files. In addition, published content implies an ability to access and re-access the same publication source again and again in order to get fresh content.

Traditional P2P file transfer systems are based on the concept that you’re after a specific resource, a single item — you don’t care where you get it. For published content, the source is a key factor in the peer connection.

As for the issues of scalability, again, traditional P2P networks don’t have an answer that will work for this requirement because of that single point of content. This would be equivalent to a Gnutella network where only one node on that network has Michael Jackson’s Thriller. As relieved as we are about this, it does put some serious limitations on a P2P-based resource system.

However, once we get beyond the stretch to the P2P paradigm this requirement necessitates, the same concepts of store and forward of Freenet could work for this requirement, except that you’re not talking about intermediate nodes storing an MP3 file — you’re talking about the possibility of massive amounts of information being dumped on each individual intermediate node.

The only way for this to work would be to stripe the material and distribute the content on several nodes, basically creating a multi-dimensional store and forward. Ugh. Now, what was the problem with the web?
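
To make the striping idea a little more concrete: break the published bundle into chunks and hand each chunk to a few nodes, so no single intermediate node carries the whole publication. This is a toy sketch, not a real protocol; the node names, chunk size, and replication factor are invented.

```python
import hashlib

# Toy content striping: split a publication into chunks and assign each chunk
# to a few nodes, so no single node holds the whole thing.
def stripe(content: bytes, chunk_size: int = 8):
    return [content[i:i + chunk_size] for i in range(0, len(content), chunk_size)]

def assign(chunks, nodes, replicas: int = 2):
    placement = {}
    for index in range(len(chunks)):
        # Deterministic placement: hash the chunk index to pick a starting node.
        start = int(hashlib.sha256(str(index).encode()).hexdigest(), 16) % len(nodes)
        placement[index] = [nodes[(start + r) % len(nodes)] for r in range(replicas)]
    return placement

nodes = ["node-a", "node-b", "node-c", "node-d"]
chunks = stripe(b"a whole weblog, striped across the network")
for index, holders in assign(chunks, nodes).items():
    print(f"chunk {index} ({chunks[index]!r}) -> {holders}")
```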

Requirement three: The ability to connect subscribed users in a given subnet to each other via Web Services in order to enable a new class of applications that share information (but don’t utilize centralized resources).

The whole principle behind P2P is connecting peers to each other. However, maintaining a true connection in order to successfully conduct a transaction is the key. I once wrote up the following functionality needed for a P2P transaction:

Transaction reliability — the old two-phase commit of database technology appears again, but this time in a more challenging guise (a rough sketch of this follows the list).

Transaction auditing — a variation of the two-phase commit, except that auditing is, in some ways, more of the business aspect of the technology.

Transaction security — we need to ensure that no one can snoop at the transaction contents, or otherwise violate the transaction playing field.

Transaction trust — not the same thing as security. Transaction trust means that we have to ensure that the P2P service we’re accessing is the correct one, the valid one, and that the service met some business trust criteria (outside of the technology realm with the latter).

Service or Peer discovery — still probably one of the more complicated issues about P2P. How do we find services? How do we find P2P circles? How do we market our services?

Peer rediscovery — this is where the iron hits the cloud in all P2P applications I know of. You start a communication with another peer, but that peer goes offline. How do you take up the conversation again without the use of some centralized resource? Same could also be applied to services.

Bi-directional communication — This is a reference to HTTP’s asymmetric nature. Peers share communication; otherwise you’re only talking about the traditional web services model.
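
As promised in the first item above, here is roughly what transaction reliability looks like when one peer plays coordinator in a plain two-phase commit. Everything is hypothetical and in-memory, and the genuinely hard part of the list — peers vanishing between the prepare and commit phases — is exactly what this sketch glosses over.

```python
# A bare-bones two-phase commit between peers, with one peer acting as coordinator.
# All names are invented; a real P2P transaction has to survive peers dropping
# offline mid-transaction, which this sketch does not attempt.

class TxPeer:
    def __init__(self, name, will_accept=True):
        self.name = name
        self.will_accept = will_accept
        self.staged = None
        self.committed = []

    def prepare(self, change):
        """Phase one: stage the change and vote yes or no."""
        if not self.will_accept:
            return False
        self.staged = change
        return True

    def commit(self):
        """Phase two: make the staged change permanent."""
        self.committed.append(self.staged)
        self.staged = None

    def abort(self):
        self.staged = None

def two_phase_commit(coordinator, peers, change):
    votes = [peer.prepare(change) for peer in peers]
    if all(votes):
        for peer in peers:
            peer.commit()
        return f"{coordinator} committed {change!r} on all peers"
    for peer in peers:
        peer.abort()
    return f"{coordinator} aborted {change!r}"

peers = [TxPeer("alice"), TxPeer("bob"), TxPeer("carol", will_accept=False)]
print(two_phase_commit("alice", peers, "update shared document"))  # aborts: carol votes no
```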

Interesting challenge. As far as I know, no one has met it yet…at least nothing that can handle complex data with a single point of origin.

Outside of the listed requirements, John argues that next generation P2P systems need some form of development environment. He states, “Notice, that in this system, the P2P transport is important but generic — it is just a pipe.” He also says “… this system it doesn’t have to be completely decentralized to avoid legal action.”

Last time I looked, decentralization was the basis of P2P. And can we all forget the damn copyright issues for once and focus on what P2P was meant to be: total enablement of each node within the Internet?

John, you have specified requirements of which some, but not all, can be met by P2P-based functionality. Let me emphasize that “some but not all” response again.

You’re really not packaging requirements for the next generation of P2P systems; what you’re packaging is the requirements for “Next Generation Radio”. It’s important not to confuse this with what’s necessary for P2P systems.

I am Superwoman. What makes me Superwoman? Because I meet all the requirements for being Superwoman. And what are the requirements for being Superwoman?

Being me.

It just doesn’t work that way.

UDDI and Discovery

Questions:

How do you compare UDDI to other methods of discovering networked resources (which may or may not be web services)?

What’s the difference between a global UDDI registry and…
– Google: controlled by a single organization
– dmoz.org: open, and replicated by other search engines
– DNS: governed by ICANN, but organizations can apply to be registrars
– others?

Do the above services have the same weakness you attribute to a UDDI global registry?

In some ways, we’re talking apples, oranges, cherries, and perhaps some peaches. They’re all fruit, but the similarity ends at that point.

UDDI is a centralized discovery service managed by a consortium of organizations, the content of which may or may not be striped across several different servers. Information is added to the repository through submissions from those with services to provide.

Google is a discovery service that is also centralized under one authority but uses many different methods to discover information including automated agents (bots), subscription to other services (such as dmoz) and manual intervention.

Google, though, has an interesting twist to its discovery mechanism: it has a set of algorithms which are constantly evaluating and merging and massaging its raw data in order to provide additional measurements, ensuring higher degrees of accuracy and recency. The discovery of data is never the same two times running within a collection period.

The dmoz directory is a great open source effort to categorize information intelligently. In other words, data is manually added to the directory and categorized by people. This makes the directory extremely efficient when it comes to human interpretation of data. You might say that with dmoz, the “bots” are human. Get the world involved and you have a high level of intelligent categorization of data. The only problem, though, is that human interpretation of data is just as unreliable as a mechanical interpretation at times.

However, dmoz is probably the closest to UDDI of the network discovery services you’ve listed primarily because of this human intervention.

Finally, DNS. DNS does one thing, and as pissy as people are about it, it does that one thing reasonably well. The web has been able to grow to huge proportions with something like DNS handling the naming and location of resources.

In some ways, DNS is closest to what I consider an iron-free cloud if you look at it from an interpretation point of view (not necessarily implementation). You have all these records distributed across all these authoritative servers providing a definitive location of a resource. Then you have these other servers that basically do nothing more than query and cache these locations, making access to these resources quicker and the whole framework more scalable.

In some ways, I think UDDI is like DNS, also. You can have UDDI records distributed across different servers to make service lookup more efficient and to make the whole process more scalable.

This same approach also happens with Circle, Chord, and Freenet if you think about it (the whole store and forward, query and cache at closer servers or peers so that the strain of the queries isn’t channeled to a few machines).
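
For the curious, the query-and-cache pattern in the abstract looks something like this: a node answers from its own records, then its cache, and only then forwards the query upstream, remembering the answer on the way back so later lookups never reach the authoritative source. The classes and the sample record are invented; this is the shape of the idea shared by DNS resolvers and, loosely, a distributed registry, not any one implementation.

```python
# Toy query-and-cache lookup: answer locally, then from cache, then forward
# upstream, caching the result so the authoritative node isn't hit every time.

class LookupNode:
    def __init__(self, name, records=None, upstream=None):
        self.name = name
        self.records = records or {}   # authoritative entries held by this node
        self.cache = {}                # answers learned from upstream queries
        self.upstream = upstream

    def resolve(self, key):
        if key in self.records:
            return self.records[key], f"answered by {self.name} (authoritative)"
        if key in self.cache:
            return self.cache[key], f"answered by {self.name} (cached)"
        if self.upstream is None:
            raise KeyError(key)
        value, source = self.upstream.resolve(key)
        self.cache[key] = value        # cache on the way back down
        return value, source

root = LookupNode("authoritative", {"stock-quote-service": "peer://10.0.0.7"})
edge = LookupNode("local-resolver", upstream=root)

print(edge.resolve("stock-quote-service"))   # first lookup walks up to the root
print(edge.resolve("stock-quote-service"))   # second lookup is served from cache
```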

UDDI is like DNS for another reason: controlling organization and potential political problems. ICANN hasn’t had the best rep managing the whole DNS/registrar situation. In particular, you should ask some of the Aussie ISPs what they think of the whole thing. They’ve had trouble with ICANN in the past.

All of the services share one common limitation: they all have hardcoded entry points, and all have some organization as a controller. I don’t care how altruistic the motives, there is a controlling body. There’s iron in all the approaches. All of them.