
Defining P2P

In P2P, a peer both provides and consumes services. A group of peers can then provide and consume services to and from each other without dependence on any one server. Implicit in this understanding is the assumption that consumption and distribution occur while the peer is connected.

Within some P2P-enabled applications, the communication may be cached or queued when the peer is not connected. I know this is the way Groove works.

Within Freenet, any one of the nodes within the network can consume or supply files. But if a peer is not connected, it’s not part of the network, it isn’t a participant and files are consumed and supplied through other participants. Either you’re a peer, or you’re not. Again, the assumption of 24-hour access is not a factor.

Some systems support a hybrid cloud whereby service requests are cached at a remote location (usually hidden from the peer), waiting for the other peer to connect. When the other peer connects, the communication is concluded. The results of the service call can then be communicated back to the originating peer, or cached itself if the originating peer is offline.
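Here's a minimal sketch of that store-and-forward idea. The class and method names are mine, hypothetical, and not any particular product's API:

```python
# Minimal sketch of the hybrid-cloud idea described above: an intermediary
# queues service requests for offline peers and delivers them on reconnect.
# All names here are hypothetical, not any specific product's API.
from collections import defaultdict, deque

class Relay:
    def __init__(self):
        self.online = set()                  # peers currently connected
        self.queues = defaultdict(deque)     # peer_id -> pending messages

    def connect(self, peer_id):
        """Mark a peer online and flush anything queued while it was away."""
        self.online.add(peer_id)
        pending = self.queues.pop(peer_id, deque())
        return list(pending)                 # delivered on reconnect

    def disconnect(self, peer_id):
        self.online.discard(peer_id)

    def send(self, sender, recipient, message):
        """Deliver immediately if the recipient is online, else cache it."""
        if recipient in self.online:
            return ("delivered", message)
        self.queues[recipient].append((sender, message))
        return ("queued", message)

# Usage: B is offline, so A's request waits at the relay until B connects.
relay = Relay()
relay.connect("A")
print(relay.send("A", "B", "service-request"))   # ('queued', ...)
print(relay.connect("B"))                        # [('A', 'service-request')]
```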

In a true P2P system, any one of the peers within the network could act as a cloud (intermediary) for other peers. Within a hybrid system, such as Groove, the system itself might provide these types of intermediary services.

As for firewall issues, most P2P tools can work from within firewalls, or can be made to work within them.


Next Generation P2P?

John Robb at Userland has defined a set of constraints for what he considers to be the next generation of P2P. I appreciate that he's put Userland architecture interests online — it generates conversation. However, I am concerned about the interpretation of “P2P” for what is, essentially, a lightweight server system.

Requirement one: The ability for individual users to create subnets where authorization is required before use is enabled.

It’s interesting that people talk about sub-nets and authorization. For true P2P security, the same rules of trust and security must be established with all peers, sub-net participants or not. Rather than create new authentication and security for each individual sub-net, the same security mechanisms and trust definitions must apply to all P2P nodes. Otherwise, any P2P node on a wire with physical access to the secure sub-net is a point of vulnerability. And I guarantee that there will be one node that’s connected to the Internet, making all nodes insecure.

However, applying security measures across all possible P2P nodes is going to be a burden on a system — security takes bandwidth. And that’s not the biggest issue — security within P2P nodes implies control. Most forms of authentication and authorization are based on these functions being provided by a central server.

As we’ve seen recently with Morpheus, central points of entry make a P2P system vulnerable.

If this issue is straight user sign-on and authorization to access services, then you’re not talking about P2P — you’re talking about a more traditional client/server application. A true P2P system must have a way for each peer to establish a secure connection and determine identity and accessibility without reliance on any specific server.
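One way to get identity without a server is to make identity self-certifying: a peer's ID is a hash of its public key, so any peer can verify any other with a signed challenge. A minimal sketch, assuming the third-party cryptography package; the protocol details here are mine, not any existing system's:

```python
# Sketch of serverless identity: a peer's ID is a hash of its public key,
# so any peer can verify identity with a signed challenge and no registry.
# Assumes the third-party "cryptography" package; the protocol is illustrative.
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

class Peer:
    def __init__(self):
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()
        raw = self.public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
        self.peer_id = hashlib.sha256(raw).hexdigest()   # self-certifying ID

    def sign(self, challenge: bytes) -> bytes:
        return self._key.sign(challenge)

def verify_peer(claimed_id, public_key, challenge, signature) -> bool:
    """Check the key matches the claimed ID, then check the signature."""
    raw = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
    if hashlib.sha256(raw).hexdigest() != claimed_id:
        return False
    try:
        public_key.verify(signature, challenge)
        return True
    except Exception:
        return False

# Usage: no server involved; the ID itself certifies the key.
alice = Peer()
challenge = os.urandom(32)
assert verify_peer(alice.peer_id, alice.public_key, challenge, alice.sign(challenge))
```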

Yeah. “Gack” is right.

Requirement two: The ability to publish structured content such as a complete web site or web app to a multi-million person network without flooding the publisher’s PC.

I know where this one is going, and I’m sorry, but this is based on a flawed vision: pushing content out to an individual client rather than having the client connect to a centralized source. In addition, this isn’t really a requirement for P2P, but a specific application’s functional need. It’s important to keep the two separate as we discuss the requirement in more detail.

At its simplest, published content is nothing more than files, and any P2P file system will work, including Freenet and Gnutella. But in reality, with published content we’re talking about structure as well as files. Published content also implies an ability to access and re-access the same publication source again and again in order to get fresh content.

Traditional P2P file transfer systems are based on the concept that you’re after a specific resource, a single item — you don’t care where you get it. For published content, the source is a key factor in the peer connection.

As for the issues of scalability, again, traditional P2P networks don’t have an answer that will work for this requirement because of that single point of content. This would be equivalent to a Gnutella network in which only one node has Michael Jackson’s Thriller. As relieved as we might all be about that, it does put some serious limitations on a P2P-based resource system.

However, once we get beyond the stretch this requirement puts on the P2P paradigm, the same store and forward concepts of Freenet could work for it, except that you’re not talking about intermediate nodes storing an MP3 file — you’re talking about the possibility of massive amounts of information being dumped on each individual intermediate node.

The only way for this to work would be to stripe the material and distribute the content on several nodes, basically creating a multi-dimensional store and forward. Ugh. Now, what was the problem with the web?
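For what it's worth, here's a toy sketch of that striping idea: content split into hash-addressed chunks, scattered across nodes, and reassembled from a manifest. Everything in it is hypothetical and illustrative, not a real protocol.

```python
# Sketch of the striping idea: split published content into fixed-size,
# hash-addressed chunks and scatter them across nodes, with a manifest
# that lets any peer reassemble. Purely illustrative, no real protocol.
import hashlib

CHUNK = 4096  # bytes per stripe; arbitrary for the sketch

def stripe(content: bytes, nodes: list) -> list:
    """Distribute chunks round-robin across nodes; return the manifest."""
    manifest = []
    for i in range(0, len(content), CHUNK):
        chunk = content[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        nodes[(i // CHUNK) % len(nodes)][digest] = chunk  # store on a node
        manifest.append(digest)
    return manifest

def reassemble(manifest: list, nodes: list) -> bytes:
    """Fetch each chunk from whichever node holds it, verifying the hash."""
    out = []
    for digest in manifest:
        chunk = next(n[digest] for n in nodes if digest in n)
        assert hashlib.sha256(chunk).hexdigest() == digest
        out.append(chunk)
    return b"".join(out)

nodes = [{}, {}, {}]                       # three toy "intermediate nodes"
manifest = stripe(b"x" * 10000, nodes)
assert reassemble(manifest, nodes) == b"x" * 10000
```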

Requirement three: The ability to connect subscribed users in a given subnet to each other via Web Services in order to enable a new class of applications that share information (but don’t utilize centralized resources).

The whole principle behind P2P is connecting peers to each other. The key, however, is maintaining a true connection long enough to successfully conduct a transaction. I once wrote up the following functionality needed for a P2P transaction:

Transaction reliability — the old two-phase commit of database technology appears again, but this time in a more challenging guise (see the sketch after this list).

Transaction auditing — a variation of the two-phase commit, except that auditing is, in some ways, more of the business aspect of the technology.

Transaction security — we need to ensure that no one can snoop at the transaction contents, or otherwise violate the transaction playing field.

Transaction trust — not the same thing as security. Transaction trust means that we have to ensure that the P2P service we’re accessing is the correct one, the valid one, and that the service meets some business trust criteria (the latter being outside the technology realm).

Service or Peer discovery — still probably one of the more complicated issues about P2P. How do we find services? How do we find P2P circles? How do we market our services?

Peer rediscovery — this is where the iron hits the cloud in all P2P applications I know of. You start a communication with another peer, but that peer goes offline. How do you take up the conversation again without the use of some centralized resource? The same could also be applied to services.

Bi-directional communication — This is a reference to HTTP’s asymmetric nature. Peers share communication; otherwise you’re only talking about the traditional web services model.
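To make the two-phase commit item concrete, here's a toy sketch of the coordination pattern. Real P2P two-phase commit has to survive peers vanishing mid-vote, which is exactly where it gets hard; this sketch, with names of my own invention, assumes everyone answers:

```python
# Toy two-phase commit across peers. Phase 1 collects votes; phase 2
# commits only if every peer voted yes. Each peer keeps a local log,
# which is also the seed of the "transaction auditing" item above.

class PeerNode:
    def __init__(self, name, will_commit=True):
        self.name, self.will_commit = name, will_commit
        self.log = []                        # local audit trail

    def prepare(self, txn):
        self.log.append(("prepare", txn))
        return self.will_commit              # vote yes or no

    def finish(self, txn, decision):
        self.log.append((decision, txn))

def two_phase_commit(peers, txn):
    """Phase 1: collect votes. Phase 2: commit only on unanimous yes."""
    votes = [p.prepare(txn) for p in peers]
    decision = "commit" if all(votes) else "abort"
    for p in peers:
        p.finish(txn, decision)
    return decision

peers = [PeerNode("A"), PeerNode("B"), PeerNode("C", will_commit=False)]
print(two_phase_commit(peers, "txn-42"))     # 'abort' because C votes no
```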

Interesting challenge. As far as I know, no one has met it yet… at least not with anything that can handle complex data with a single point of origin.

Outside of the listed requirements, John argues that next generation P2P systems need some form of development environment. He states, “Notice, that in this system, the P2P transport is important but generic — it is just a pipe.” He also says “… this system it doesn’t have to be completely decentralized to avoid legal action.”

Last time I looked, decentralization was the basis of P2P. And can we all forget the damn copyright issues for once and focus on what P2P was meant to be: total enablement of each node within the Internet?

John, you have specified requirements of which some, but not all, can be met by P2P-based functionality. Let me emphasize that “some but not all” response again.

You’re really not packaging requirements for the next generation of P2P systems; what you’re packaging are the requirements for “Next Generation Radio”. It’s important not to confuse this with what’s necessary for P2P systems.

I am Superwoman. What makes me Superwoman? Because I meet all the requirements for being Superwoman. And what are the requirements for being Superwoman?

Being me.

It just doesn’t work that way.


UDDI and Discovery

Questions:

How do you compare UDDI to other methods of discovering networked resources (which may or may not be web services)?

What’s the difference between a global UDDI registry and…
– Google: controlled by a single organization
– dmoz.org: open, and replicated by other search engines
– DNS: governed by ICANN, but organizations can apply to be registrars
– others?

Do the above services have the same weakness you attribute to a UDDI global registry?

In some ways, we’re talking apples, oranges, cherries, and perhaps some peaches. They’re all fruit, but the similarity ends at that point.

UDDI is a centralized discovery service managed by a consortium of organizations, the content of which may or may not be striped across several different servers. Information is added to the repository by submission from those with services to provide.

Google is a discovery service that is also centralized under one authority but uses many different methods to discover information, including automated agents (bots), subscription to other services (such as dmoz), and manual intervention.

Google, though, has an interesting twist to its discovery mechanism: it has a set of algorithms which are constantly evaluating and merging and massaging its raw data in order to provide additional measurements, ensuring higher degrees of accuracy and recency. The discovery of data is never the same two times running within a collection period.

The dmoz directory is a great open source effort to categorize information intelligently. In other words, the data is manually added to the directory and categorized. This makes the directory extremely efficient when it comes to human interpretation of data. You might say that with dmoz, the “bots” are human. Get the world involved and you have a high level of intelligent categorization of data. The only problem, though, is that human interpretation of data is at times just as unreliable as a mechanical interpretation.

However, dmoz is probably the closest to UDDI of the network discovery services you’ve listed primarily because of this human intervention.

Finally, DNS. DNS does one thing, and as pissy as people are about it, it does that one thing reasonably well. The web has grown to huge proportions with something like DNS handling the naming and location of resources.

In some ways, DNS is closest to what I consider an iron-free cloud, if you look at it from an interpretation point of view (not necessarily implementation). You have all these records distributed across all these authoritative servers providing a definitive location of a resource. Then you have these other servers that do nothing more than query and cache these locations, making access to the resources quicker and the whole framework more scalable.
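The query-and-cache pattern is simple enough to sketch. This toy resolver (my own names, toy data) keeps a TTL-stamped copy of each answer so repeat lookups never touch the authoritative source:

```python
# Sketch of the query-and-cache pattern described above: authoritative
# servers hold the definitive records, and caching resolvers keep copies
# (with a TTL) so repeat lookups don't hit the authorities. Toy data only.
import time

AUTHORITATIVE = {"example.com": "93.184.216.34"}     # the definitive records

class CachingResolver:
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.cache = {}                               # name -> (address, expiry)

    def resolve(self, name):
        entry = self.cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0], "cache"                  # cheap, local answer
        address = AUTHORITATIVE[name]                 # expensive, authoritative
        self.cache[name] = (address, time.time() + self.ttl)
        return address, "authoritative"

resolver = CachingResolver()
print(resolver.resolve("example.com"))   # ('93.184.216.34', 'authoritative')
print(resolver.resolve("example.com"))   # ('93.184.216.34', 'cache')
```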

In some ways, I think UDDI is like DNS, also. You can have UDDI records distributed across different servers to make service lookup more efficient and to make the whole process more scalable.

This same approach also happens with Circle, Chord, and Freenet, if you think about it: the whole store and forward, query and cache at closer servers or peers, so that the strain of the queries isn’t channeled to a few machines.
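Chord is worth a sketch here, because it shows how lookup load spreads with no central machine at all: keys and nodes hash onto one ring, and each key belongs to its successor node. This is bare consistent hashing, leaving out Chord's finger tables:

```python
# Sketch of why Chord-style lookup spreads load: keys and nodes share one
# hash ring, and each key belongs to its successor node, so no single
# machine fields every query. Bare consistent hashing, no finger tables.
import bisect, hashlib

def ring_position(value: str, ring_bits: int = 16) -> int:
    digest = hashlib.sha256(value.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** ring_bits)

class Ring:
    def __init__(self, node_names):
        self.nodes = sorted((ring_position(n), n) for n in node_names)

    def successor(self, key: str) -> str:
        """The node responsible for a key: first node at or after its position."""
        pos = ring_position(key)
        points = [p for p, _ in self.nodes]
        i = bisect.bisect_left(points, pos) % len(self.nodes)  # wrap around
        return self.nodes[i][1]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
for key in ["thriller.mp3", "weblog.html", "index.rdf"]:
    print(key, "->", ring.successor(key))
```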

UDDI is like DNS for another reason: controlling organization and potential political problems. ICANN hasn’t had the best rep managing the whole DNS/registrar situation. In particular, you should ask some of the Aussie ISPs what they think of the whole thing. They’ve had trouble with ICANN in the past.

All of the services share one common limitation: they all have hardcoded entry points, and all have some organization as a controller. I don’t care how altruistic the motives, there is a controlling body. There’s iron in all the approaches. All of them.


Golden Gateway

Julian started a Usenet thread at comp.distributed (viewable at Google) about the Golden Gateway: how do you find the first node in a P2P network, without any reliance on a centralized service?

Viewing the responses, there is an assumption that entry points have to be known at some point — through a friend, a server, or some other static entry. Through some form of publication.
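Here's a minimal sketch of the bootstrap those responses assume: the client ships with a published seed list and tries it until something answers. The hostnames are hypothetical; the point is that the list itself is the static entry.

```python
# Sketch of the bootstrap the respondents assume: every client ships with
# a published seed list, tries it, then learns more peers from whoever
# answers. The seed list is the static entry, and the iron. Hostnames
# below are hypothetical.
import socket

SEED_PEERS = [("seed1.example.net", 6346), ("seed2.example.net", 6346)]

def bootstrap(seeds, timeout=3.0):
    """Return the first seed that accepts a TCP connection, else None."""
    for host, port in seeds:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)          # entry point into the cloud
        except OSError:
            continue                         # seed down, seized, or blocked
    return None                              # no iron reachable: no network

entry = bootstrap(SEED_PEERS)
print("joined via", entry if entry else "nothing: publication point gone")
```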

To me, a P2P cloud that is dependent on some form of publication, central server, or other centralized method of discovery has iron in the core and is therefore vulnerable to takedown by some external agency. I am not being paranoid; I am stating a technical fact.

This whole point of networks being invulnerable to external pressures is the basis of one of the legal arguments made by the recording industry in its actions against P2P music-sharing sites such as Kazaa and Morpheus.

The P2P music-sharing networks say that they are self-sustaining and can’t be shut down. However, this week a glitch in a software update basically shut Morpheus down. Now Morpheus goes to Gnutella for P2P architecture support — does this make the network safe from takedown?

I don’t believe so and in my next posting, we’ll look at the details behind my opinion.


Morpheus Shut Down


And over at John’s, the discussion is about the shutdown of Morpheus, software based on FastTrack, as is Kazaa.

I believe I mentioned just last week that any P2P network that has iron – no matter how minute – in the cloud can be shut down. I will refrain from saying I told you so. Well, no, I won’t refrain: I told you so.

Update: more on this story at ZDNet.

Question: Can you shut down a Gnutella network?