Golden Gateway

Julian started a Usenet thread at comp.distributed (viewable at Google) about the Golden Gateway — how do you find the first node in a P2P network without relying on any centralized service?

Reading the responses, there's an assumption that entry points have to be known at some point — through a friend, a server, or some other static entry. Through some form of publication.

To me, a P2P cloud that depends on some form of publication, central server, or other centralized method of discovery has iron in the core and is therefore vulnerable to takedown by some external agency. I am not being paranoid; I am stating a technical fact.

This whole point of networks being invulnerable to external pressures is the basis of one of the legal arguments being made by the recording industry in actions involving P2P music-sharing sites such as KaZaA and Morpheus.

The P2P music-sharing networks say that they are self-sustaining and can't be shut down. However, this week a glitch in a software update basically shut Morpheus down. Now Morpheus is turning to Gnutella for its P2P architecture — does this make the network safe from takedown?

I don't believe so, and in my next posting we'll look at the details behind my opinion.

P2P and relying on HTTP

The Don Box discussion about HTTP was a good read with valid points.

From a P2P (not a web services) perspective, we need to guarantee certain capabilities in P2P services that we take for granted in more traditional client/server environments. These include the following:

Transaction reliability — the old two-phase commit of database technology appears again, but this time in a more challenging guise (a minimal sketch follows this list).

Transaction auditing — a variation of the two-phase commit, except that auditing is, in some ways, more of the business aspect of the technology.

Transaction security — we need to ensure that no one can snoop on the transaction contents, or otherwise violate the transaction playing field.

Transaction trust — not the same thing as security. Transaction trust means we have to ensure that the P2P service we're accessing is the correct one and the valid one, and that the service meets some business trust criteria (the latter being outside the technology realm).

Service or peer discovery — still probably one of the more complicated issues in P2P. How do we find services? How do we find P2P circles? How do we market our services?

Peer rediscovery — this is where the iron hits the cloud in every P2P application I know of. You start a communication with another peer, but that peer goes offline. How do you take up the conversation again without the use of some centralized resource? The same applies to services.

Bi-directional communication — this is Don's reference to HTTP's asymmetric nature. Peers share communication; otherwise, you're only talking about the traditional web services model.
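On that first item, here is a minimal sketch of two-phase commit among peers, in illustrative C++; the Participant type and its callbacks are invented for this example, not drawn from any real P2P library. The more challenging guise shows up in the comments: unlike database servers, peers can simply vanish mid-protocol.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical participant in a distributed transaction. In the P2P guise,
// any of these peers can drop offline between phase one and phase two.
struct Participant {
    std::string name;
    std::function<bool()> prepare;   // phase one: vote yes or no
    std::function<void()> commit;    // phase two, on a unanimous yes
    std::function<void()> rollback;  // phase two, on any no vote
};

bool twoPhaseCommit(std::vector<Participant>& peers) {
    // Phase one: collect votes. One "no" aborts the whole transaction.
    for (std::size_t i = 0; i < peers.size(); ++i) {
        if (!peers[i].prepare()) {
            // Roll back only the peers that already voted yes.
            for (std::size_t j = 0; j < i; ++j) peers[j].rollback();
            return false;
        }
    }
    // Phase two: everyone voted yes, so tell everyone to commit.
    // (A real P2P version must also handle a peer vanishing right here.)
    for (auto& p : peers) p.commit();
    return true;
}

int main() {
    std::vector<Participant> peers = {
        { "peer-a", [] { return true; },
          [] { std::cout << "peer-a commits\n"; }, [] {} },
        { "peer-b", [] { return true; },
          [] { std::cout << "peer-b commits\n"; }, [] {} },
    };
    std::cout << (twoPhaseCommit(peers) ? "committed\n" : "rolled back\n");
}
```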

The file-transfer nature of Napster or Freenet and the IM nature of Jabber don't necessarily exercise all of these aspects of P2P applications, so they haven't necessarily pushed the P2P bubble to the max. However, when we start talking about P2P services — a variation of web services, one could say — then we know we're going to be stretching both our technology capabilities and our trust of the same.

Fun!

UDDI Questions

Andy sent some questions on UDDI that I’m going to attempt to answer. If you agree, disagree, or have additions, please drop a comment.

Questions:

How do you compare UDDI to other methods of discovering networked resources (which may or may not be web services)?

What's the difference between a global UDDI registry and…
– Google: controlled by a single organization
– dmoz.org: open and replicated by other search engines
– DNS: governed by ICANN, but organizations can apply to be registrars
– others?

Do the above services have the same weakness you attribute to a UDDI global registry?

In some ways, we’re talking apples, oranges, cherries, and perhaps some peaches. They’re all fruit, but the similarity ends at that point.

UDDI is a centralized discovery service managed by a consortium of organizations, the content of which may or may not be striped across several different servers. Information is added to the repository through submissions from those with services to provide.

Google is a discovery service that is also centralized under one authority, but it uses many different methods to discover information, including automated agents (bots), subscriptions to other services (such as dmoz), and manual intervention.

Google, though, has an interesting twist to its discovery mechanism: it has a set of algorithms that are constantly evaluating, merging, and massaging its raw data in order to provide additional measurements, ensuring higher degrees of accuracy and recency. The discovery of data is never the same twice running within a collection period.

The dmoz directory is a great open source effort to categorize information intelligently. In other words, the data is manually added to the directory and categorized. This makes the directory extremely efficient when it comes to human interpretation of data. You might say that with dmoz, the “bots” are human. Get the world involved and you have a high level of intelligent categorization of data. The only problem, though, is that human interpretation of data is at times just as unreliable as mechanical interpretation.

However, dmoz is probably the closest to UDDI of the network discovery services you've listed, primarily because of this human intervention.

Finally, DNS. DNS does one thing, and as pissy as people are about it, it does that one thing reasonably well. The web has grown to huge proportions with DNS handling the naming and location of its resources.

In some ways, DNS is the closest to what I consider an iron-free cloud, if you look at it from an interpretation point of view (not necessarily implementation). You have all these records distributed across all these authoritative servers providing a definitive location for a resource. Then you have these other servers that do little more than query and cache those locations, to make access to the resources quicker and the whole framework more scalable.

In some ways I think UDDI is like DNS, also. You can have UDDI records distributed across different servers to make service lookup more efficient, and to make the whole process more scalable.

This same approach also happens with Circle, Chord, and Freenet, if you think about it (the whole store-and-forward, query-and-cache at closer servers or peers, so that the strain of the queries isn't channeled to a few machines).
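The query-and-cache pattern all of these share is easy to sketch. This is illustrative C++ only: the names and the hard-coded address are invented, and the authoritative lookup is a stub standing in for a real query up the delegation chain.

```cpp
#include <iostream>
#include <map>
#include <string>

// Toy resolver cache. The address below is from the documentation range
// (203.0.113.0/24) and the lookup is a stub, not a real DNS query.
std::map<std::string, std::string> cache;

std::string authoritativeLookup(const std::string& name) {
    // Stand-in for walking the delegation chain to the authoritative server.
    (void)name;
    return "203.0.113.7";
}

std::string resolve(const std::string& name) {
    auto it = cache.find(name);
    if (it != cache.end()) return it->second;      // cheap: answer from cache
    std::string addr = authoritativeLookup(name);  // expensive: full query
    cache[name] = addr;                            // remember for next time
    return addr;
}

int main() {
    std::cout << resolve("example.com") << "\n";  // misses, queries, caches
    std::cout << resolve("example.com") << "\n";  // hits the cache
}
```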

UDDI is like DNS for another reason: the controlling organization and its potential political problems. ICANN hasn't had the best rep managing the whole DNS/registrar situation. In particular, you should ask some of the Aussie ISPs what they think of the whole thing; they've had trouble with ICANN in the past.

All of the services share one common limitation: they all have hard-coded entry points, and all have some organization as controller. I don't care how altruistic the motives, there is a controlling body. There's iron in all the approaches. All of them.


Visual C++ helper function

I popped over to bumr for a minute and came face to face with this Visual C++ code. Whoa! Work!

And yes, as noted in the comments, _bstr_t and _variant_t are darn handy. They almost make VC++ palatable at times. The problem with Microsoft's Visual products isn't that they aren't powerful. The problem is that you have to really dig to find the nifty helper functions that make your life easier.*
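For anyone who hasn't dug that far yet, here's a minimal MSVC-only sketch of the two helpers (the string and number values are made up for the example):

```cpp
#include <comutil.h>   // _bstr_t and _variant_t; link with comsuppw.lib
#include <iostream>

int main() {
    // _bstr_t wraps a BSTR, handling SysAllocString/SysFreeString for you.
    _bstr_t name("golden gateway");
    std::cout << (const char*)name << "\n";  // implicit narrow conversion

    // _variant_t wraps a VARIANT, pairing VariantInit with VariantClear.
    _variant_t count(42L);    // VT_I4
    _variant_t label(name);   // VT_BSTR; copies the string
    std::cout << (long)count << "\n";
    return 0;
}
```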

Users shouldn't have to dig for information about how to use a product. This is the “if you have to ask for directions, you can't afford to use it” attitude. Arrogant.

*Another problem is that going Microsoft’s way usually implies total buy-in to the MS way of doing things; I still own my soul, thank you very much.


P2P Networks

I checked out Circle as well as Chord as P2P networks. These are excellent efforts and should be of note to anyone interested in P2P systems. As with KaZaA, much of the P2P cloud is transient and located on the peers themselves. The folks at Userland should look at how this can be done with Radio 8.0 if they want a true, distributed backend to the product.

I have a feeling the cloud part isn’t the issue — it will be the Radio backend and this assumption of one controlling application per weblog. At least, that’s what I found when I started peeking around a bit. Perhaps folks more knowledgeable about Radio will have a better idea.

Back to the P2P systems: aside from a key entry point (and all of these systems need one, and there's a reason why), the P2P clouds are without iron. Aside from that key entry point.

Why is the entry point needed? Because each P2P circle is too small (yes, it is) for it to be efficient to send a bot out into the open Internet, knocking at IPs looking for a specific node of any one of these systems. All P2P systems are too small for this to be effective: Napster, Gnutella, and so on. Think about it — how many nodes are online in the Internet right now? I wouldn't even try to guess the number, but I imagine millions and millions. Now you have a P2P network with about 200,000 nodes. Needle in a haystack. Right?

Well, not necessarily. Depending upon the dispersion level of the nodes of the P2P network, it might not be that difficult to find an entry node into the network. So with a bot, and a handshake protocol implemented at each node, you could have a golden gateway — an entry point totally without iron.
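A rough sketch of that bot, in illustrative C++; the handshake is a stub and the addresses are documentation-range placeholders, where a real bot would open a TCP connection and speak a greeting protocol.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Hypothetical handshake: ask an address "are you a node of network X?"
// The one "live" node here is hard-coded from the documentation range.
bool handshake(const std::string& addr, const std::string& network) {
    return addr == "198.51.100.42" && network == "circle";
}

// Sweep a candidate address range looking for a single live entry point.
std::optional<std::string> findGateway(const std::vector<std::string>& candidates,
                                       const std::string& network) {
    for (const auto& addr : candidates)
        if (handshake(addr, network)) return addr;  // the golden gateway
    return std::nullopt;  // the needle wasn't in this part of the haystack
}

int main() {
    std::vector<std::string> sweep = {
        "198.51.100.40", "198.51.100.41", "198.51.100.42"
    };
    if (auto gw = findGateway(sweep, "circle"))
        std::cout << "entry node: " << *gw << "\n";
    else
        std::cout << "no entry node in this range\n";
}
```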

However, the problem with this approach is you then have to have a bot for every system you want to join: Groove, Gnutella, Circle, and so on. What a pain.

Wouldn't it be better to have all these systems provide a common form of identification, as well as a common handshake and direction protocol, and then have one type of bot that's smart enough to tap on the door of the nearest P2P system and say “I'm looking for so and so”? And wouldn't it be better to have each system learn about the others when contacted, so that when a bot returns to a node with a connection into Circle, it also happens to have information about the nearest golden gateway node to Gnutella? And would it be such a resource burden to have the node check every once in a while to make sure its neighboring nodes are still online? So that when our bot of discovery comes calling, it's given up-to-date information?

What’s the cost of a ping?
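Cheap enough. Here is the common handshake-and-direction idea plus the neighbor ping, again as an illustrative C++ sketch; every network name, address, and message format is invented, and no real Circle or Gnutella protocol is implied.

```cpp
#include <iostream>
#include <iterator>
#include <map>
#include <string>

// A node speaking a (made-up) common discovery protocol: join here,
// hand out a referral to another network's nearest gateway, or shrug.
struct Node {
    std::string network;                       // e.g. "circle"
    std::map<std::string, std::string> peers;  // network -> nearest gateway

    std::string answer(const std::string& wanted) const {
        if (wanted == network) return "JOIN-HERE";
        auto it = peers.find(wanted);
        return it != peers.end() ? "TRY " + it->second : "UNKNOWN";
    }

    // The occasional ping: prune referrals whose gateway no longer answers,
    // so the bot of discovery is always handed up-to-date information.
    void refresh(bool (*ping)(const std::string&)) {
        for (auto it = peers.begin(); it != peers.end();)
            it = ping(it->second) ? std::next(it) : peers.erase(it);
    }
};

static bool fakePing(const std::string& addr) {
    return addr != "198.51.100.9";  // pretend this one host went offline
}

int main() {
    Node node{ "circle", { { "gnutella", "198.51.100.7" },
                           { "groove",   "198.51.100.9" } } };
    node.refresh(fakePing);
    std::cout << node.answer("circle")   << "\n";  // JOIN-HERE
    std::cout << node.answer("gnutella") << "\n";  // TRY 198.51.100.7
    std::cout << node.answer("groove")   << "\n";  // UNKNOWN after pruning
}
```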

You know, I have so many bots crawling my servers that I'm amazed they're still standing at times. But none of them work together. If they did, and if they were smarter, and if our sites had something a bit smarter than just open ports, firewalls, or web servers — then maybe we could do without DNS and centralized repositories of information such as UDDI.

Just some more grand ideas to throw out and see if people think I’m full of little green beans again.