Categories
Technology

UDDI Questions

Andy sent some questions on UDDI that I’m going to attempt to answer. If you agree, disagree, or have additions, please drop a comment.

Questions:

How do you compare UDDI to other methods of discovering networked resources (which may or may not be web services)?

What’s the difference between a global UDDI registry and…
– google: controlled by a single organization
– dmoz.org: open, and replicated by other search engines
– DNS: governed by ICANN, but organizations can apply to be registrars
– others?

Do the above services have the same weakness you attribute to a UDDI global registry?

In some ways, we’re talking apples, oranges, cherries, and perhaps some peaches. They’re all fruit, but the similarity ends at that point.

UDDI is a centralized discovery service managed by a consortium of organizations, the content of which may or may not be striped across several different servers. Information is added to the repository by submission from those with services to provide.
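The publish-and-query model described here can be sketched in a few lines. This is a toy illustration, not the actual UDDI API: the class, method, and field names (and the example business and URL) are all invented for the sketch.

```python
# Toy sketch of a UDDI-style registry: providers publish their own
# service records, consumers look them up by business name.
# All names here are illustrative, not real UDDI structures.

class ServiceRegistry:
    def __init__(self):
        self.records = {}  # business name -> list of service entries

    def publish(self, business, service, access_point):
        """A provider submits its own service description."""
        self.records.setdefault(business, []).append(
            {"service": service, "access_point": access_point})

    def find(self, business):
        """A consumer queries the registry by business name."""
        return self.records.get(business, [])

registry = ServiceRegistry()
registry.publish("Acme", "quote-service", "http://acme.example/quotes")
print(registry.find("Acme"))
```

The key property, for the argument that follows, is that everything lives in one logical repository: whoever runs `ServiceRegistry` controls discovery.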

Google is a discovery service that is also centralized under one authority, but it uses many different methods to discover information, including automated agents (bots), subscriptions to other services (such as dmoz), and manual intervention.

Google, though, has an interesting twist to its discovery mechanism: it has a set of algorithms constantly evaluating, merging, and massaging its raw data to provide additional measurements, ensuring higher degrees of accuracy and recency. The discovery of data is never the same two times running within a collection period.

The dmoz directory is a great open source effort to categorize information intelligently. In other words, data is manually added to the directory and categorized by people. This makes the directory extremely efficient when it comes to human interpretation of data. You might say that with dmoz, the “bots” are human. Get the world involved and you have a high level of intelligent categorization of data. The only problem, though, is that human interpretation of data is at times just as unreliable as mechanical interpretation.

However, dmoz is probably the closest to UDDI of the network discovery services you’ve listed primarily because of this human intervention.

Finally, DNS. DNS does one thing and, as pissy as people are about it, it does that one thing reasonably well. The web has grown to huge proportions with something like DNS handling the naming and location of resources.

In some ways, DNS is closest to what I consider an iron-free cloud if you look at it from an interpretation point of view (not necessarily implementation). You have all these records distributed across all these authoritative servers providing a definitive location of a resource. Then you have these other servers that do little more than query and cache those locations, making access to the resources quicker and the whole framework more scalable.
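That authoritative-plus-cache split can be sketched as follows. This is a hedged toy model, not real DNS: the record values, the flat dictionary standing in for the authoritative servers, and the single TTL are all simplifying assumptions.

```python
import time

# Toy model of the split described above: definitive records live in
# one place (standing in for the authoritative servers), and a caching
# resolver answers repeat queries locally until its copy expires.

AUTHORITATIVE = {"example.com": "192.0.2.10"}  # definitive records

class CachingResolver:
    def __init__(self, ttl=300):
        self.ttl = ttl          # seconds a cached answer stays fresh
        self.cache = {}         # name -> (address, expiry time)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and hit[1] > now:        # fresh cached answer: no upstream query
            return hit[0]
        address = AUTHORITATIVE[name]   # fall through to the definitive source
        self.cache[name] = (address, now + self.ttl)
        return address
```

The scalability win is exactly the one claimed above: most lookups never reach the authoritative source, they're answered from the nearer cache.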

In some ways I think UDDI is like DNS, also. You can have UDDI records distributed across different servers to make service lookup more efficient, and to make the whole process more scalable.

This same approach also happens with Circle, Chord, and Freenet, if you think about it (the whole store-and-forward, query-and-cache at closer servers or peers, so that the strain of the queries isn’t channeled to a few machines).

UDDI is like DNS for another reason: controlling organization and potential political problems. ICANN hasn’t had the best rep managing the whole DNS/registrar situation. In particular, you should ask some of the Aussie ISPs what they think of the whole thing. They’ve had trouble with ICANN in the past.

All of the services share one common limitation: they all have hard-coded entry points, and all have some organization as controller. I don’t care how altruistic the motives, there is a controlling body. There’s iron in all the approaches. All of them.

 

Categories
Web

Are Web Services ready for the Web

Recovered from the Wayback Machine.

The headline at news.com reads “Are Web Services Ready for the Web?”

This really annoyed me. Last time I heard, there was a Web before Microsoft. And there will be a Web in spite of Microsoft. So the company overreached itself with .Net My Services. Well, that’s not surprising, considering how little trust the world has in Microsoft to keep the data safe, let alone not use it inappropriately.

However, smart move on the company’s part now. Instead of trying to bull through with a business model that’s both confusing and problematic, Microsoft is going to come to the public with its hat in its hand and say “Well, we’re not sure where to go just yet. We’ve made mistakes.” After endearing themselves to all the people with these heartfelt sentiments, the little MS sharkies will sit back and watch to see how other companies fill the gap with workable and non-workable business models, and then swoop down and pick and choose among the approaches.

Deja Vu, all over again.

 

Categories
Technology

Mind the email virus

I’ve had an unusually high number of email viruses attempt to wreak havoc on my tender little system today. My quarantine area of Norton is beginning to resemble fly paper in a particularly hot, moist, and odorous climate. (This is where you all go “Ewwww, yuck!”)

Of course you all know not to open emails (not even for review) that don’t have a subject line, right? And you all know not to have email preview/review turned on with Outlook, right?

Ah, I love the coming of Spring. Green leaves, flower buds, warmer winds, and fresh, happy little computer viruses digging their busy little way through the Internet, chipping away at each node like it’s a particularly tasty little tender tree root…

…that it then STRIPS of all nutrients, leaving it withered and desiccated, brown, and crumbling in the hot noonday sun before moving on like a RAVENING HORDE to the other trees in the forest until the whole damn Internet is just one desert with us as pack animals HOWLING in the night desperate to find each other in a system that’s no longer functioning!!!

Ahem. Ah. Well. Hmmm.

It’s okay. I’m all better now. Just a little posting to say “Mind the Email Virus.”

Categories
Technology

Visual C++ helper function

Recovered from the Wayback Machine.

I popped over to bumr for a minute and came face to face with this Visual C++ code. Whoa! Work!

And yes, as noted in the comments, _bstr_t and _variant_t are darn handy. They almost make VC++ palatable at times. The problem with Microsoft’s Visual products isn’t that they aren’t powerful. The problem is that you have to really dig to find the nifty helper functions that make your life easier.*

Users shouldn’t have to dig for information about how to use a product. This is equivalent to “if you have to ask directions, you can’t afford to use it” in attitude. Arrogant.

*Another problem is that going Microsoft’s way usually implies total buy-in to the MS way of doing things; I still own my soul, thank you very much.

 

Categories
Technology

P2P Networks

Recovered from the Wayback Machine.

I checked out Circle as well as Chord as P2P networks. These are excellent efforts and should be of note to anyone interested in P2P systems. As with KaZaA, much of the P2P cloud is transient and located on the peers themselves. The folks at Userland should look at how this can be done with Radio 8.0 if they want a true, distributed backend to the product.

I have a feeling the cloud part isn’t the issue — it will be the Radio backend and this assumption of one controlling application per weblog. At least, that’s what I found when I started peeking around a bit. Perhaps folks more knowledgeable about Radio will have a better idea.

Back to the P2P systems: aside from a key entry point (and all of these systems need one, and there’s a reason why), the P2P clouds are without iron.

Why is the entry point needed? Because each P2P circle is too small (yes, it is) for it to be efficient to send a bot out into the open Internet, knocking at IPs looking for a specific node of any one of these systems. All P2P systems are too small for this to be effective: Napster, Gnutella, and so on. Think about it: how many nodes are online now on the Internet? I wouldn’t even try to guess the number, but I imagine millions and millions. Now you have a P2P network with about 200,000 nodes. Needle in haystack. Right?
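The needle-in-a-haystack arithmetic is worth running. As an illustrative assumption (not a measurement), take the 200,000-node figure above and scatter those nodes uniformly across the IPv4 address space:

```python
# Back-of-the-envelope odds of finding a P2P node by blind probing.
# Both numbers are assumptions for illustration: the full IPv4 space
# and the 200,000-node network size mentioned above.

ADDRESS_SPACE = 2 ** 32      # total IPv4 addresses
NETWORK_NODES = 200_000      # assumed size of the P2P network

hit_probability = NETWORK_NODES / ADDRESS_SPACE
expected_probes = ADDRESS_SPACE / NETWORK_NODES  # mean probes until first hit

print(f"chance a random probe lands on a node: {hit_probability:.6%}")
print(f"expected probes to find one node: {expected_probes:,.0f}")
```

Roughly 21,000 blind probes on average: expensive, but tens of thousands rather than billions, which is why the dispersion point that follows isn't hopeless.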

Well, not necessarily. Depending upon the dispersion level of the nodes of the P2P network, it might not be that difficult to find an entry node into the network. So with a bot and a handshake protocol implemented at each node you could have a golden gateway — an entry point totally without iron.

However, the problem with this approach is you then have to have a bot for every system you want to join: Groove, Gnutella, Circle, and so on. What a pain.

Wouldn’t it be better to have all these systems provide a common form of identification, as well as a common handshake and direction protocol, and then have one type of bot that’s smart enough to tap on the door of the nearest P2P system and say “I’m looking for so and so”? And wouldn’t it be better to have each system learn about the others when contacted, so that when a bot returns to a node with a connection into Circle, it also happens to have information about the nearest golden gateway node into Gnutella? And would it be such a resource burden to have each node check every once in a while to make sure its neighboring nodes are still online, so that when our bot of discovery comes calling, it’s given up-to-date information?

What’s the cost of a ping?
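The common handshake, the cross-network referral, and the neighbor ping described above can be sketched together. Everything here is hypothetical: the protocol, the status words, and the method names are invented for the sketch, not drawn from Circle, Chord, or Gnutella.

```python
# Hypothetical sketch of the common handshake idea: every node, whatever
# network it belongs to, answers the same query, and can refer a visiting
# bot to a known gateway into another network. Names are invented.

class Node:
    def __init__(self, network):
        self.network = network
        self.known_gateways = {}   # other network name -> gateway node
        self.neighbors = []

    def handshake(self, wanted_network):
        """Answer a bot knocking at the door."""
        if wanted_network == self.network:
            return {"status": "welcome", "node": self}
        gateway = self.known_gateways.get(wanted_network)
        if gateway:
            return {"status": "referral", "node": gateway}
        return {"status": "unknown", "node": None}

    def learn_gateway(self, network, node):
        """Remember a gateway into another network when a visitor mentions one."""
        self.known_gateways[network] = node

    def prune_dead_neighbors(self, is_alive):
        """The periodic ping: drop neighbors that no longer answer."""
        self.neighbors = [n for n in self.neighbors if is_alive(n)]

circle_node = Node("Circle")
gnutella_node = Node("Gnutella")
circle_node.learn_gateway("Gnutella", gnutella_node)
print(circle_node.handshake("Gnutella")["status"])  # referral
```

The cost of the ping is the whole running expense of the scheme: each node periodically calls something like `prune_dead_neighbors` so that referrals stay fresh.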

You know, I have so many bots crawling my servers that I’m amazed they’re still standing at times. But none of them work together. If they did, and if they were smarter, and if our sites had something a bit smarter than just open ports, firewalls, or web servers — then maybe we could do without DNS and centralized repositories of information such as UDDI.

Just some more grand ideas to throw out and see if people think I’m full of little green beans again.