Categories
Web

Are Web Services ready for the Web

Recovered from the Wayback Machine.

The headline at news.com reads “Are Web Services Ready for the Web?”

This really annoyed me. Last time I heard, there was a Web before Microsoft. And there will be a Web in spite of Microsoft. So the company overreached itself with .Net My Services. Well, that’s not surprising, considering how little trust the world has in Microsoft to keep the data safe, much less to not use it inappropriately.

However, smart move on the company’s part now. Instead of trying to bull through with a business model that’s both confusing and problematic, Microsoft is going to come to the public with its hat in its hand and say, “Well, we’re not sure where to go just yet. We’ve made mistakes.” After endearing themselves to all the people with these heartfelt sentiments, the little MS sharkies will sit back and watch to see how other companies fill the gap with workable and non-workable business models, and then swoop down and pick and choose among the approaches.

Déjà vu, all over again.


Categories
Web

Browser breakage

NJ Meryl has been having some interesting challenges accessing a specific web site, so she tried an older browser – Netscape 3.x, to be exact. Well, as she found out, Burningbird breaks with Netscape 3.x.

My reaction? No offense to the world, but I could give a flying squirrel (this is a polite euphemism you understand) if a 3.x browser can’t access this site. And the person trying to access the site with Lynx might as well give up now, too.

I’ve been working the cross-browser and cross-version issue since the first release of IE in the 1995/1996 time period (can’t even remember the specific date any more), as shown in this old article at Javacats that I had to pull from the Wayback Machine: Netscape Navigator’s JavaScript 1.1 vs Microsoft Internet Explorer’s JScript. And that was only the start of many articles and two books on the subject of cross-browser and cross-version problems. Long before XML, XHTML, CSS and the like. Back in the good old days, when we got excited about the FONT tag and wanted to lynch the idiot who invented BLINK.

I have a set of cross-browser DHTML objects that have been successfully ported from the 3.x browsers to Mozilla, Netscape 6.x, Opera, and IE 6.x. That’s a lot of time for one set of objects. Want to see them work? Try the Adobe PhotoShop Demos, the Dr. Dotty Games and the very popular Match Game.

Here, I don’t want tech. Here I want everything in the world but tech. Not that I don’t love tech — I do. But I need a break from it. You can get tech at my other web sites. Here, there be nonsense. Unreadable in 3.x nonsense.

Categories
Web

What the hell is P2P?

Recovered from the Wayback Machine.

If the Net is good for one thing, it’s the propagation of buzzwords and acronyms. In the case of P2P – Peer-to-Peer – you have a term that is both buzzword and acronym, and represents a group of technologies that is both very old (in Net terms) and very new.

Well, that’s all fine and dandy, but what the hell is P2P?

Well, some (such as Clay Shirky of The Accelerator Group) would say that P2P technologies encompass any technology that exists outside of DNS. Sounds impressive, but what does this mean?

This means that P2P services are accessed using some technique other than a Domain Name Service (DNS) lookup. DNS is used to find specific IP addresses using well-known and human-friendly aliases such as www.w3.org. P2P doesn’t rely on DNS and aliases such as these because many P2P resources and services live at IP addresses that change every time a participant connects to the Net, through something such as a dial-up modem.
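To make the distinction concrete, here is a minimal sketch (my own illustration, with hypothetical addresses and ports, not code from any particular P2P package): a web client resolves a stable, DNS-registered name before connecting, while a peer connects straight to whatever IP address another peer reported the last time it joined.

```python
import socket

# Traditional web access: a stable, DNS-registered name is resolved first.
def connect_via_dns(hostname: str, port: int = 80) -> socket.socket:
    ip = socket.gethostbyname(hostname)        # DNS lookup, e.g. www.w3.org
    return socket.create_connection((ip, port))

# P2P-style access: no DNS entry; the peer's address was learned some other
# way (a friend told you, a previous session cached it) and it may change
# every time that peer dials in.
def connect_to_peer(last_known_ip: str, port: int) -> socket.socket:
    return socket.create_connection((last_known_ip, port))

# Usage (addresses are made up; 6346 is the port Gnutella commonly used):
# web_conn  = connect_via_dns("www.w3.org")
# peer_conn = connect_to_peer("24.16.88.211", 6346)
```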

Tim O’Reilly broadens P2P to include any technology whereby peers share services and/or resources rather than having those same resources served up by a central server. In addition, he includes collaborative functionality and distributed computing, as well as instant messaging services such as AOL’s Instant Messenger and Jabber. In each of these, the common points of intersection are an assumption of no fixed IP address and no reliance on a common, centralized server.

Based on this, a standard Web application is not P2P because clients that access a site’s functionality all do so through one common server – the Web server – using a well-known DNS-supplied and supported name. Though the approach is Net-enabled, this way of serving functionality is a variation of the standard client/server application model that’s been around for years, with the browser acting as what is termed a very thin client (i.e. very little of the functionality is present on the client, most resides on the server).

So, if Web browsing isn’t P2P, how does P2P work?

As an example of P2P in action in its purest form, Gnutella is a P2P application that enables file and resource sharing among peers. To participate, you download the Gnutella client software, then either pick an IP address from among those currently connected to the network and connect to it, or connect to the IPs of several friends and known associates who are also running Gnutella clients.

Once connected to the Gnutella network, you have access to the peers each of your connected IPs is connected to, and through these secondary connections you have access to their connections, and so on. Sort of like a phone tree where the phone numbers change every day, but you’re guaranteed to have access to at least one of the numbers at any given time.
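As a rough sketch of that phone tree (my own illustration, using made-up addresses rather than Gnutella’s actual wire protocol), each peer only knows its direct neighbors, yet the set of peers you can reach grows through the neighbors’ neighbors:

```python
from collections import deque

# Each peer knows only its direct neighbors (hypothetical, changeable IPs).
overlay = {
    "you":          ["66.24.1.9", "24.16.88.211"],
    "66.24.1.9":    ["12.88.3.40", "24.16.88.211"],
    "24.16.88.211": ["209.11.5.77"],
    "12.88.3.40":   [],
    "209.11.5.77":  ["12.88.3.40"],
}

def reachable_peers(start: str) -> set:
    """Every peer you can reach by hopping from neighbor to neighbor."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in overlay.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

# You connected directly to two peers, but can reach all four.
print(reachable_peers("you"))
```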

What happens once you’re connected? Well, in the case of another known P2P application, Freenet, you can issue a request for a specific file or resource, and any client that has access to that resource returns it via the request path…to you.

Gnutella and Freenet share the common characteristics of P2P applications: both use a naming system other than DNS to locate peers, and neither requires a common, centralized server. However, they differ in that with Freenet, each peer along the request path can keep a copy of the requested resource, basically increasing its availability. The more times a resource is accessed, the more peers have it on their own machines, and the easier it is to access.

With Gnutella, only the peer that originally held the resource (and now you, since you’ve just accessed it) has a copy. Because of this, you’re less likely to find the resource you’re requesting; it may be out of range of your request, or beyond the horizon. (Your request, unlike the ghost ships of yore, isn’t going to float about the Internet, unsatisfied, forever. It will time out at a certain point.)
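Here’s a toy sketch of that difference (again, entirely my own illustration with invented peer names and files, not either protocol’s real message format): a request is flooded to neighbors until a hop limit runs out, Gnutella-style, and the Freenet-style variant additionally caches the file at each peer on the path back to the requester.

```python
# Toy network: each peer has neighbors and a local set of files it holds.
peers = {
    "A": {"neighbors": ["B", "C"], "files": set()},
    "B": {"neighbors": ["A", "D"], "files": set()},
    "C": {"neighbors": ["A"],      "files": set()},
    "D": {"neighbors": ["B"],      "files": {"song.mp3"}},
}

def request(peer, filename, ttl, visited=None, cache_on_path=False):
    """Flood a request outward until ttl runs out (the 'horizon').
    With cache_on_path=True (Freenet-style), every peer on the return
    path keeps a copy, so later requests are satisfied closer to home."""
    visited = visited if visited is not None else set()
    if peer in visited or ttl < 0:
        return None                       # already asked, or past the horizon
    visited.add(peer)
    if filename in peers[peer]["files"]:
        return filename                   # found it
    for neighbor in peers[peer]["neighbors"]:
        result = request(neighbor, filename, ttl - 1, visited, cache_on_path)
        if result is not None:
            if cache_on_path:
                peers[peer]["files"].add(result)    # Freenet-style caching
            return result
    return None

print(request("A", "song.mp3", ttl=1))                      # None: D is beyond the horizon
print(request("A", "song.mp3", ttl=3, cache_on_path=True))  # found; A and B now hold copies
print(peers["B"]["files"])                                  # {'song.mp3'}
```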

(Andy Oram has an excellent article on Gnutella and Freenet at http://www.oreillynet.com/pub/a/network/2000/05/12/magazine/gnutella.html.)

Okay, so that’s a quick look at some of the better-known P2P applications (other than Napster, and I’m sick of hearing about Napster). So now you might be asking yourself why you would care about this type of technology, especially in a business context. After all, how can you charge for something when there’s no centralized server and no way of knowing which peer is accessing what resource?

Going back to Tim O’Reilly’s original list, let’s take a look at the concept of collaboration. In a collaborative P2P application, a group of peers works together on a specific project, such as a document or a presentation, or uses a virtual white board to brainstorm. In this type of environment, the services to work on the collaborative project are located on the peers, and the data is propagated to each of the clients at set intervals (based on the type of collaborative effort and the infrastructure supporting the peers). Again, there don’t have to be fixed IP addresses (though there isn’t a prohibition against fixed IP addresses), and there doesn’t have to be a central server (though again, a central server could be used to do such things as back up group data periodically, as long as it isn’t essential to the application process). In fact, a hybrid P2P application can be one that uses a centralized server for IP lookup, while all other services are performed on the peers.
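A bare-bones sketch of that idea (my own illustration, not Groove’s architecture or any real product’s API): each peer keeps its own replica of the shared data and, at set intervals, pushes its pending changes directly to the other peers, with no server holding a master copy.

```python
class Peer:
    """Each collaborator holds a full local copy of the shared workspace."""
    def __init__(self, name):
        self.name = name
        self.shared_notes = []      # local replica of the group's whiteboard
        self.outbox = []            # local changes not yet propagated

    def edit(self, note):
        self.shared_notes.append(note)
        self.outbox.append(note)

    def receive(self, note):
        if note not in self.shared_notes:
            self.shared_notes.append(note)

def sync_round(group):
    """Run at set intervals: every peer pushes its pending changes
    straight to every other peer; no central server is involved."""
    for peer in group:
        pending, peer.outbox = peer.outbox, []
        for other in group:
            if other is not peer:
                for note in pending:
                    other.receive(note)

alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol")
alice.edit("whiteboard: cut Q3 travel budget")
bob.edit("whiteboard: add a training line item")
sync_round([alice, bob, carol])
print(carol.shared_notes)   # both edits arrived without touching a server
```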

It would be safe to say that this type of application does have commercial possibilities, and it is this type of technology that’s currently being implemented within the Groove architecture, a framework and toolset as well as application created by Ray Ozzie of Lotus Notes fame.

(You can download Groove at http://www.groove.net.)

How viable is something such as Groove? Viable enough to warrant 60 million dollars of venture capital money, and that’s big bucks to folks like you and me.

How Groove works is that a person wanting to be a peer downloads the Groove application/infrastructure and installs it on his or her machine. The architecture supplies a user interface that provides a working area, known as a shared space, for the application’s collaborative efforts.

A shared space could contain something as simple as a discussion forum, a virtual white board, or a chess game; or it could contain functionality as complex as a piece of a sophisticated supply chain application.

The Groove architecture provides the infrastructure to support tools that can be used in the shared spaces, and vendors can provide Groove “connectors” for their existing applications, or create new tools for Groove, via the interconnection points exposed through the Groove Developer Kit (GDK).

So, if a major player such as Ariba wanted to publish its software services through Groove, it could, by using the GDK to create a Groove tool that acts as a connector to the company’s services. By doing this, Ariba gets the benefit of an architecture that supports safe, secure, efficient, multi-peer access to its applications without having to write more than some XML files, and perhaps a COM/COM+ wrapper (if the Ariba service isn’t currently implemented as a COM service – COM being Groove’s current service implementation).

(See more on the Groove architecture at http://www.groovenetworks.com and download the Groove Developer Kit at http://devzone.groove.net. See more on Ariba at http://www.ariba.com.)

How will Ray Ozzie make money with Groove? By licensing the Groove architecture to these third-party service suppliers, who, in turn, license their services to the peer – the customer wanting access to the services.

Making money. Sounds good.

(See the article “21 for 2001” from internet.com at http://boston.internet.com/views/article/0,1928,2011_552841,00.html, which looks at 21 Boston startup companies the publication believes will succeed in 2001. Groove is featured.)

A company can use P2P-friendly technologies while still staying within more conservative, mainstream client-server implementations, such as browser-based access. For instance, Ariba, mentioned earlier, is using technologies that would let it become P2P-enabled fairly easily, if it chooses.

As an example of P2P-friendly technologies, Ariba is partnered with webMethods, a leading application-integration software company that provides a centralized “hub and spoke” technology to allow different companies’ heterogeneous applications to communicate.

(See more on webMethods at http://www.webmethods.com. More on the products at http://www.webmethods.com/content/1,1107,SolutionsIndex,FF.html.)

Ariba supports a dialect of XML (another buzzy acronym that turned everything upside down) called cXML – Commerce XML – within its Ariba Commerce Services Network (CNS) architecture. This architecture, in turn, supports many of the popular eMarket and B2B Ariba applications such as Ariba MarketPlace or Ariba Buyer.

To support integration with clients, all transactions within the CNS architecture are based in cXML, and webMethods has provided a cXML interface within its own set of integration applications. Since webMethods also supports interfaces to numerous other XML-based languages, such as RosettaNet, as well as other technologies, clients can access the Ariba CNS services through the webMethods-supplied cXML interface using their own application-specific technology (and legacy systems – no major re-write required).

So a buyer can use the Ariba CNS services to find a catalog item and purchase it, and have the process connect seamlessly, through the webMethods integration interfaces, to supplier applications created using something such as EJB (Enterprise JavaBeans).
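A toy sketch of that hand-off (entirely my own illustration; the element names are only cXML-flavored approximations, and the supplier function is hypothetical rather than Ariba’s or webMethods’ actual interfaces): an order arrives as a small XML document in the common format, the hub parses it, and the result is a plain call into the supplier’s own system.

```python
import xml.etree.ElementTree as ET

# A stripped-down, cXML-flavored purchase order (illustrative element names only).
order_doc = """
<OrderRequest orderID="PO-1042">
  <ItemOut quantity="3">
    <SupplierPartID>WIDGET-7</SupplierPartID>
    <UnitPrice currency="USD">19.95</UnitPrice>
  </ItemOut>
</OrderRequest>
"""

def supplier_place_order(part_id: str, quantity: int, unit_price: float) -> str:
    """Stand-in for the supplier's own order system (EJB, legacy app, whatever)."""
    return f"accepted {quantity} x {part_id} at {unit_price} USD"

def hub_route(xml_text: str) -> str:
    """What an integration hub does in miniature: parse the shared XML
    format and translate it into a call on the supplier's native interface."""
    root = ET.fromstring(xml_text)
    item = root.find("ItemOut")
    return supplier_place_order(
        part_id=item.findtext("SupplierPartID"),
        quantity=int(item.get("quantity")),
        unit_price=float(item.findtext("UnitPrice")),
    )

print(hub_route(order_doc))   # accepted 3 x WIDGET-7 at 19.95 USD
```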

Now, technically, all of this is dependent on a server being in place (the CNS services and the webMethods hub), but the infrastructure is such that it wouldn’t take a major re-engineering effort to lighten and split apart the services to form a P2P network. And Ariba and webMethods would still lease or sell their individual services – integrated licensing could prevent any individual from sending copies of these services out freely on the Net for anyone to use.

There’s that making money again.

Categories
Web

Googlestak

Recovered from the Wayback Machine.

One last post as a favor to my antipodean friends. (Antipodean — what a perfect word. You meet the most charming and erudite people in weblogging.)

Victor wants to try a Googlestak — stacking the deck for a Google search by having webloggers link to a specific web site. The more links to a site, the higher its Google ranking, and the closer to the top it will be in the returned results.

This is an experiment. The fact that the company being linked is owned by Victor, and that another weblogging friend is rumored to work there has nothing to do with this.

No siree, this is an unbiased experiment to influence Google. A test. An unbiased test. No gain here.

And here it is, my contribution to this unbiased and totally without gain and in the interests of science experiment:

If you want training in Australia on Macromedia and other web development technologies, go to Stand Out Training. Learn CSS. DHTML. HTML. Macromedia. How to eat Vegemite.

(Shameless hussies. Just because they have sexy accents, think they can get away with murder <smile />)


Categories
Web

Googlestak

Recovered from the Wayback Machine.

Victor is trying to see how we can influence Google page rankings, using weblog links. He’s calling the process Googlestak. It will be interesting to see if this effort works. Target link is Stand Out Training — Victor’s company.

Because of my interest in the semantic web, I’ve always been interested in our friend Google. Through research I’ve found that you can, with effort, influence Google, but the circumstances have to be just right. There are entire mailing lists and web sites devoted to how Google works, and how the page ranking algorithm works.

The number of links to a particular page is only part of the Google search algorithm. The importance of those links also influences the scoring. If you have one page that is linked by 10 pages that are themselves ranked fairly low, it won’t rank higher than a page that has one or two links from very high-ranking pages.

Unfortunately, weblogs are — for the most part — not the highest-ranking pages. Links are sporadic outside of the blogrolls, and these tend to be incestuous: page A links to page B, which links back to page A. Pile up a lot of blogroll links pointing back to you, and you’re still going to rank low because of that “importance” catch-22. You would need to break this cycle by getting linked by an A-lister such as Scripting News or Doc Searls or Rageboy or JOHO — people who are linked by a disproportionate share of the weblogging community.
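To see why, here is a toy run of the published PageRank formula, PR(A) = (1 - d) + d * sum(PR(T)/C(T)) over the pages T linking to A, with the usual damping factor d = 0.85. The link graph is invented, and Google’s real scoring involves far more than this, but it shows a single link from a heavyweight page outweighing ten links from lightweight blogroll pages.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over pages T
    that link to A, where C(T) is the number of links on T. Just enough to
    show how a link's "importance" gets passed along."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {page: 1.0 for page in pages}
    for _ in range(iterations):
        new_rank = {page: 1 - damping for page in pages}
        for source, targets in links.items():
            if targets:
                share = rank[source] / len(targets)
                for target in targets:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank

links = {}
# An A-lister: 50 sites link to "popular-hub", which links out to only three pages.
for i in range(50):
    links[f"fan-{i}"] = ["popular-hub"]
links["popular-hub"] = ["lucky-weblog", "news-site", "tech-site"]
# "blogroll-target" gets 10 links, but each comes from a low-ranked weblog
# whose blogroll also points at 19 other places, diluting every vote.
for i in range(10):
    links[f"small-blog-{i}"] = ["blogroll-target"] + [f"elsewhere-{i}-{j}" for j in range(19)]

ranks = pagerank(links)
print(round(ranks["lucky-weblog"], 2))      # about 2.0: one link, but from a heavyweight
print(round(ranks["blogroll-target"], 2))   # about 0.21: ten links, all lightweight
```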

As an example, Jonathon will usually get more blogroll hits from my weblog than from the Scripting News blogroll, because I’ve made his link part of the “Australian Delegation” — I made it stand out. However, within Google, Jonathon’s weblog will get more buzz from that Scripting News link than from anything I could do.

In addition, because our pages roll off into archives, the ranking algos start to break down because they don’t necessarily factor in the temporal nature of weblogging.

If you have a static page that generates mild interest and covers a topic that isn’t time-sensitive, over time that page will rise slowly to the top of Google because the interest is cumulative. Weblog pages can rise to the top (especially when helped by strange words such as googlewhacking), but they’ll fall away quickly once the initial intense interest fades and the weblog pages roll off into archives (which are not directly linked to from blogrolls, remember).

I’m linking to a specific post of Victor’s. Others will join me. Eventually, this post will fall off into the archives and fewer pages will link to it. And the page where I created the link will also fall off into the archives, and fewer pages will link to it. Instead of a cumulative effect, with weblogs you get a decremental effect — over time the rank will decrease rather than increase, regardless of whether or not the topic is time-sensitive.

The decremental effect of weblog postings is a good reason to use a Story mechanism for more important and wordy postings rather than the usual weblog posting — the story will remain accessible rather than get lost in the archives. However, this doesn’t help you when the pages that link to you eventually fade away. Still, the hope is that new, fresh links will come along. It helps to mention or reference the story occasionally in newer postings — call it a form of weblog pinging.

If you’re trying to rise to the top of the rankings for popular search terms, good luck. You can’t influence the rankings for something like Linux; the term has been around too long, and the major sites associated with it are too firmly entrenched. You would have to invent a Microsoft buster within the Linux world to generate enough buzz to even get to the first page of the result set.

As for other subjects, I imagine you might have a shot if you’re diligent, work hard, get tricky.

I found a page with some interesting and clever tips for influencing Google. One thing you can do, it turns out, is find the pages that link to you and make sure Google is aware of them. Interesting idea, isn’t it? The more buzz those pages get, the higher the ranking of the page they link to.

Another thing you can do is package word sets to capture specific searches.

I’ve started an online C# book (that I really need to finish). Now, searching for C# is not going to return my book pages in the first result set — I haven’t finished the book and only have so many sites linking to the pages. However, if you search on “C# book”, I’m the second from the top in the listings (yasd.com is my domain as well as burningbird.net and several others). I’m using the term “C# book”, and I have sufficient links — combined, I’m close to the top.

Another way to influence Google is to do something to drive people to your site. If you write, write articles for online publications and insist they use a link to your weblog or site within the author bio. This is probably the number one reason why I get so much Google traffic — I’ve been writing online articles for years.

Can’t write for an online publication? Then start joining newsgroups, MetaFilter, whatever, and post, post, post! Post to anything you can attach a signature to that has your weblog or site address. Believe it or not, this can influence Google — I’ve seen it happen with content in my own web sites. I still see it, and it surprises the heck out of me (as well as making me a bit more cautious about what I say at said newsgroups et al.).

Fascinating tool, Google. If we harness this type of connectivity and attach it to “meaning” rather than ranking, then we have the semantic web — one short step away. Exciting stuff.