Categories
Web

Web 2.0 and hamster wheels

Dare Obasanjo wrote a post about flipping your Web 2.0 startup and gave three reasons why a bigger company would gobble up a startup: users, technology, and people. Paul Kedrosky replied that Dare was wrong, wrong, wrong and that building companies to flip is also wrong, wrong, wrong.

I happen to agree (agree, agree) somewhat with Mr. Kedrosky, in that I wish technologies weren’t being built for the express purposes of flipping (i.e. quickly going from VC funding to purchase by a Player), but, as Dare writes, it does happen. Where I disagree with Dare is in his list of reasons why a company would buy a Web 2.0 startup. He left an important one off the list: image.

One week the web is full of talk about Google, the next Yahoo, the week after that might be MSN (though it is falling far behind the other two–Ozzie just doesn’t have the 2.0 stink; maybe it can buy wordpress.com); then the cycle continues anew. How these companies make money depends as much on image as on the software they provide. It’s important to all three search engine companies to be seen as the company leading the way into a whole new version of the web. However, this doesn’t mean that each company is going to change the way it does business.

Google bought Blogger years ago, and I remember how we talked about what the purchase would do to Google searches and what it meant. Now we can see that it didn’t mean anything, other than that Google bought into the hip 2.0 kid of the time–weblogging. Blogger hasn’t changed all that much, other than new features to keep up with other tools; Google didn’t change at all.

When Yahoo bought Del.icio.us, I read comments here and there to the effect that this is going to change search dynamics and the old algorithmic approaches will soon give way to new tagging ones. Yet the statistics don’t support this. Delicious has, what, 300,000 users? Out of how many billions of web users? The number of people who tag–which is really what the Del.icio.us tech is all about, tagging and storage–compared to those who don’t is so skewed as to make tagging a non-starter.

(Not to mention that no one has effectively explained how tagging is going to make for more accurate searches.)

But Yahoo’s buying of Del.icio.us put it on the front page of many publications. It kept Yahoo even, or barely even, in the fight with Google to be the ‘hip’ company — the one that people will use for their searches. The site where advertisers should place their ads; the place other companies pushing themselves as Web 2.0 want to connect with.

Both Yahoo and Google made a lot of money selling ad space; enough that they can afford to invest a few million in small startups that could add to their web 2.0 goodness. Heck, Google just plunked a billion or so into AOL, and we all know that’s an elephant bound for the graveyard. Until it ambles its rattly bones in that direction, though, it still brings an immense number of eyeballs to Google–eyeballs that Google would rather have than concede to Microsoft (its competitor in this deal).

Besides: investing money is a good corporate move at tax time. And it doesn’t hurt when investors see these companies appearing to diversify — there’s been a lot of talk about bursting bubbles this year.

As for the technology, most of the Web 2.0 startups are based on copious amounts of stored data, accessible via search and subscription, tagged, and wrapped in an API. None of them is based on what I would call a revolutionary use of technology. Much of their early popularity came from the fact that the services they offered were free. Oh, and the fact that we have, personally, come to know the folks behind the companies. Which leads to the question: do companies benefit from bringing in the people behind these acquisitions?

Of course they do. After all, the folks behind Flickr and Del.icio.us and Bloglines and so on were the originators of ideas that took off–if I were Google or Yahoo (or eBay, Microsoft, and so on), I’d rather have these people on my team than on the competitor’s. But as we’ve seen with Google/Blogger and Yahoo/Flickr, the startup seems to benefit more from the new association than the other way around. Storage costs money, and scaling isn’t cheap. Or easy. (Maybe Microsoft can buy TypePad.)

Before we assume that Google and Yahoo, in particular, are going to throw out their web 1.0 cash cows in favor of shiny new web 2.0 branded calves, we should admit that the evidence of our eyes does not support the what-ifs generated by our fevered imaginations. Disappointing, true; but it is fun to watch each company take a turn on the hamster wheel each week. (Saaayyy, that’s who Microsoft can buy….)

More on Google and AOL.

More on Yahoo and Del.icio.us.

Categories
People Web Weblogging

It’s a mountain Mohammed thing

“So I have a blog” the words read, as I scrolled down the entries at Planet RDF. And then I noticed the author: Tim Berners-Lee.

In his first weblog entry, Sir Tim wrote:

…it is nice to have a machine to do the administrative work of handling the navigation bars and comment buttons and so on, and it is nice to edit in a mode in which you can do limited damage to the site. So I am going to try this blog thing using blog tools. So this is for all the people who have been saying I ought to have a blog.

For all those who claim to be first, there is no doubt who really was first, though he is late to this particular party. Probably all that Web 2.0 stuff floating around.

I do believe that Sir Tim is also the first weblogger to hold the rank of Knight Commander of the Order of the British Empire. Mind the language, children. Mind the language. No more of this informal lower-case ‘s’, ‘w’ when talking about the Semantic Web now.

Categories
Web Weblogging

Online

Aside from adding some links and text, my first release of OutputThis! is online. Notice the exclamation point? Punctuation is the new black.

The rollout of the Structured Blogging work is tomorrow afternoon, but I’ve been playing with it today. When SB rolls out tomorrow I’ll list links to the test weblogs, but for now, you can check out OutputThis! Yes, I designed it. Yes, I know you hate it.

There have been some odds and ends today about the ‘forking’ nature of Structured Blogging. The charge makes no sense, and the folks who are concerned haven’t posted anything online expressing their concerns, so end of story.

What is it, though, with webloggers who reach a point of success and then seem to stop weblogging? Is that the key to getting rid of webloggers–help them become successful at weblogging, and then they’ll stop weblogging? For all those people who don’t care for me and who would like to see me disappear, here’s your chance: help make me a Successful Weblogger, and I’ll go away.

In the meantime, I have a couple of long posts I’m working on and a links post to some very nice stuff you all are writing. I am surrounded by such talented people, which is good for folks like me; too bad for you, though, that you’re not successful enough at weblogging to give up weblogging.

Categories
Web

Quick note before bed

Phil Pearson is talking about the project I’ve been working on for Broadband Mechanics the last few weeks. I’m not working on the Structured Blogging component; I’m creating a middleware server called Outputthis.

A limited version of the service will be up Tuesday for the Big Rollout, and then I need to add the rest: autodiscovery of web services through RSD, and full update and delete for the Profile and Targets. But it should be enough for demonstration purposes and alpha testing/use on Tuesday.

Outputthis provides services that allow you to register weblogs or other resources that you might want to post to through the Structured Blogging “Blog This!” functionality. When you click the Blog This button, one of my services returns the list of weblogs you’ve registered; you check the ones you want, click the button, and the next thing you know, the post has been published to all of those sites.
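To give a rough idea of what that fan-out looks like on the server side, here’s a sketch. The function and field names below are hypothetical placeholders of my own, not the actual Outputthis API.

// hypothetical sketch of the fan-out step -- the helper send_post_via_api()
// and the array fields are placeholders, not the real Outputthis internals
function post_to_selected_weblogs($post, $selected_weblogs) {
    $results = array();
    foreach ($selected_weblogs as $weblog) {
        // each registered weblog carries the posting endpoint and credentials
        // captured when it was registered with the service
        $results[$weblog['name']] = send_post_via_api(
            $weblog['api_endpoint'],
            $weblog['credentials'],
            $post
        );
    }
    return $results;   // one success/failure result per target weblog
}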

Right now we’re upgrading the database, and I’m fighting a really odd incompatibility between mcrypt and the xml_parser, so the service isn’t running; something I’ll fix in the morning. Besides, it’s too early to turn it on–the rollout is Tuesday, and the focus at the party will be on the web 2.0 stuff like the Structured Blogging plugins (which are impressive); not the web 1.0 stuff, like Outputthis.

In the meantime, whatever Phil and Kimbro and the others have done with the SB plugin is not forking.

Categories
Technology Web

Accessing the Newsgator API within PHP

One of the programming jobs I’ve had recently was to provide PHP functions to access the Newsgator SOAP API, hiding as many of the SOAP bits as possible. I used the nuSOAP PHP library as the basis for my work. Though SOAP functionality is built into PHP 5, my client, like most people, is still using PHP 4, and nuSOAP has a very clean implementation.

For those who might want to give the API a shot, I’ll walk through some sample code that should be easy to modify to suit your own interests. I had hoped to write a more complete application, but I ran out of time.

The Newsgator API requires an account in order to test the code, but you can sign up for one at no charge. When you get an account, you’re given an online Newsgator Location in which to add subscriptions. You’re also given the ability to create new locations and folders, and to subscribe to and read syndication feeds. The API itself is split into five main categories, one for each of the five SOAP endpoints: Locations, Folders, Subscriptions, Feeds, and Posts.

Each SOAP endpoint page lists the web service methods for the specific item, including a description of the parameters and the values returned. The important element on each page is the link to the endpoint itself, at the bottom. Clicking it opens a window asking for your account username and password. Once you enter these, the endpoint page opens, containing links for each of the methods.

Clicking a method link opens another page, usually containing a form and an example SOAP request and response. The examples are essential for determining the values to use with nuSOAP. You can also test the web service by typing values into the form and invoking the method, as long as the parameters are simple values rather than programmatic structures such as arrays.

Once you’ve looked through the API methods to see what parameters are needed, and explored the actual SOAP request and response, it’s just a matter of plugging in values within the nuSOAP functions. To demonstrate, I’m going to walk through a program that creates a SOAP client, queries the service for all subscriptions for a given location, and then accesses and prints out links to the individual items for the subscriptions.

In the program, I first create a SOAP client using the appropriate endpoint, checking for any error afterwards. (Complete source code is provided later, so no worries about any gaps in the code):

// load the nuSOAP library
require_once('nusoap.php');

// create SOAP client for the Subscription endpoint
$client = new soapclient("http://services.newsgator.com/ngws/svc/Subscription.asmx");
$err = $client->getError();
if ($err) {
    // err() is a small error-reporting helper defined in the complete source
    err($client, $err);
    die();
}

I’m not using a proxy or WSDL, so no parameters other than the endpoint are set.
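If you’d rather let the service describe itself, nuSOAP can also build the client from WSDL. Here’s a minimal sketch, assuming the endpoint exposes its WSDL at the usual ?WSDL address for .asmx services:

// alternative: build the client from the WSDL description
// (assumes the WSDL is available at ?WSDL, as .asmx services usually provide)
$client = new soapclient("http://services.newsgator.com/ngws/svc/Subscription.asmx?WSDL", true);
$err = $client->getError();
if ($err) {
    err($client, $err);
    die();
}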

Next, I define the method’s parameters: in this case, a location string and a synchronization token. The token is used to synchronize data between method calls; you’ll see it returned as part of the response, and passing that returned value into the next method call ensures that the data, such as the count of unread items for each subscription, is fresh.

// set parameters
$params = array(
    'location'  => $location,
    'syncToken' => $synctoken
);

During the initial web service request, the synch token is blank.
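For reference, the handful of variables used above and below come from the top of the complete program. A placeholder setup might look something like this; substitute your own account values, and note that the token is whatever Newsgator issued to you for API access:

// placeholder values -- substitute your own account settings
$user      = 'your-username';        // Newsgator account username
$pass      = 'your-password';        // Newsgator account password
$token     = 'your-api-token';       // token passed in the NGAPIToken SOAP header
$location  = 'MyLocation';           // name of the Newsgator location to query
$synctoken = '';                     // blank on the first request; reuse the returned value after that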

Once the method parameters are set, I add code to authenticate the user:

// authenticate against the service
$client->setCredentials($user, $pass, 'basic');

Note that this example uses BASIC authentication; Newsgator also supports DIGEST authentication.
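Switching should just be a matter of changing the credential type; as far as I know, nuSOAP handles digest through the cURL extension, so make sure that’s available:

// alternative: DIGEST authentication
// (nuSOAP relies on the cURL extension for digest, so it must be compiled in)
$client->setCredentials($user, $pass, 'digest');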

The Newsgator API token is passed in a SOAP header, which I build manually next. Note that the header element must be qualified with the appropriate service namespace:

// create SOAP header carrying the Newsgator API token
$hdr = "<ng:NGAPIToken xmlns:ng='http://services.newsgator.com/svc/Subscription.asmx'>
<ng:Token>$token</ng:Token></ng:NGAPIToken>";

Finally, we can now invoke the service:

// invoke SOAP service
$result = $client->call('GetSubscriptionList', $params,
    'http://services.newsgator.com/svc/Subscription.asmx',
    'http://services.newsgator.com/svc/Subscription.asmx/GetSubscriptionList',
    $hdr, false, 'rpc', 'literal');

// check for error
if ($client->fault) {
    echo '<h2>Fault</h2><pre>'; print_r($result); echo '</pre>';
} else {
    $err = $client->getError();
    if ($err) {
        echo '<h2>Error</h2><pre>' . $err . '</pre>';
    }
}

In this function call, the SOAP method is the first parameter, followed by the method parameters, the SOAP endpoint (namespace), the SOAP action, the manually created header, a placeholder for the deprecated rpcParams argument (false), the serialization style ('rpc'), and the serialization for the parameters ('literal').

The nuSOAP function returns any XML it receives as multidimensional arrays. With this service call, the subscriptions come back as OPML, the values of which you can access by walking through the array:

// decipher the array, based on OPML
$opml = $result["opml"];
$body = $opml["body"];
$outline = $body["outline"];
$synctoken = $opml["!ng:token"];   // sync token to reuse in the next request
foreach ($outline as $key => $sub) {
    $feed = $sub["!ng:id"];        // Newsgator feed identifier
    $title = $sub["!title"];
    $url = $sub["!htmlUrl"];
    echo "<a href='$url'>$title</a><br />";
}

After each subscription is accessed, the feed identifier ($feed) is then used to invoke another service to get the news for the feed. The complete application demonstrates this.
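As a rough sketch of that next step, the call looks much like the subscription call, pointed at the Feed endpoint instead. The method name and parameter names here are from memory, so treat them as assumptions and check them against the Feed endpoint page before using them:

// follow-up call against the Feed endpoint -- method and parameter names
// are assumptions; verify them on the Feed.asmx endpoint page
$feedclient = new soapclient("http://services.newsgator.com/ngws/svc/Feed.asmx");
$feedclient->setCredentials($user, $pass, 'basic');

$feedhdr = "<ng:NGAPIToken xmlns:ng='http://services.newsgator.com/svc/Feed.asmx'>
<ng:Token>$token</ng:Token></ng:NGAPIToken>";

$feedparams = array(
    'feedId'    => $feed,        // identifier pulled from the OPML above
    'syncToken' => $synctoken    // sync token returned by the subscription call
);

$news = $feedclient->call('GetNews', $feedparams,
    'http://services.newsgator.com/svc/Feed.asmx',
    'http://services.newsgator.com/svc/Feed.asmx/GetNews',
    $feedhdr, false, 'rpc', 'literal');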