Web 9.75

“The precision of naming takes away from the uniqueness of seeing.” – Pierre Bonnard

Nick Carr comments on Google’s Web 3.0, pointing out that Web 3.0 was supposed to be about the Semantic Web, or, as he puts it, the first step in the Machine’s Grand Plan to take over.

For all the numbers we flash about there really are only so many variations of data, data annotation, data access, data persistence, and whatever version of “web” features the same concepts, rearranged. Perhaps instead of numbers, we should use descriptive terminology when naming each successive generation of the web, starting with the architecture of the webs.

Application Architectures

thin client

This type of application is old. Older than dirt. A thin client is nothing more than an access point to a server, typically managing protocols but not storing data or installing applications locally. All the terminal traditionally does is capture keystrokes and pass them along to the server-based application. The old mainframe applications were, and many still are, thin clients.

There was a variation of thin client a while back when the web was really getting hot: the network computer. Oracle did not live up to its name when it invested in this functionality, long ago. The network computer was a machine created to access the internet and serve up pages. In a way, it’s very similar to what we have with the iPhone and other handheld devices. There is no way to add third-party functionality to the interface device, and any functionality, at all, comes in through the network.

Is a web application a thin client? Well, yes and no. For something like an iPhone or Apple TV, I would say yes, it is a thin client. For most uses, though, web applications require browsers and plug-ins and extensions, all of which do something unique, and require storage and the ability to add third-party applications, as well as processing functionality on the client. I would say that a web application where most of the processing is done on the server, little or none in the browser, is a thin client. Beyond that, though, the web application would be…


client/server

A client/server application typically has one server or group of servers managed as one, and many clients. The client could be a ‘thin’ client, but when we talk about client/server, we usually mean that there is an application, perhaps even a large application, on the client.

In a client/server application, the data is traditionally stored and managed on the server, while much of the business processing as well as user interface is managed on the client. This isn’t a hard and fast separation, as data can be cached on the client, temporarily, in order to increase performance or work offline. Updates, though, typically have to be made, at some point, back to the server.
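To make the caching bit concrete, here’s a sketch in Python. Everything here is hypothetical–no real product’s API–but it shows the shape of the thing: the client keeps local copies of what it reads and queues its writes until it can sync back to the server.

```python
class Server:
    """Authoritative data store: the 'server' side of the pair."""
    def __init__(self):
        self.records = {}

    def read(self, key):
        return self.records.get(key)

    def write(self, key, value):
        self.records[key] = value


class CachingClient:
    """Client that caches reads and defers writes until a sync."""
    def __init__(self, server):
        self.server = server
        self.cache = {}      # local copies, for speed or offline work
        self.pending = []    # writes not yet pushed back to the server

    def read(self, key):
        if key not in self.cache:           # cache miss: go to the server
            self.cache[key] = self.server.read(key)
        return self.cache[key]

    def write(self, key, value):
        self.cache[key] = value             # update locally right away...
        self.pending.append((key, value))   # ...and remember to sync later

    def sync(self):
        for key, value in self.pending:     # push the deferred writes back
            self.server.write(key, value)
        self.pending = []
```

The write stays local until sync() runs–that’s the ‘updates typically have to be made, at some point, back to the server’ part.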

The newest incarnation of web applications, the Rich Internet Applications (RIA), are, in my opinion, a variation of client/server applications. The only difference between these and applications that have been built with something like Visual Basic is that we’re using the same technologies we use to build more traditional web applications. We may or may not move the application out of the browser, but the infrastructure is still the same: client/server.

However, where RIA applications may differ from the more traditional web applications is that RIA apps could be a variation of client/server: a three-tier client/server application…


n-tier client/server

In a three-tier, or more properly n-tier, client/server application, there is separation between the user interface and the business logic, and between the business logic and the data, creating three levels of control rather than two. The reasoning is that changes in the interface between the business layer and the data don’t necessarily impact the UI, and vice versa. To match the architecture, the UI can be on one machine, the business logic on a second, and the data on a third, though the latter isn’t a requirement.
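A sketch of that separation, with made-up names: the UI talks only to the business layer, and the business layer talks only to the data layer, so either end can change without the other knowing.

```python
class DataTier:
    """The only layer that knows how records are actually stored."""
    def __init__(self):
        self._rows = {"order-1": {"total": 100.0}}

    def fetch(self, order_id):
        return self._rows[order_id]


class BusinessTier:
    """Business rules live here, between the UI and the data."""
    def __init__(self, data):
        self.data = data

    def order_total_with_tax(self, order_id, rate=0.07):
        order = self.data.fetch(order_id)
        return round(order["total"] * (1 + rate), 2)


def render(business, order_id):
    """UI tier: knows only the business tier's interface."""
    return f"Total due: {business.order_total_with_tax(order_id)}"
```

Swap DataTier for something backed by a real database and neither of the other two layers needs to change–that’s the point of the extra tier.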

Some RIA applications can fit this model, because many do incorporate a concept of a middleware component. As an example, the newer Flex infrastructure can be built as a three-tier with the addition of a Flex server.

Some web applications, whether RIA or not, can also make use of another variation of client/server…

distributed client/server

Traditional client/server means many clients working against one set of business logic mapped to a database server, running serially. It’s the easiest type of application to create, but also one of the least likely to scale, and from this arises the concept of a distributed client/server, or distributed computing, architecture.

The ‘distributed’ in this title comes from the fact that the application functionality can be split into multiple objects, each operating on possibly different machines at the same time. It’s the parallel nature of the application that tends to set this type of architecture apart, and which allows it to more easily scale.
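A toy version of that parallel split, assuming nothing about any particular middleware: carve the work into independent pieces, hand each piece to its own worker, and combine the results.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a distributed object doing real work on its piece.
    return sum(chunk)

def distributed_sum(data, workers=4):
    # Split the data into roughly equal chunks...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...run each chunk on its own worker, then combine the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))
```

In a real distributed system the workers would be objects on other machines behind CORBA, RMI, or the like; the shape of the problem–partition, run in parallel, combine–is the same.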

J2EE applications fit the distributed computing environment, as does anything running CORBA or the older COM and the newer .NET. It is not a trivial architecture, and needs the support of infrastructure components such as WebLogic or JBoss.

This ‘distributed parallel’ functionality sounds much like today’s widget-bound sidebars, wherein a web page can have many widgets, each performing a small amount of functionality on a specific piece of data at the same time (or as parallel as can be, considering that the page is probably not running in a true multi-threaded space).

Remember, though, that widgets tend to operate as separate and individual applications, each to their own API (Application Programming Interface) and data. Now, if all the widgets were front ends to backend processes running in parallel, and working together to solve a problem, then the distributed architecture shoe fits.

There’s a variation of distributed computing–well, sort of–which is…

Service Oriented Applications

Service Oriented Applications (SOA), better known as ‘web services’. These are the APIs, the RESTful service requests, and the other services that run the web we seem to become more dependent on every day. Web services are created completely independent of the clients, supporting a specific protocol and interface that makes the web services accessible regardless of the characteristics of the client.

The client then invokes these services, sending data, getting data back, and does so without having any idea of how the web services are developed or what language they’re developed with, other than knowing the prototype and the service.
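In miniature, the contract looks something like this sketch. The service name and payload are invented; the point is that the client sees only a uniform ‘request in, response out’ interface, never the implementation behind it.

```python
# A 'service' here is anything callable that maps a request dict
# to a response dict -- the client never sees inside it.
def weather_service(request):
    # Hypothetical implementation; in real life this could be written
    # in any language and run on any machine behind HTTP.
    temps = {"stl": 95, "sea": 62}
    return {"city": request["city"], "temp": temps[request["city"]]}

def invoke(service, request):
    """Generic client: knows the protocol (dict in, dict out), nothing else."""
    return service(request)
```

Swap weather_service for a stub that POSTs the request over the wire and the client code doesn’t change–that independence is what makes it SOA.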

Clean and elegant, and increasingly running today’s web. The interesting thing about web services is that they can be anywhere from almost trivially easy to tortuously complex to implement. And no, I didn’t specifically mention the WS-* stack.

Of course, all things being equal, no simpler architecture than…

A stand alone application

A stand alone application is one where no external service is necessary for accessing data or processes. Think of something like Photoshop, and you get a stand alone application.

The application may have internet capabilities but typically these are incidental. In addition, the data may not always be on the same machine, but it doesn’t matter. For instance, I run Photoshop on one Mac, but many of my images are on another Mac that I’ve connected through networking. However, though I may be accessing the data on the ‘net, the application treats the data as if it is local.

The key characteristic of a stand alone application is that you can’t split the application up across machines — it’s all or nothing. It’s also the only architecture that can’t ‘speak’ web, so we can’t look for the Web 3.0 among the stand alones.

Alone again, naturally…

No joy in being alone; what we need is a little help from our friends.


P2P

P2P, or peer-to-peer, applications are built in such a way that once multiple peers have discovered each other through some intermediary, they communicate directly–sharing either process, data, or both. A client can become a server and a server can become a client.

Joost is an example of a P2P application, as is BitTorrent. There is no centralized place for data, and the same piece of data is typically duplicated across a network. Using a P2P application, I may get data from one site, which is then stored locally on my machine. Another person logging on to the P2P network can then get that same piece of data from me.
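The sharing pattern can be sketched with a few dictionaries standing in for peers–all hypothetical, but it shows how a peer that has downloaded a piece of data can then turn around and serve it to the next peer that asks.

```python
class Peer:
    def __init__(self, name, data=None):
        self.name = name
        self.data = dict(data or {})   # pieces this peer holds locally

    def fetch(self, key, network):
        """Get a piece from any peer that has it, then keep a local copy."""
        if key in self.data:
            return self.data[key]
        for peer in network:
            if peer is not self and key in peer.data:
                self.data[key] = peer.data[key]   # duplicate locally
                return self.data[key]
        raise KeyError(key)

# Seed one peer with a file piece; the others start empty.
seed = Peer("seed", {"song.ogg": b"bits"})
alice = Peer("alice")
bob = Peer("bob")
network = [seed, alice, bob]

alice.fetch("song.ogg", network)   # alice pulls from the seed
bob.fetch("song.ogg", network)     # bob can now pull from alice or the seed
```

After the first fetch, alice is a server as much as a client–which is the whole trick, and why the load spreads as the network grows.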

The power of this environment is that it can really scale. No one machine is burdened with all data requests, and a resource can be downloaded from many sources rather than just one. It is not a trivial application, though, and requires careful management to ensure that any one participant’s machine isn’t made overly vulnerable to hacking, that downloads are complete, that data doesn’t get released into the wild, and so on. Communication and network management is a critical aspect of a P2P application.

These are the architectures, at least, the ones I can think of off the top of my head. Which, then, becomes the ‘next’ Web, the Web 3.0 we seem to be reaching for?

Web 3.0

Ew Ew Ew! The next generation of the web must be Google’s cloud thing, right? So that makes Web 3.0 a P2P application, and we call it “Google’s P2P Web” or “MyData2MyData”?

Ah, no.

The concept of ‘cloud’ comes from P2P (*correct?). It is a lyrical description of how data appears to a P2P application…coming from a cloud. When we make a request for a specific file, we don’t know the exact location the file is pulled from; chances are, it’s coming from multiple machines. We don’t see any of this, though, hence the term ‘cloud’. Personally, I prefer void, but that’s just semantics.

The term cloud has been adopted for other uses. Clouds are used with ‘tags’ to describe keyword searches, the size of the word denoting the number of requests. I read once where a writer called the entire internet a cloud, which seems too generic to be useful. Dare Obasanjo wrote recently on the discussions surrounding OS clouds, which, frankly, don’t make any sense at all and, methinks, use cloud in the poetic sense: artful rather than factual.

The use of ‘cloud’ also occurs with SOA, which probably explains Google’s use of the term. And Microsoft’s. And Apple, if they wanted, but they didn’t–being Apple (Stickers on our machines? We don’t need no stinking stickers!) Is the next web then called, “BigCo SOA P2P Web”?

Let’s return to Google CEO Schmidt’s use of the cloud, as copied from Carr’s post, mentioned earlier:

My prediction would be that Web 3.0 would ultimately be seen as applications that are pieced together [and that share] a number of characteristics: the applications are relatively small; the data is in the cloud; the applications can run on any device – PC or mobile phone; the applications are very fast and they’re very customizable; and furthermore the applications are distributed essentially virally, literally by social networks, by email. You won’t go to the store and purchase them. … That’s a very different application model than we’ve ever seen in computing … and likely to be very, very large. There’s low barriers to entry. The new generation of tools being announced today by Google and other companies make it relatively easy to do. [It] solves a lot of problems, and it works everywhere.

With today’s announcement of Google shared space, we’re assuming that Google thinks of third-party storage as ‘cloud’, similar to Microsoft with its Live SkyDrive or Apple with its .Mac. It’s the concept of putting either data or processes out on third-party systems so that we don’t have to store them on our local machines or lease server space to manage such on our own.

In Google’s view, Web 3.0 is more than ‘just’ the architecture: it’s small, fast applications built on an existing infrastructure (think Mozilla, Silverlight, Flex, etc.) that can run locally or remotely; on phones, handhelds, and/or desktop or laptop computers; that store data locally and remotely; built on web services run on one or many machines, created by one company or more than one. I guess we could call Google’s web the Small, Fast, Device Independent, Remote Storage, SOA P2P Web, which I will admit would not fit easily on a button, nor look all that great with ‘beta’ stuck to its ass.

Not to mention that it doesn’t incorporate all that neat ‘social viral’ stuff. (I knew I forgot something.)

The social viral stuff

Whatever makes people think that Facebook or MySpace or any of the like is ‘new’? Since the very first release of the internet we’ve had sites that have enabled social gathering of one form or another. The only thing the newer forms of technology provide is a place where one can hang one’s hat without having to have one’s own server or domain. That’s not ‘social’–that’s positional.

Google mentions how we won’t be buying software at the store. I had to check the date on the talk, because we’ve been ‘spreading’ software through social contact for years. Look in the Usenet groups and you’ll see recommendations for software or links to download applications. Outside of an operating system and a couple of major applications, I imagine most of us download our software now.

What Google’s Schmidt is talking about isn’t downloaded software so much as software that has a small installation footprint or doesn’t even need to be installed at all. Like, um, just like the software it provides. (Question: What is Web 3.0? Answer: What we’re selling.)

Anyone who has ported applications is aware of what a pain this is, but the idea of a ‘platformless’ application has been around as long as Java has, which is longer than Google. It’s an attractive concept, but the problem is you’re more or less tied into the company, and that tends to wear the shininess off ‘this’ version of the web–not to mention all that ‘not knowing exactly what Google is recording about us, as we use the applications’ thing that keeps coming up in the minds of us paranoid few.

Is the next web then, the Small, Fast, Device Independent, Remote Storage, SOA P2P, Proprietary Web? God, I hope not.

Though Schmidt’s bits and cloudy pieces are a newer arrangement of technology, the underlying technology and the architectures have been around for some time: the only thing that really differs is the business model, not the tech. In this case, then, ‘cloud’ is more marketing than making. Though the data could end up on multiple sites, hosted through many companies, the Google cloud lacks both the flexibility and freedom of the P2P cloud, because at the heart of the cloud is…Google. I’ve said it before and will repeat it: you can’t really have a cloud with a solid iron core.

Though ‘cloud’ is used about as frequently as lipstick at a prom, I don’t see the next generation of the web being based on either Google’s cloud, or Microsoft’s. Or Adobe’s or Mozilla’s or Amazon’s or any single organization’s.

If Google’s Web 3.0, or, more properly, Small, Fast, Device Independent, Remote Storage, SOA P2P, Proprietary, Web with an Iron Butt, is a bust, does this mean, then, that the Semantic Web is the true Web 3.0 after all?

Semantic Web Clouds…and stuff

Trying on for size: a Semantic Client/Server Web. Nope. Nope, nope, nope. Doesn’t work. There is no such thing as a semantic client/server. Or a semantic thin client, or even distributed semantics, or SOA RDF, though this one comes closest, while managing to sound like something that belongs on a Boy Scout badge.

Semantics on the web is basically about metadata–data about data. Our semantic efforts are focused on how metadata is recorded and made accessible. Metadata can be recorded or provided as RDF, embedded in a web page as a microformat, or even found within the blank spaces of an image.
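The RDF way of doing this boils down to triples: subject, predicate, object. A sketch, with made-up file names (the dc: predicates are Dublin Core terms), of how data about data gets recorded and queried:

```python
# Each statement about a resource is a (subject, predicate, object)
# triple, RDF-style. The photo and its values are invented examples.
triples = [
    ("photo42.jpg", "dc:creator", "me"),
    ("photo42.jpg", "dc:subject", "tulips"),
    ("photo42.jpg", "dc:date", "2007-04-12"),
]

def query(triples, subject=None, predicate=None):
    """Return every triple matching whichever parts we pinned down."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)]
```

Ask for everything with dc:subject and you get the keywords; ask for everything about photo42.jpg and you get all its metadata. Smarter searches, as promised.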

We all like metadata. Metadata makes for smarter searches, more effective categorization, better applications, findability. If data is one dimension of the web, then metadata is another, equally important.

The semantic web means many things, but “semantic web” is not an application architecture, or a profoundly new way of doing business. Saying Web 3.0 is the Semantic Web implies that we’ve never been interested in metadata in the past, or have been waiting for some kind of solar congruence to bring together the technology needed.

We’ve been working with metadata since day one. We’ve always been interested in getting more information about the stuff we find online. The only difference now from the good old days of Web 1.0 is that we have more opportunities, more approaches, more people interested, and we’re getting better at collecting and using the metadata. Then again, we’re also getting better with just the plain data, too.

Web 3.0 isn’t Google’s cloud and it isn’t the Semantic Web and it certainly isn’t the Small, Fast, Device Independent, Remote Storage, Viral, SOA, P2P, Proprietary, Smart Web with an Iron Butt. Heck, even Web 3.0 isn’t Web 3.0. So what is the next great Web, and what the devil are we supposed to call it?

Web 9.75

It is a proprietary thing, this insistence on naming things. “From antiquity, people have recognized the connection between naming and power”, Casey Miller and Kate Swift wrote.

We can talk about Web 1.0, or 2.0, or 3.0, but my favorite is Web 9.75, or Web Nine and Three-Quarters. It reminds me of the train platform in the Harry Potter books, which could only be found by wizards. In other words, only found by the people who need it, while the rest of the world thinks it’s rubbish.

There are as many webs as there are possible combinations of all technologies. Then again, there are as many webs as there are people who access them, because we all have our own view of what we want the web to be. Thinking of the web this way keeps it a marvelously fluid and ever changing platform from which to leap unknowing and unseeing.

When we name the web, however, give it numbers and constrain it with rigid descriptions and manufactured requirements, then we really are putting the iron into the cloud; clipping our wings, forcing our feet down paths of others’ making. That’s not the way to open doors to innovation; that’s just the way to sell more seats at a conference.

Instead, when someone asks you what the next Web is going to be, answer Web 9.75. Then, when we hear it, we’ll all nudge each other, wink and giggle because we know it’s nonsense, but no more nonsense than Web 1.0, Web 2.0, Web 3.0, or even Google’s Web-That-Must-Not-Be-Named.

*As reminded in comments, network folks initially used ‘cloud’ to refer to that section of the network labeled “…and then a miracle happens…”


SnagIt equivalent for Mac

I love SnagIt for the PC. I’m using it for this book, and I’ve included a description of it in the book, as one of the tools covered. It’s a great screen capture tool.

Only problem: no version for the Mac.

Does anyone have any suggestions for a comparable tool for the Mac? Other than Grab? What I’m looking for is a tool that not only does great screen captures in multiple ways (window, selection, timed, desktop, paged), but also provides the post-capture annotation, such as the nice looking arrows, cursors, and graphics, as well as tasks such as select and magnify, and so on.

If it has a download trial or is shareware or even free, all the better.

More about SnagIt for Mac


Raw shoots

I’m looking at various RAW editors, including UFRaw and Adobe’s Camera RAW, but also downloaded a copy of RawShooter 2006. The company that produced this tool, Pixmantec, is no more, having been bought out by Adobe. However, the tool can still be downloaded, though the registration process fails each time you open it. A minor nuisance, no more.

It’s not the best of the RAW editors, but it is fast and one of the simplest to use. I like the batch conversion, but I also like the slideshow, similar to what you get with Lightroom or Aperture.

I was testing it out yesterday and started the slideshow for a set of photos I had taken of tulips at the Botanical Gardens this spring. No matter what tools you use in your photo workflow, nothing beats a slideshow to give you your first really good look at the photos as a set.

I keep most of my raw images after a shoot. Not the obviously bad ones that can’t be recovered: too blurry, too overexposed or underexposed, or missed subject; but the ones I didn’t especially care for at the time. You never know how much your perspective is going to change after a few months, and a picture you thought was uninteresting one day may suddenly seem to have potential another.

More importantly, when you’re stuck inside because it’s 108 degrees in the shade outside, and all the trees and lawns are baked brown, there’s nothing more refreshing than sitting down to a slideshow of rain kissed tulips. It just doesn’t matter if 99% of the photos will never see the inside of a web page or picture frame, or if the only person to see them is the same who took the pictures originally. It doesn’t even matter if they’re ‘good’ or art.

There is just something very satisfying about sitting down to a monitor-sized slideshow of old photos.


Lasting stuff

This is the slow dissolving, long lasting stuff edition:

  • Photography is dead, from Erwin’s Home: The essence of film-based photography is not only the fact that the mechanism of capturing an image and fixing it in a silver halide grain structure creates a final picture that can hardly be altered. The fundamental issue here is the fact that the laws of physics create the image, in particular by the characteristics of light rays and the interaction between photons and silver halide grains. Photography is writing with light, and fixing the shadows. Human interaction and manipulation are minimized and reduced to the location, viewpoint and moment of exposure by the photographer. Reading the new book about Cartier-Bresson, the Scrapbook, makes one aware of that peculiar and forceful truth that photography is not only intimately linked to the use of film, but in fact depends for its very existence on film. If photography is dependent on film and not the photographer’s drive, interest, eye, skill, and talent, then all I have to do to become a great photographer is blow the dust off my old film camera, load it with film, stand on a corner and, every once in a while, snap the shutter.
  • Now is not the time to hear that global warming is going to increase drastically, though I have at least two years to move before it gets really bad. Not everyone agrees with the predictions, though. Freeman Dyson, a physicist at Princeton, states: My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. Of course, they say, I have no degree in meteorology and I am therefore not qualified to speak. But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models. (via 3Quarks)

    An interesting read, but in the end, Dr. Dyson doesn’t convince one of anything. His arguments are based more on anecdotes and opinion than on presenting anything factual that one can then review and either accept or reject. He also puts too much faith, in my opinion, in humanity’s ability to ‘fix’ things at some future time if predictions of climate change do occur. He then wraps all of this in his ‘heresy’, as if to make himself seem a maverick, when there have always been people who have argued against the prevailing views of global climate change. He strikes me as a man who doesn’t want to see what the climatologists are predicting, but rather than focus on the sacrifice asked of today’s people, he disputes that any such prediction is possible because of all the variances that exist in the world. The thing is, from what I know of climatological models, they do account for all that ‘messiness’.

  • Loren Webster wrote an in-depth review of Zen and the Art of Motorcycle Maintenance (ZAMM). I enjoyed reading the posts, and hearing Loren’s views. I read ZAMM once, a long time ago. I remembered thinking after reading the work that this was a book written by a man for men, though there is nothing in the work that is even remotely sexist. I felt, though, that I was reading a book written in a language I’ve learned to speak fluently, but which wasn’t my native language. After Loren’s reviews, I might try reading it again, and see if I still suffer the same disconnect.
  • If you haven’t seen the South Korean film The Host (Gwoemul), I can’t recommend it strongly enough. I was expecting a creature feature, but I wasn’t expecting such excellent special effects, darkest black humor, and a fascinating look at South Korean culture, which may, or may not, match what actually exists in South Korea. Not to mention subtle and not so subtle digs at the US.

    I don’t want to give away much of the storyline other than a huge creature terrorizes Seoul, capturing the youngest daughter of an amazingly dysfunctional family. The rest of the movie is then taken up with the family’s attempt to rescue her from the beast, taking the members to hospitals, along water fronts, and into telecom companies.

    This is not a ‘likable’ family, either, at least not in the beginning. But as they traverse the shoals of bureaucracy and the lies of corporate and military leaders alike, not to mention the homeless, ecowarriors, and, well, the beast, they rather grow on you. One reviewer described it as …a mutant hybrid spawned from the improbable union of Little Miss Sunshine and Godzilla, which is as good a description as any.

    There was one scene, in particular, where the family is seated at a table eating their dinner. It was seemingly incidental to the movie, but it captured simply, without edging over into the maudlin, the relationships within the family–all without one word being exchanged. It was brilliantly done, unusual, but captivating.

    I watched it in Korean with English subtitles, which I recommend; in my opinion, dubbing destroys movies. I wanted to see The Host at the St. Louis film festival last year, but they were out of tickets. Too bad, too, because I bet the movie was exceptional on the bigger screen. Still, it translates to the smaller screen nicely.

    Rotten Tomatoes critics give it a 92%, unusually high for that site. Out of five stars, it gets a five star rating from me.

The Host Movie Poster


DC Weblog

Recovered from the Wayback Machine.

Missouri folks: rest of you close your eyes

Missourinet posts a note that Lorna Domke from the Department of Conservation is starting a weblog. One of her first stories is on Pickle Springs. (Remember when I wrote on Pickle Springs? Back when I used to have a life?)

Ms. Domke does need to find her own ‘voice’, but that will come in time. I’ve added her weblog to my reading and am looking forward to more. Well, maybe not more stories on hunting and fishing, but that goes with the conservation territory.