Categories
Copyright Legal, Laws, and Regs

Creative inevitability

It was with a sense of inevitability that I read about the lawsuit against Creative Commons and Virgin Mobile Australia. The suit came about because of Virgin Mobile’s recent use of photos licensed for commercial use under a CC license.

It was not surprising to read Lawrence Lessig’s optimistic look at the issue, though his segue from a thoughtful look at where things went wrong to “everything worked as planned” makes for a rather interesting read:

this case does again highlight the free culture function of the Noncommercial term in the CC license. Many from the free software community would prefer culture be licensed as freely as free software — enabling both commercial and noncommercial use, subject (at least sometimes) to a copyleft requirement. My view is that if authors so choose, then more power to them.

But this case shows something about why that objective is not as simple as it seems. I doubt that any court would find the photographer in this case had violated any right of privacy merely by posting a photograph like this on Flickr. Nor would any court, in my view, find a noncommercial use of a photograph like this violative of any right of privacy. And finally, as the world is just now, while many might resist the idea of Virgin using a photograph of theirs for free (and thus not select a license that explicitly authorizes “commercial use”), most in the net community would be perfectly fine with noncommercial use of a photograph by others within the net community.

The Noncommercial license tries to match these expectations. It tries to authorize sharing and reuse — not within a commercial economy, but within a sharing economy. It tries to do so in a way that wouldn’t trigger at least most non-copyright rights (though again, most is not all — a CC BY-NC licensed photograph by a voyeur still violates rights of privacy, for example). And it tries to do so in a way that protects the copyright owner against presumptions about the waiver of his rights suggested by posting a work freely.

I began writing about my concerns with the Creative Commons licenses as soon as they were released. Years ago, in response to a comment by Sam Ruby, I wrote about the potential for confusion associated with the CC licenses:

Sam, in the legal world there is no ‘seed’ planting. There is clarification or confusion.

Not all forward motion is positive. I’d rather see people hesitate on using the CCL, and the CC open a dialog with the community (through a weblog with comments or a discussion group or the like), than continue using the CCL, perhaps incorrectly, all based on wonderful-sounding words and a cute movie.

I appreciate the nobility of the Creative Commons intent and effort. But I’d appreciate it more if they combined that with an interactive element that allows us all to understand better what it all means.

I guess we have a better idea of what it all means now. But I wrote that over five years ago.

In response to this issue, Suw Charman wrote:

I like to think that the world is based on goodwill. People are, generally speaking, nice and, by default, they will respect and help others. Certainly humans are fundamentally and inescapably social creatures that need each other on a minute-by-minute and day-to-day basis, and I think that being nice is one of the attributes that fuels the reciprocation that makes helping someone else ultimately worth it for us ourselves.

I also think that the social web is an expression of the niceness that lubricates society. All the mores that have built up around blogging and wikis and sharing and Creative Commons are based on being nice: if you quote someone’s blog, it’s being nice to credit them; Wikipedia encourages everyone to be nice to newbies; sharing anything with strangers is an act of niceness in itself; and Creative Commons licences are predicated on the idea that people will be nice and respect them.

Whilst niceness isn’t universal – there are people who aren’t nice – it is a desirable attribute, so much so that niceness is taught and enforced from birth. I doubt there’s anyone reading this who wasn’t told as a child to “be nice” or to “play nicely”. Nice is good. We need nice.

This might explain why I get so cross when I come across examples of people, or especially businesses, not playing nice. But thanks to the internet, we now get to call out companies who, whilst sticking to the letter of the law (or Creative Commons licence), are flagrantly abusing its spirit.

The online world–Suw’s ‘social web’–is no different than the offline world: there are people who give all, and people who take all, and the rest of us in the middle just trying to get by. The online world–with its Creative Commons, Wikipedia, Citizen Journalism, Social Network/Web/Graph goodness–is no more ‘nice’ than the towns, cities, or hamlets we live in; it’s just newer is all and we don’t have to worry about landfill. Continuing to set any of this up on a pedestal only serves to generate a false sense of trust and security that inevitably leads to disillusionment.

In the post associated with the comment I quoted earlier, I wrote (with some modifications to grammar):

Pessimists see the world from its dark side—always the glass half empty. They never see that the world can be made better, or that problems can be solved. They’re not constructive, but they aren’t destructive, either.

Idealists, on the other hand, only see the light. In their world, the sun always shines (except for that bit of rain needed for the trees), the birds always sing, and humanity exists in harmony. They are pleasant, but they can also be destructive.

The idealist is destructive where the pessimist isn’t, introducing change without concern for the consequences. They say, “Look at this wonderful thing I have given you!”, but don’t provide the user manual. After you’ve managed to blow up a city block, when you go looking for the idealist, they’ve moved on to another part of the world, to drop yet another idealism bomb on some unsuspecting poor sod.

Idealists. You gotta love em, because if you didn’t you’d want to strangle them.

Where this is all leading is the release this week of the Creative Commons licenses: those digital goodies that one can attach to our creative efforts to let others know if they can use these efforts in defined ways. Collaboration and community, 101. Like our idealist, the Creative Commons have dropped this little bomb in our lap and then left it up to us to determine how to use these things, and what they really mean.

Jonathon Delacour, who has been called many things, usually with respect and affection, though I don’t think ‘nice’ was one of them, shared some of my misgivings about the CC licenses. He wrote:

Picasso and Braque stood on each other’s shoulders as they invented Cubism but they were careful (and sufficiently smart) to maintain the copyright on their works. The Creative Commons Licenses, on the other hand, typify Thomas Sowell’s unconstrained vision of human nature by relying on people (“I’ve never met”) to behave honorably and to respect the integrity of my work. Spend five minutes on “this Internet” and tell me I’m not bound for disappointment.

I wouldn’t be so skeptical if the Creative Commons Licenses relied less on a rose-tinted vision of benign collaboration and instead provided greater safeguards for the real interests of those licensing their original works; or if, to borrow Thomas Sowell’s words, they replaced—to at least some degree—their “moral vision of human intentions” with a more pragmatic acceptance of the “inherent moral and intellectual limitations of human beings.”

In other words—and pardon my bluntness—what’s in it for me? Really? Other than distress and disillusionment?

It is this determination of the Creative Commons to manufacture an online Utopia, to hold fast to the rose-tinted vision that Jonathon described–promoted by shrewd, sharp people who should have known better–that spurred me to write my criticisms years ago, and to continue writing on the topic in the times since.

The Creative Commons web site has never, to my knowledge, responded to challenges or discussion regarding the issues surrounding the licenses. When I devised a test of the CC licenses, when Creative Commons figured in a Dutch lawsuit, or when Virgin Mobile grabbed several CC-licensed photos from Flickr for its campaign, the Creative Commons community seemed more focused on eliminating every license type other than the one that caused the initial problems than on responding to the issues, or reflecting on perhaps providing stronger warnings.

Ultimately, who really does benefit from the Creative Commons? Andrew Orlowski, who has never been referred to as ‘nice’ either, as far as I know, wrote one of the most eye-opening summations of the Creative Commons I’ve read:

Few participants who slap a CC license on their work understand that the mechanism was designed to benefit the network, not the humans, by removing “frictions” such as compensation or consent.

Some would say it is not the CC organization’s responsibility to answer the critics or to meet the challenges–that the organization doesn’t have an obligation to warn as much as it promotes. I say that to stubbornly persist in wearing those rose-tinted glasses, to mark only the sunny hours, as the sundial does, is the ultimate irresponsibility. The Virgin Mobile lawsuit was inevitable, and it didn’t have to be.

It would seem that the online site Babble has been taking photos from Flickr, assuming they’re CC-licensed, even when the photos it takes are copyrighted by their owners.

No, I don’t blame CC. However, there is a growing assumption that photos at Flickr are CC licensed, and this is causing additional confusion. In addition, a CC licensed photo, even one designated as non-commercial, can be used in a magazine or newspaper, because that’s not necessarily considered ‘commercial’ use of the photo.

Just one of the many uncertainties and confusions around CC licenses, copyright, and fair use. That’s the main reason we shouldn’t be making it easier for people to license their work with CC.

Categories
Culture Diversity

The Jena 6

The big story in these parts today is the protest rallies in support of the group of black youths known as the “Jena 6”. Jena is a small town in Louisiana, and today the focus of about 10,000 protesters from all over the country.

It’s difficult to find the facts behind the story of the Jena 6, because there’s no person in the world better at burying unpleasantness than a Southerner.

At a minimum, two events happened that no one disputes:

The first event is that during a high school assembly, a black student asked the principal if the black kids could sit underneath the ‘white tree’–a big old shade tree that had previously been occupied by white students. The very next day, three white kids hung three (though I’ve also heard two) nooses from the same tree.

A noose is a known symbol in the South for lynching. More specifically, whites lynching blacks. The principal expelled the students, but the school superintendent overrode the principal and gave the white kids three-day in-school suspensions. It jez a prank, everyone says.

The second event takes place three months later. Six black kids are identified as having beaten up a white kid in the school hallway. The white kid ends up in the hospital, is treated, and is released two hours later. The six black kids are charged with attempted second-degree murder.

Right off the bat, you probably noticed, as I did, that it’s hard to think of a school fight as an attempt or a conspiracy to commit murder. At the same time, fair play suggests that it’s wrong for a group of students to gang up on another. This isn’t Rosa Parks we’re talking about here; the lines are not cut and dried.

But then the story gets even more interesting. I want to point out two writings on this event.

The first is by Mel Didier, a teacher at a nearby Lafayette Parish high school. Mr. Didier wrote that there were more than just these two incidents, and that in fact there’s a pattern of racist violence in this small town:

A black student is beaten at a social function, and no one is charged. The DA goes into a hastily called assembly and, looking directly at the African-American students, warns them that he can end their lives with the stroke of his pen.

A white graduate pulls a gun on three black students, who take the gun away. No charges are brought against the white grad, but the students are charged with theft when they don’t give the gun back.

A white student taunts a black student who was beaten at a party, and is jumped and beaten by six African-American students. Fox News points out that Justin Barker went to the hospital and was released the same day, attending a ring ceremony and a social function that evening.

The DA charges those accused of the attack with aggravated assault, and, when certain teachers and locals object, he ups the charges to attempted second-degree murder.

It’s difficult to deny a pattern of racial tension in this predominantly white community. What’s absolutely fascinating, though, is to read the front page of the Jena Times today, with a so-called timeline of events.

I’d copy text excerpts, but the paper actually made the story into a JPEG image and then inserted this into a frame. So I did the next best thing: copied pieces of the JPEG, highlighted phrases, and posted the result here. I encourage you, though, to read the original. If it gets pulled, let me know and I’ll post a copy.

Part of the Jena Times newspaper article

I can’t be the only one who finds it odd that the author kept downplaying any racial tension in the community while listing event after event inspired by racial tension. The author also stressed the ‘playfulness’ of the request about sitting under the white tree, when from other accounts this wasn’t a playful request. And it’s pretty obvious that when white kids are mentioned, they’re mentioned in a positive or neutral manner, while the actions of the black kids are portrayed negatively.

In fact, reading this timeline, I feel like I’ve been transported back in time to the late ’50s and early ’60s, when white kids beating up blacks was considered nothing more than ‘juvenile spirits’. What’s amazing is that the town newspaper thought to publish this to downplay the racial problems the town has, when all it did for me was convince me that they exist.

According to an MTV story, at one assembly where white and black students sat separately from each other, the DA held up a pen and said, specifically to the black students, that with one stroke of his pen, he could make their lives disappear. Of course, one person who lives near the area said the DA is more of a megalomaniac than a racist, but the end result is the same: justice is not prevailing.

I’m not fond of Al Sharpton, who is leading much of the protest, and I’m not condoning what the kids did: no matter how angry, six against one is wrong (if there were six; that hasn’t necessarily been proved). But this isn’t a case where these black kids decided to jump this white kid for nothing. Even the town’s most fervent supporters acknowledge the white kid taunted the black kids. This was a hall fight triggered by anger that got out of control, and it should have been prosecuted as such.

Attempted second-degree murder?

Other weblogs covering this story.

The weblog that seems to be following this event most closely is Pursuing Holiness, including a detailed post on the events of the day of the fight.

update

From the Chicago Tribune today:

The judge overseeing the racially-charged case of the Jena 6 declined Friday to release the only one of the six black teenagers still being held in jail, despite the fact that the youth’s conviction for aggravated second-degree battery was vacated a week ago by an appeals court, family members and court sources confirmed.

Bell has been jailed since the beating incident last December, unable to post $90,000 bond. That bond was rendered moot when Bell’s battery conviction was overturned by Louisiana’s Third Circuit Court of Appeals on Sept. 14, which ruled that Bell, who was 16 at the time of the incident, should have been tried as a juvenile. The local district attorney prosecuting the case, Reed Walters, has vowed to appeal that ruling, and to press ahead with his cases against the other five youths, who are free on bond.

But Bell remains in jail, and under the jurisdiction of juvenile court, because he is now being prosecuted as a juvenile on a count of conspiracy in connection with the beating. Mauffray’s ruling Friday means he declined to set any conditions for Bell’s release.

That will show us uppity outsiders how they do justice in Louisiana, yessir indeed.

Categories
Diversity Technology

Being Nice

Recovered from the Wayback Machine.

O’Reilly has been running a series this month titled Women in Technology. I contributed one of the earlier essays, titled “So What?”.

I had ambivalent feelings about participating, not the least of which was that I wasn’t sure grouping essays by a bunch of women together for publication during the same month was necessarily a ‘good thing’ for women. It becomes a little too much “Powderpuff O’Reilly”, a little too easy to tune out. Judging by the lack of response from most of the regular O’Reilly writers, and readers too, this concern has been borne out, but it’s been interesting to see who has participated and what they’ve written.

I don’t agree with all that’s written, and I have more than a suspicion that most of the other participants don’t agree with me, which is good because it just confirms my own decision not to write on this topic again–at least not in this environment or these pages. Especially when I read a post about one of the essays I disagreed with most strongly.

Flock of gulls

Carmelyne wrote:

Why does everyone argue negatively? The people who made comments argued negatively with the author. I can understand then why Amy didn’t like Articles about Women in Tech…Who needs that negativity? As Naruto would put it: “I like my positive chakra”. I don’t dwell any more on the negative side of being one of the few females in a male dominated environment/career.

This is not to pick on Carmelyne, who has a nicely designed site with a fun sense of color and pattern, in addition to a valid viewpoint: why dwell on the negative? Wanting to focus on the positive is understandable. What surprised me, though, is how much Carmelyne’s writing sounded like something else I had read, this time a comment by Tantek Celik in a post by Robert Scoble.

…thanks to all the social web technologies at our disposal, perhaps for the first time in history, people that are capable, humble, and nice can find each other in such numbers as to prioritize and focus their energies on each other rather than the emotional vampires that would otherwise sap them and drag them and their projects, companies etc. down with them.

Dave Rogers also made a small note on this general movement toward niceness, but then moved on to discussions of Heroes (opinions with which I agree) and music because, really, what more is there to say?

Birds flying

Which leads me back to not writing on “women in technology”. I always felt I had to write on this topic: to point out the conferences where women were missing, the all-male publications, the exclusively male panels–not to mention the lack of opportunities for women, as well as the lack of acknowledgment of what we’ve accomplished.

It’s not with disappointment but relief that I realized that such writings in this environment don’t work, haven’t worked, and are unlikely to work in the future. There are a hundred other things I’d rather write on, and now I no longer feel like I’m betraying womankind, and my own sense of responsibility, by doing so.

This series has been remarkably freeing for me.

Birds not

Categories
Diversity Technology

So What?

Recovered from the Wayback Machine. Part of the O’Reilly Women in Tech series.

A few weeks back, the book Beautiful Code: Leading Programmers Explain How They Think hit the streets. What a terrific concept: get several prominent programmers to write about their own unique perspective on programming and donate the money to a good cause (Amnesty International). It was, and still is, a good idea and book. What problem could I possibly have with it?

A quick look at the Table of Contents gives you a hint: of the 38 programmers who contributed to the book, only one was a woman. Just one woman. Even by today’s standards with few women staying in the field—and even fewer entering—the ratio of men to women in this book, frankly, sucks.

A discussion arose about the lack of women authors in this book, which included the fact that some of the women who were invited to contribute declined. There were the usual statements, the usual questions asked: “Give us lists of relevant women,” “Who should we have invited?” Yada, yada, yada, business as usual.

However, there were several comments that I found disquieting because they reflected other discussions we’ve had in the last year about the issue of the declining numbers of women in technology. The wording used has differed, but the views basically reduced down to, “So what?”

There is only one woman who contributed to the book. So what?

There are no women presenting at the conference. So what?

No women are listed among these top designers/developers/experts. So what?

Before this last year—regardless of the situation and the participants, regardless of the reasons people give for the growing lack of diversity in the tech field, regardless of solutions offered—one thing all participants in these discussions seemed to agree on was that this lack of women was not a good thing. Lately, though, I’m seeing a disinterest in the whole issue; an increasingly vocal opinion that it just really doesn’t matter.

I never lack for opinion, but this one has me stumped. Here we are in 2007, in an era where the numbers of women in “non-traditional” professions have been increasing, sometimes even past the 50 percent mark. No longer do women have to stay at home or choose only “soft” professions. We now have more choices, and the only limits we seemingly face are those we bring with us. Women serve in the military and die in action, lead major corporations, argue cases in the Supreme Court, and are anything from rocket scientists to neurosurgeons.

Yet in the IT fields, our numbers are dwindling. Significantly. We all have ideas why this is occurring, but nothing concrete that we can point at and say, “There, that’s why!” It’s a true puzzle. What’s more puzzling, though, is how many in the technology field just don’t care. They don’t see that a field that is becoming increasingly only male is a problem.

Is it a problem? Probably not, if only men use the gadgets, only men use the software, only men are impacted by the applications, and so on. Yet, we know that women typically use software as much or more than men. Women use the Internet, as much or more than men. Women buy and use the gadgets. What’s happening is that all the population is using an increasing number of applications that are architected, designed, developed, quality tested, and documented by only half the population. Less than half, because the tech industry lacks diversity when it comes to race, too.

Maybe I’m just being a woman and all, but I look at this and I think to myself: are we really creating the best software? Are we really designing the best gadgets, the most useful web sites, the superior applications? How can we be, if more than half the population has no input in any aspect of the development and design process?

So, so what.

I’ve long felt that the IT field is one of the few where the participants are focused on the tools, rather than the tasks. I believe that integrating IT into the engineering field as a complete and separate discipline was a huge mistake—not the least of which is that engineering is the only other discipline where the numbers of women are dropping (big hint, there).

Our field would be better if it were integrated with the library sciences, psychology, business, English, art–associated with tasks and topics, rather than grouped around the tools and processes. This makes even more sense when you realize that many people who enter the field do so with no degrees or with degrees in completely unrelated disciplines. It’s not unusual to hear from both sexes that they drifted into development or design because of a growing interest unrelated to their initial course of study. Imagine how much stronger the IT field would be if we brought in all these diverse viewpoints right from the start.

My recommendation? Break up the computer science programs, split the participants into specialized fields within other disciplines, and stop spending all our time on talking about Ruby and how cool it is. See? There’s a solution, and all it requires is basically ripping apart the entire field and rearranging the chunks.

Whatever solutions we arrive at to increase the number of women in technology, none are going to work if there isn’t general consensus that the lack of diversity is a problem. That we all, at a minimum, agree that the computer field, as it is now, is broken. That we need to find solutions. More importantly that we all have to buy into the solutions, because whatever we come up with is going to impact on all of us, including those who say, “So what?”

Categories
Web

Web 9.75

The precision of naming takes away from the uniqueness of seeing. Pierre Bonnard.

Nick Carr comments on Google’s Web 3.0, pointing out the fact that Web 3.0 was supposed to be about the Semantic Web, or, as he puts it, the first step in the Machine’s Grand Plan to take over.

For all the numbers we flash about, there really are only so many variations of data, data annotation, data access, and data persistence, and whatever version of “web” we’re on features the same concepts, rearranged. Perhaps instead of numbers, we should use descriptive terminology when naming each successive generation of the web, starting with the architecture of the webs.

Application Architectures

thin client

This type of application is old. Older than dirt. A thin client is nothing more than an access point to a server, typically managing protocols but not storing data or installing applications locally. All the terminal traditionally does is capture keystrokes and pass them along to the server-based application. The old mainframe applications were, and many still are, thin clients.

There was a variation of the thin client a while back when the web was really getting hot: the network computer. Oracle did not live up to its name when it invested in this functionality, long ago. The network computer was a machine created to access the internet and display pages. In a way, it’s very similar to what we have with the iPhone and other handheld devices. There is no way to add third-party functionality to the interface device, and any functionality at all comes in through the network.

Is a web application a thin client? Well, yes and no. For something like an iPhone or Apple TV, I would say yes, it is a thin client. For most uses, though, web applications require browsers and plug-ins and extensions, all of which do something unique, and require storage and the ability to add third-party applications, as well as processing functionality on the client. I would say that a web application where most of the processing is done on the server, and little or none in the browser, is a thin client. Beyond that, though, the web application would be…

client/server

A client/server application typically has one server or group of servers managed as one, and many clients. The client could be a ‘thin’ client, but when we talk about client/server, we usually mean that there is an application, perhaps even a large application, on the client.

In a client/server application, the data is traditionally stored and managed on the server, while much of the business processing as well as user interface is managed on the client. This isn’t a hard and fast separation, as data can be cached on the client, temporarily, in order to increase performance or work offline. Updates, though, typically have to be made, at some point, back to the server.

The newest incarnation of web applications, the Rich Internet Applications (RIA), are, in my opinion, a variation of client/server applications. The only difference between these and applications built with something like Visual Basic is that we’re using the same technologies we use to build more traditional web applications. We may or may not move the application out of the browser, but the infrastructure is still the same: client/server.

However, where RIA applications may differ from the more traditional web applications is that RIA apps could be a variation of client/server–a three-tier client/server application…

n-tier

In a three-tier, or more properly n-tier, client/server application, there is separation between the user interface and the business logic, and between the business logic and the data, creating three levels of control rather than two. The reasoning behind this is that changes in the interface between the business layer and the data don’t necessarily impact the UI, and vice versa. To match the architecture, the UI can be on one machine, the business logic on a second, and the data on a third, though the latter isn’t a requirement.

Some RIA applications can fit this model, because many do incorporate the concept of a middleware component. As an example, the newer Flex infrastructure can be built as a three-tier application with the addition of a Flex server.
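To make the layering concrete, here’s a minimal, hypothetical sketch in Python (no particular framework assumed, and every name made up for illustration): each layer only talks to the one beneath it, so in a real n-tier deployment each function could live on a different machine behind a network interface.

```python
# A toy sketch of three-tier separation. All names are invented for
# illustration; in a real deployment each layer could run on its own
# machine, talking over a network protocol instead of direct calls.

# Data tier: only knows how to fetch and store records.
def fetch_order(order_id):
    # stand-in for a real database query
    return {"id": order_id, "total": 42.50, "status": "open"}

# Business tier: applies the rules, never touches the UI or raw storage details.
def close_order(order_id):
    order = fetch_order(order_id)
    if order["status"] != "open":
        raise ValueError("order already closed")
    order["status"] = "closed"
    return order

# Presentation tier: formats results for the user, knows nothing about storage.
def show_order(order_id):
    order = close_order(order_id)
    print(f"Order {order['id']} is now {order['status']}")

show_order(17)
```

Swap the direct calls for remote ones and the three functions become three tiers on three machines, which is the whole point of the architecture.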

Some web applications, whether RIA or not, can also make use of another variation of client/server…

distributed client/server

Traditional client/server has many clients working against one set of business logic mapped to a database server, running serially. It’s the easiest type of application to create, but also the one less likely to scale, and from this limitation arises the concept of a distributed client/server, or distributed computing, architecture.

The ‘distributed’ in this title comes from the fact that the application functionality can be split into multiple objects, each operating on possibly different machines at the same time. It’s the parallel nature of the application that tends to set this type of architecture apart, and that allows it to scale more easily.

J2EE applications fit the distributed computing environment, as does anything running CORBA, the older COM, or the newer .NET. It is not a trivial architecture, and it needs the support of infrastructure components such as WebLogic or JBoss.

This ‘distributed parallel’ functionality sounds much like today’s widget-bound sidebars, wherein a web page can have many widgets, each performing a small amount of functionality on a specific piece of data at the same time (or as parallel as can be, considering that the page is probably not running in a true multi-threaded space).

Remember, though, that widgets tend to operate as separate and individual applications, each with its own API (Application Programming Interface) and data. Now, if all the widgets were front ends to backend processes running in parallel, working together to solve a problem, then the distributed architecture shoe fits.
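As a rough sketch of that parallel idea, and only a sketch: here the ‘distributed’ workers are local Python processes, standing in for objects that could just as easily be running on separate machines under J2EE, CORBA, or .NET.

```python
# Minimal sketch of the 'distributed parallel' idea: several workers each
# handle a piece of the problem at the same time, and the partial results
# are combined afterward. Here the workers are local processes; a true
# distributed system would spread them across machines.
from multiprocessing import Pool

def summarize(chunk):
    # each worker operates on its own slice of the data
    return sum(chunk)

if __name__ == "__main__":
    chunks = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
    with Pool(processes=3) as pool:
        partials = pool.map(summarize, chunks)
    print(sum(partials))  # combine the partial results
```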

There’s a variation of distributed computing–well, sort of–which is…

Service Oriented Applications

Service Oriented Applications (SOA), better known as ‘web services’. These are the APIs, the RESTful service requests, and the other services that run the web we seem to become more dependent on every day. Web services are created completely independently of the clients, supporting a specific protocol and interface that make the web services accessible regardless of the characteristics of the client.

The client then invokes these services, sending data, getting data back, and does so without having any idea of how the web services are developed or what language they’re developed with, other than knowing the prototype and the service.

Clean and elegant, and increasingly what runs today’s web. The interesting thing about web services is that they can be anything from almost trivially easy to tortuously complex to implement. And no, I didn’t specifically mention the WS-* stack.
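A minimal sketch of what this looks like from the client side, in Python. The endpoint URL and parameters are hypothetical; the point is that the client knows only the request format and the shape of the JSON coming back, nothing about how the service is built or what language it’s written in.

```python
# Hypothetical RESTful service call. The URL is made up for illustration;
# any JSON-over-HTTP endpoint would work the same way. The client depends
# only on the interface (URL, query parameters, JSON response), never on
# the service's implementation.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def lookup(term):
    query = urlencode({"q": term, "format": "json"})
    with urlopen(f"https://api.example.com/search?{query}") as response:
        return json.load(response)

print(lookup("creative commons"))
```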

Of course, all things being equal, there’s no simpler architecture than…

A stand alone application

A stand alone application is one where no external service is necessary for accessing data or processes. Think of something like Photoshop, and you get a stand alone application.

The application may have internet capabilities but typically these are incidental. In addition, the data may not always be on the same machine, but it doesn’t matter. For instance, I run Photoshop on one Mac, but many of my images are on another Mac that I’ve connected through networking. However, though I may be accessing the data on the ‘net, the application treats the data as if it is local.

The key characteristic of a stand alone application is that you can’t split the application up across machines — it’s all or nothing. It’s also the only architecture that can’t ‘speak’ web, so we can’t look for Web 3.0 among the stand alones.

Alone again, naturally…

No joy in being alone; what we need is a little help from our friends.

P2P

P2P, or peer-to-peer, applications are built in such a way that once multiple peers have discovered each other through some intermediary, they communicate directly–sharing either processing, data, or both. A client can become a server and a server can become a client.

Joost is an example of a P2P application, as is BitTorrent. There is no centralized place for data, and the same piece of data is typically duplicated across a network. Using a P2P application, I may get data from one site, which is then stored locally on my machine. Another person logging on to the P2P network can then get that same piece of data from me.

The power of this environment is that it can really scale. No one machine is burdened with all data requests, and a resource can be downloaded from many sources rather than just one. It is not a trivial architecture, though, and it requires careful management to ensure that any one participant’s machine isn’t made overly vulnerable to hacking, that downloads are complete, that data doesn’t get released into the wild, and so on. Communication and network management is a critical aspect of a P2P application.
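Here’s a drastically simplified sketch of the pattern, with all the hard parts (networking, chunking, verification) waved away: peers register with a tracker, which acts only as the intermediary for discovery, and then fetch data directly from each other, so any peer that holds a copy can serve it.

```python
# A toy model of peer-to-peer exchange. The tracker is discovery only and
# holds no data; once peers know about each other they trade data directly,
# and every peer can act as both client and server.

class Tracker:
    def __init__(self):
        self.peers = []

    def register(self, peer):
        self.peers.append(peer)

    def others(self, peer):
        return [p for p in self.peers if p is not peer]

class Peer:
    def __init__(self, name):
        self.name = name
        self.chunks = {}                  # locally stored pieces of data

    def publish(self, key, value):
        self.chunks[key] = value

    def fetch(self, key, peers):
        # ask the other peers directly; whoever has the chunk serves it
        for other in peers:
            if key in other.chunks:
                self.chunks[key] = other.chunks[key]   # now we can serve it too
                return self.chunks[key]
        return None

tracker = Tracker()
alice, bob = Peer("alice"), Peer("bob")
for p in (alice, bob):
    tracker.register(p)

alice.publish("song.ogg", b"...bytes...")
print(bob.fetch("song.ogg", tracker.others(bob)))  # bob pulls the data from alice
```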

These are the architectures, at least the ones I can think of off the top of my head. Which, then, becomes the ‘next’ web, the Web 3.0 we seem to be reaching for?

Web 3.0

Ew Ew Ew! The next generation of the web must be Google’s cloud thing, right? So that makes Web 3.0 a P2P application, and we call it “Google’s P2P Web” or “MyData2MyData”?

Ah, no.

The concept of ‘cloud’ comes from P2P (*see the correction at the end). It is a lyrical description of how data appears to a P2P application…coming from a cloud. When we make requests for a specific file, we don’t know the exact location the file is pulled from; chances are, it’s coming from multiple machines. We don’t see all of this, though, hence the term ‘cloud’. Personally, I prefer void, but that’s just semantics.

The term cloud has been adopted for other uses. Clouds are used with ‘tags’ to describe keyword searches, the size of the word denoting the number of requests. I read once that a writer called the entire internet a cloud, which seems too generic to be useful. Dare Obasanjo wrote recently on the discussions surrounding OS clouds, which, frankly, don’t make any sense at all and, methinks, use ‘cloud’ in the poetic sense: artful rather than factual.

The use of ‘cloud’ also occurs with SOA, which probably explains Google’s use of the term. And Microsoft’s. And Apple’s, if it wanted, but it didn’t–being Apple. (Stickers on our machines? We don’t need no stinking stickers!) Is the next web then called “BigCo SOA P2P Web”?

Let’s return to Google CEO Schmidt’s use of the cloud, as copied from Carr’s post, mentioned earlier:

My prediction would be that Web 3.0 would ultimately be seen as applications that are pieced together [and that share] a number of characteristics: the applications are relatively small; the data is in the cloud; the applications can run on any device – PC or mobile phone; the applications are very fast and they’re very customizable; and furthermore the applications are distributed essentially virally, literally by social networks, by email. You won’t go to the store and purchase them. … That’s a very different application model than we’ve ever seen in computing … and likely to be very, very large. There’s low barriers to entry. The new generation of tools being announced today by Google and other companies make it relatively easy to do. [It] solves a lot of problems, and it works everywhere.

With today’s announcement of Google shared space, we can assume that Google thinks of third-party storage as ‘cloud’, similar to Microsoft with its Live SkyDrive or Apple with its .Mac. It’s the concept of putting either data or processes out on third-party systems so that we don’t have to store on our local machines or lease server space to manage such on our own.

In Google’s view, Web 3.0 is more than ‘just’ the architecture: it’s small, fast applications built on an existing infrastructure (think Mozilla, Silverlight, Flex, etc.) that can run locally or remotely, on phones, handhelds, and/or desktop or laptop computers; that store data locally and remotely; and that are built on web services running on one or many machines, created by one or more companies. I guess we could call Google’s web the Small, Fast, Device Independent, Remote Storage, SOA P2P Web, which I will admit would not fit easily on a button, nor look all that great with ‘beta’ stuck to its ass.

Not to mention that it doesn’t incorporate all that neat ‘social viral’ stuff. (I knew I forgot something.)

The social viral stuff

What makes people think that Facebook or MySpace or any of their like is ‘new’? Since the very first release of the internet we’ve had sites that have enabled social gathering of one form or another. The only thing the newer forms of technology provide is a place where one can hang one’s hat without having to have one’s own server or domain. That’s not ‘social’–that’s positional.

Google mentions how we won’t be buying software at the store. I had to check the date on the talk, because we’ve been ‘spreading’ software through social contact for years. Look in the Usenet groups and you’ll see recommendations for software or links to download applications. Outside of an operating system and a couple of major applications, I imagine most of us download our software now.

What Google’s Schmidt is talking about isn’t downloaded software so much as software that has a small installation footprint or doesn’t even need to be installed at all. Like, um, just like the software it provides. (Question: What is Web 3.0? Answer: What we’re selling.)

Anyone who has ported applications is aware of what a pain this is, but the idea of a ‘platformless’ application has been around as long as Java has, which is longer than Google. It’s an attractive concept, but the problem is that you’re more or less tied to the company, and that tends to wear the shininess off ‘this’ version of the web–not to mention all that ‘not knowing exactly what Google is recording about us as we use the applications’ thing that keeps coming up in the minds of us paranoid few.

Is the next web then, the Small, Fast, Device Independent, Remote Storage, SOA P2P, Proprietary Web? God, I hope not.

Though Schmidt’s bits and cloudy pieces are a newer arrangement of technology, the underlying technology and the architectures have been around for some time: the only thing that really differs is the business model, not the tech. In this case, then, ‘cloud’ is more marketing than making. Though the data could end up on multiple sites, hosted through many companies, the Google cloud lacks both the flexibility and the freedom of the P2P cloud, because at the heart of the cloud is…Google. I’ve said it before and will repeat it: you can’t really have a cloud with a solid iron core.

Though ‘cloud’ is used about as frequently as lipstick at a prom, I don’t see the next generation of the web being based on Google’s cloud, or Microsoft’s. Or Adobe’s or Mozilla’s or Amazon’s or any single organization’s.

If Google’s Web 3.0, or, more properly, the Small, Fast, Device Independent, Remote Storage, SOA P2P, Proprietary Web with an Iron Butt, is a bust, does this mean, then, that the Semantic Web is the true Web 3.0 after all?

Semantic Web Clouds…and stuff

Trying on for size: a Semantic Client/Server Web. Nope. Nope, nope, nope. Doesn’t work. There is no such thing as a semantic client/server. Or a semantic thin client, or even distributed semantics, or SOA RDF, though this last comes closest, while managing to sound like something that belongs on a Boy Scout badge.

Semantics on the web is basically about metadata–data about data. Our semantic efforts are focused on how metadata is recorded and made accessible. Metadata can be recorded or provided as RDF, embedded in a web page as a microformat, or even found within the blank spaces of an image.
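For instance, a few RDF statements about a photo might look like the following sketch, which assumes the Python rdflib library is installed; the photo URL and the literal values are made up for illustration.

```python
# A small sketch of metadata as RDF triples, using rdflib (assumed
# installed: pip install rdflib). The photo URL and values are hypothetical.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

photo = URIRef("http://example.com/photos/shorebirds.jpg")

g = Graph()
g.add((photo, DC.title, Literal("Shorebirds at sunset")))
g.add((photo, DC.creator, Literal("A. Photographer")))
g.add((photo, DC.date, Literal("2007-09-20")))

# Turtle serialization: data about the data, readable by people and machines
print(g.serialize(format="turtle"))
```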

We all like metadata. Metadata makes for smarter searches, more effective categorization, better applications, findability. If data is one dimension of the web, then metadata is another, equally important.

The semantic web means many things, but “semantic web” is not an application architecture or a profoundly new way of doing business. Saying Web 3.0 is the Semantic Web implies that we’ve never been interested in metadata in the past, or that we’ve been waiting for some kind of solar congruence to bring together the technology needed.

We’ve been working with metadata since day one. We’ve always been interested in getting more information about the stuff we find online. The only difference now from the good old days of Web 1.0 is that we have more opportunities and more approaches, more people are interested, and we’re getting better at collecting and using the metadata. Then again, we’re also getting better with just the plain data, too.

Web 3.0 isn’t Google’s cloud and it isn’t the Semantic Web, and it certainly isn’t the Small, Fast, Device Independent, Remote Storage, Viral, SOA, P2P, Proprietary, Smart Web with an Iron Butt. Heck, even Web 3.0 isn’t Web 3.0. So what is the next great web, and what the devil are we supposed to call it?

Web 9.75

It is a proprietary thing, this insistence on naming things. “From antiquity, people have recognized the connection between naming and power”, Casey Miller and Kate Swift wrote.

We can talk about Web 1.0, or 2.0, or 3.0, but my favorite is Web 9.75, or Web Nine and Three-Quarters. It reminds me of the train platform in the Harry Potter books, which could only be found by wizards. In other words, only found by the people who need it, while the rest of the world thinks it’s rubbish.

There are as many webs as there are possible combinations of all technologies. Then again, there are as many webs as there are people who access them, because we all have our own view of what we want the web to be. Thinking of the web this way keeps it a marvelously fluid and ever-changing platform from which to leap unknowing and unseeing.

When we name the web, however, give it numbers and constrain it with rigid descriptions and manufactured requirements, then we really are putting the iron into the cloud; clipping our wings, forcing our feet down paths of others’ making. That’s not the way to open doors to innovation; that’s just the way to sell more seats to a conference.

Instead, when someone asks you what the next web is going to be, answer Web 9.75. Then, when we hear it, we’ll all nudge each other, wink, and giggle, because we know it’s nonsense, but no more nonsense than Web 1.0, Web 2.0, Web 3.0, or even Google’s Web-That-Must-Not-Be-Named.

*As someone reminded me in the comments, network folks initially used ‘cloud’ to refer to that section of the network diagram labeled “…and then a miracle happens…”