Women and visibility panel canceled

Recovered from the Wayback Machine.

The Sunday, March 12 panel on Women and Visibility at SxSW has been canceled.


Point of clarification: I was given responsibility for this panel, unexpectedly, late Monday afternoon. Because of the number of panel members who had dropped out, I made a call to cancel. The SxSW organizers did not make this decision.

However, some interest has been expressed in still holding the panel, but with new members. If you’re interested in taking part, or in leading the panel, contact the SxSW organizers.


Never need

Recently, I was the target of recruiters for a well-known company. I wasn’t particularly interested in working for the company, especially since it would mean moving back to the Silicon Valley area (something I didn’t want to do).

The recruiters were nice, and I was flattered. However, I was also aware that a hiring blitz of women was happening within many of the tech companies, so I wasn’t too flattered.

(Not sure of the reason for the sudden interest in hiring women. It could be the class action lawsuits successfully won by women who were discriminated against in other industries. Or perhaps the companies are finally starting to realize that, hey! They have women customers, too. Anyway, I digress. Back to the recruiters.)

They tried a couple of different approaches to get me interested in the company, the most recent of which had some mild appeal (not working for the company, but what they offered). At that point, the recruiters had me speak with a person from their technology department. I did, and chattered on enthusiastically about the topics he brought up until he had to make another call.

I never did hear back from the recruiters. To be honest, I don’t think any of us, myself or the recruiters, expected anything to come from this conversation. So why did it occur? Simple: they had to bring the relationship to the point where they were the ones doing the rejecting. At no point could they tolerate not having the final say in the decision: will I, or won’t I, work for them.

As long as I kept saying no, I had value; once I said yes, my value deteriorated. It wasn’t me that was of interest; it was the fact that I said ‘no’ that made me stand out.

This is a nasty by-product of our increasingly marketing-oriented mentality: we want that which is unobtainable; we don’t value that which is within reach. So this holiday season, there are three things of great worth: iPods, XBoxes, and people who are hard to get. Or already gone.

I am learning, though. After a while, even Pavlov’s dogs learned to react to the bell.


Ain’t no cobwebs here

Bob Wyman has made a point of clarifying that Structured Blogging is a thing you do, not a format. This is a good point to make, because there has been a strong association between the metadata format generated by the first SB release and the concept of Structured Blogging itself.

The SB plugins can be (and are being) modified to generate microformats and RDF in addition to the embedded x-subnode format currently supported. I myself am not overly fond of the current implementation of metadata embedding, primarily because I don’t think embedding makes sense with today’s web applications.

Increasingly, generated web content is replacing static pages, especially when it comes to online businesses and personal web sites. There are some applications that generate static web pages, but these are becoming more the exception than the rule. Most web and CMS tools generate dynamic content, as do the majority of commerce-based sites such as Amazon, eBay, and the many stores based on OSCommerce.

Even sites that create static pages, such as those based on Userland’s Radio, MT, or TypePad, do so from dynamic data and change frequently. The ’static’ in these instances is more a delivery mechanism than a philosophy of web content. However, as we’re finding, creating or generating static web pages that are fresh and timely takes resources. As such, pages created in a more on-demand fashion are becoming, more and more, the norm. For those times when on-demand paging becomes, itself, a resource hog (such as with syndication feed access), many of the on-demand tools now provide functionality to generate a static snapshot of a page, if it makes more sense to do so.

Regardless of approach, though, the concept behind all these pages is the same: web content that is dynamic.

Because of this increasingly dynamic web, it doesn’t make sense to embed different levels of data, or even different uses of data, within the same page. After all, we don’t expect to ’scrape’ a page to provide our syndication feeds; so why would we expect to embed our metadata directly in a page, when it can be easily and simply provided in much the same manner as the syndication feed data?

Take a look at this site. My tool, Wordform (a variation of WordPress), generates the page content on demand. The page contains posts, comments, and sidebar items that enable site navigation. If I port the SB plugins over to Wordform, then when I do my Saturday matinee movie reviews, I could use the plugins to add a more structured view of the data in addition to the already provided unstructured text.

Now, I can generate the more structured data as microformats, and it might make sense to do so because some tools may only work with microformatted data. I could also use the embedded metadata approach behind SB. However, my preferred approach would be to generate RDF/XML metadata for the movie review, and then just add this to the other metadata associated with the page. To make this data accessible, I only need to add a META link in my header, pointing to a URL the same as the post URL, except with ‘/rdf/’ attached to the end.

In fact, since it is the same data that generates all three, I could have a META link to the current SB-flavored XML, accessible by attaching an ‘/xml/’ to the end of the URL; provide the same data formatted as RDF/XML, accessible by attaching ‘/rdf/’; and then add the microformatting into the page elements. That way, no matter what the tools want, the data would be available.
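To make the suffix scheme concrete, here is a minimal Python sketch of it: one post URL, with each alternate metadata view reachable by appending a suffix, and the matching META links rendered for the page header. The example URL, the rel value, and the function names are all illustrative assumptions, not Wordform’s actual code.

```python
# Sketch of the URL-suffix scheme described above: one post URL, with
# each alternate metadata view reachable by appending a suffix.
# The post URL, suffixes, and function names are illustrative
# assumptions, not Wordform's actual implementation.

def metadata_urls(post_url):
    """Build the alternate-format metadata URLs for a post."""
    base = post_url.rstrip('/')
    return {
        'application/rdf+xml': base + '/rdf/',  # RDF/XML view
        'text/xml': base + '/xml/',             # SB-flavored XML view
    }

def meta_links(post_url):
    """Render the META link elements a tool could emit in the page header."""
    return '\n'.join(
        '<link rel="meta" type="%s" href="%s" />' % (mime, url)
        for mime, url in sorted(metadata_urls(post_url).items())
    )

print(meta_links('http://example.com/movies/king-kong/'))
```

The microformatted version needs no link of its own, since it lives in the page markup itself; only the out-of-page views get a header link.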

This is no different than what we do with syndication feeds. Many people provide more than one type of syndication feed, and depending on the tool, these can be accessed just by attaching a ‘/feed/’ to the end of the weblog URL (or the post URL, to subscribe to a post’s comments). Since feeds themselves are nothing more than another view of the page data, it makes little sense to use a different delivery mechanism for metadata than the one used for feeds.

Best of all, with this approach the data is formatted specifically for its use, rather than one format being warped and twisted to meet the needs of all uses. Embedding all manner of data into a web page delivered to a consumer who is interested in only a portion of it comes from an outmoded way of thinking. It’s based on the idea that web pages are, themselves, costly to maintain, and that the more files one has, the more difficult it is to maintain a web site.

However, dusty web page content comes from a time when BLINK ruled, and the only formatting we had, regardless of use, was HTML tables. Then the only issue on most of our minds was keeping pages up to date–keeping the cobwebs off, as it were. Since we had to do this manually, no wonder we didn’t want to create too many pages.

Now, though, the days of static web pages are over; long live the cobwebs.


Web 2.0 and hamster wheels

Dare Obasanjo wrote a post about flipping your Web 2.0 startup and gave three reasons why a bigger company would gobble up a startup: users, technology, and people. Paul Kedrosky replied that Dare was wrong, wrong, wrong and that building companies to flip is also wrong, wrong, wrong.

I happen to agree (agree, agree) somewhat with Mr. Kedrosky, in that I wish technologies weren’t being built for the express purpose of flipping (i.e. quickly going from VC funding to purchase by a Player), but, as Dare writes, it does happen. Where I disagree with Dare is in his list of reasons why a company would buy a Web 2.0 startup. He left an important one off the list: image.

One week the web is full of talk about Google, the next Yahoo, and the week after that might be MSN (though it is falling far behind the other two–Ozzie just doesn’t have the 2.0 stink; maybe it can buy one); then the cycle continues anew. How these companies make money depends as much on image as on the software provided. It’s important to all three search engine companies to be seen as the leader into a whole new version of the web. However, this doesn’t mean that each company is going to change the way it does business.

Google bought Blogger years ago, and I remember how we talked about what the purchase meant and how it would change Google searches. Now we can see that it didn’t mean anything, other than that Google bought into the hip 2.0 kid of the time–weblogging. Blogger hasn’t changed all that much, other than adding new features to keep up with other tools; Google didn’t change at all.

When Yahoo bought Delicious, I read comments here and there to the effect that this was going to change search dynamics, and the old algorithmic approaches would soon give way to new tagging ones. Yet the statistics don’t support this. Delicious has, what, 300,000 users? Out of how many billions of web users? The number of people who tag–which is really what the tech is all about, tagging and storage–compared to those who don’t is so skewed as to make tagging a non-starter.

(Not to mention that no one has effectively explained how tagging is going to make for more accurate searches.)

But Yahoo’s buying of Delicious put it on the front page of many publications. It kept Yahoo even, or barely even, in the fight with Google to be the ‘hip’ company — the one people will use for their searches. The site where advertisers should place their ads; the place to connect for other companies pushing themselves as Web 2.0.

Both Yahoo and Google made a lot of money selling space for ads; enough so that they can afford to invest a few million in small startups that could add to their web 2.0 goodness. Heck, Google just plunked a billion or so into AOL, and we all know that’s an elephant bound for the graveyard. Until it ambles its rattly bones in that direction, though, it still brings an immense number of eyeballs to Google–eyeballs that Google would rather have than concede to Microsoft (its competitor in this deal).

Besides: investing money is a good corporate move at tax time. And it doesn’t hurt when investors see these companies diversify — there’s been a lot of talk about bursting bubbles this year.

As for the technology, most of the Web 2.0 startups are based on copious amounts of stored data, accessible via search and subscription, tagged, and wrapped in an API. None of the companies are based on what I would call revolutionary uses of technology. Much of the early popularity of these companies is because the services each offered were free. Oh, and the fact that we have, personally, come to know the folks behind the companies. This then leads to the question: do companies benefit from bringing in the people behind these acquisitions?

Of course they do. After all, the folks behind Flickr and Delicious and Bloglines and so on were the originators of ideas that took off–if I were Google or Yahoo (or eBay, Microsoft, and so on), I’d rather have these people on my team than on a competitor’s. But as we’ve seen with Google/Blogger and Yahoo/Flickr, the startup seems to benefit more from the new association than the other way around. Storage costs money, and scaling isn’t cheap. Or easy. (Maybe Microsoft can buy TypePad.)

Before we assume that Google and Yahoo, in particular, are going to throw out their web 1.0 cash cows in favor of shiny new web 2.0-branded calves, remember that the evidence of our eyes does not support the what-ifs generated by our fevered imaginations. Disappointing, true; but it is fun to watch each company take a turn on the hamster wheel each week. (Saaayyy, that’s who Microsoft can buy….)

More on Google and AOL.

More on Yahoo and Delicious.