Categories
Standards Web

Issues of accessibility

Recovered from the Wayback Machine.

Unless you’ve been living under a rock, you’ve probably heard about Mark Pilgrim’s Thirty Days to a more Accessible Web. The series covers basic steps we can take to make sure our weblogs and web sites are accessible.

His first tip is on DOCTYPES.

I tested my weblog against the Section 508 accessibility test at Bobby and, according to the results (which are not exactly easy to read), I should meet this standard. However, I don’t meet the Web Content Accessibility Guidelines 1.0 standard.

Does anyone meet the Web Content Accessibility Guidelines 1.0 standard?

Once I’m settled, I’m enlisting the help of experts among my virtual neighbors (weblog translation – I’m whining, begging, and groveling for help because everyone knows I’m a back-end developer and know shit about front end stuff) to make sure my weblog and web sites are accessible.

If you have a weblog, don’t you have something to do about now?

(And once you’re done with that, move your tushie over to AKMA’s and give him some requirements and suggestions for Thread the Needle.)

Categories
Technology

A day in the life of a technical architect

Client: When can you tell me what you think of the software?

Me: When do you need the evaluation?

Client: Tomorrow.

Me: Tomorrow?

Client: Yes, we’re meeting with our clients tomorrow.

-sigh-

Me: Well, what’s the potential user load for the software?

Client: Half a million customers.

-pause-

Me: At once?

Client: Yes. What do you think it will take?

Me: A miracle.

Categories
RDF Technology Weblogging

Technology to enable community

Recovered from the Wayback Machine.

Serendipity is such a major component of my life, never more so than when I read Gary’s attempt to manually connect the multiple threads to the whole discussion about Identity.

While I’m on my long journey through distance and time, I’m working on a new application that will provide a means to track cross-blog discussions, such as those my own virtual neighborhood (and others) participate in. The specs for the application are:

 

Project is called Thread the Needle, or “Needley” for short. Its purpose is to track cross-blogging threads.

How it works:

You register your weblog, once, with an online application I’ll provide (i.e., provide your weblog location, name of weblog, email). Frequently throughout the day, the Needle service bot will visit the weblog looking for RDF (an XML meta-language, used for RSS and other applications) embedded within the weblog page. Note that this may change to scanning weblogs.com for changed weblogs that are registered, or to waiting until the first time a person clicks the link, or some other procedure – I’m testing these out as you read this.

For now, the RDF will be generated by the service and copied and pasted into the posting; hopefully someday it will be generated automatically by the weblogging tools.

The RDF either starts a weblogging subject thread – starts a new subject – or continues an existing thread. The bot pulls this information in and when someone clicks on a small graphic/link attached to the posting, a page opens showing all related threads and their association with each other.

Example:

AKMA writes a posting on Identity. Because he starts the discussion thread, he creates and embeds RDF “thread start” XML into the posting (generated by the tool using a very simple form, with the results cut and pasted into the posting). Included in this RDF is the thread title, a brief description, the posting permalink, the weblog name, and the posting category, selected from a pulldown list.

The generated code also contains a small graphic and link that a person clicks to get to the Needley page. Clicking another small graphic/links opens up a second form for a person wanting to respond to this posting, with key information already filled in.

The posting would look like:

 

This is posting stuff, posting stuff, words, more words more words
more words and so on.

link/graphic to view page Needle thread page,
link/graphic to respond to current posting

Posted by person, date, comment

 

The embedded RDF is invisible.
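For concreteness, the invisible block might look something like the following. The element names and the needle: namespace are pure invention, placeholders for whatever vocabulary the tool actually generates:

```xml
<!-- Hypothetical sketch only: the needle: vocabulary and namespace below
     are placeholders, not the actual output of the Needle tool. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:needle="http://burningbird.net/needle/elements/1.0/">
  <rdf:Description rdf:about="http://example.com/2002/06/identity-posting">
    <needle:threadStart>true</needle:threadStart>
    <needle:threadTitle>Identity</needle:threadTitle>
    <needle:description>A cross-blog discussion of identity</needle:description>
    <needle:weblogName>AKMA's Random Thoughts</needle:weblogName>
    <needle:category>Philosophy</needle:category>
  </rdf:Description>
</rdf:RDF>
```

Tucked inside an HTML comment, a block like this is invisible to readers but trivial for the bot to find.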

David Weinberger creates his own posting related to AKMA’s posting, and clicks AKMA’s “respond” link and a form opens with pre-filled fields. He adds his own permalink info, pushes a button and a second page opens with generated RDF that David then embeds into his posting.

Stavros comes along wanting to continue David’s discussion and follows the same process. Jeneane responds directly to AKMA, Jonathon responds to Stavros, Mike responds to David, and Steve responds to Jeneane; AKMA responds to David and Steve, who responds back to AKMA.

The Needle page for this thread shows:

AKMA
David
Stavros
Jonathon
AKMA
Mike

Jeneane
Steve
AKMA

Each of the above names is a hypertext link to the discussion posting. Some visual cue will probably be added to assist in reading the hierarchy of discussion. (I’ll also work to make sure that this page and its contents are fully accessible.)

If a person is responding to two or more of the threaded postings, they can add the generated RDF for each posting they’re responding to – there’s no limit. So Dorothea responds to Jonathon’s posting and to AKMA’s original posting:

AKMA
David
Stavros
Jonathon
Dorothea*
AKMA
Mike

Jeneane
Steve
AKMA

Dorothea*

The asterisk shows that the posting is one response to multiple postings.
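A thread page like the ones above could be built from nothing more than each posting’s list of parents. The following sketch is mine, not the actual Needle code, and the data model (posting id mapped to parent ids) is an assumption:

```python
# Sketch of rendering a cross-blog thread as an indented hierarchy.
# Data model (an assumption, not the actual Needle implementation):
#   posts:   posting id -> poster name
#   parents: posting id -> list of parent posting ids
def render_thread(posts, parents, root, indent=0, out=None):
    """Depth-first walk; a posting with multiple parents appears under
    each parent, marked with an asterisk."""
    if out is None:
        out = []
    marker = "*" if len(parents.get(root, [])) > 1 else ""
    out.append("  " * indent + posts[root] + marker)
    for child_id, child_parents in parents.items():
        if root in child_parents:
            render_thread(posts, parents, child_id, indent + 1, out)
    return out
```

Given AKMA → David → Stavros → Jonathon, with Dorothea responding to both Jonathon and AKMA, this prints Dorothea twice, once under each parent, with the asterisk flagging the multiple response.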

It will take approximately 30 seconds to click, complete, generate, cut and paste the RDF for a response; about 1 minute for starting a thread.

The results can be ordered either by response hierarchy or by time. The thread page starts with the thread title, category, description, date started, and date of last update, and each weblog entry is associated with a link that will take a person directly to the specific posting.

With this, people can see all those who’ve responded, can reply with new posting, and the conversation can continue cross-blog, many threaded.

I’ll probably try to add in graphics to create a flow diagram, similar to the RDF validation tool (see at http://www.w3.org/RDF/Validator/ and use http://burningbird.net/example12f.rdf as test RDF file to demonstrate).

Discussion thread titles and associated descriptions and categories will go on a main page that is continuously updated, with a link to the main thread page for each discussion. I’d like to add search capability by category, weblog, and keyword.

(e.g. “Show me all discussions that AKMA has originated that feature Identity”)

 

I’ve already incorporated RDF into Movable Type postings and have been able to successfully scrape and process the information.
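The scraping step can be quite small. This sketch assumes the RDF is hidden inside HTML comments, the way Movable Type embeds its TrackBack RDF; the comment convention is an assumption about what the Needle bot would look for:

```python
import re

# Sketch of scraping RDF embedded in a weblog page. Assumes the RDF
# sits inside HTML comments (as with Movable Type's TrackBack RDF);
# this convention is an assumption, not the confirmed Needle format.
RDF_BLOCK = re.compile(r"<!--\s*(<rdf:RDF.*?</rdf:RDF>)\s*-->", re.DOTALL)

def extract_rdf(html):
    """Return every RDF block embedded in HTML comments on the page."""
    return RDF_BLOCK.findall(html)
```

The bot would fetch each registered page, run something like this over it, and hand the extracted blocks to an RDF parser.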

I’ll be asking for beta testers of this new technology in July, and will be hosting the discussion server at first. My wish is to distribute this application rather than centralize it, and will look at ways this can occur (one major reason why I went with embedded RDF).

Update: AKMA and Gary Turner are collecting suggestions and requirements from the weblogging community for this application. A basic infrastructure is in place, but the user community needs to provide information about how this product will work, and what it will do. Please see AKMA’s posting to get additional information.


 

Just read Meg’s What we’re doing when we blog article. Though I can agree with many of Meg’s sentiments, I totally disagree with Meg’s philosophy that the weblogging format is the key to weblogging. Last time I looked, I thought it was the people. Meg truly missed the boat on this one. In fact, she wasn’t even at the dock to wave her handkerchief good-bye when the boat left.

The Thread the Needle application will help weblogger discussions, but it’s just an enabler – weblogging discussions can continue without it. We are connecting because of what we say, not the technology we use. Weblogging tools help, but they don’t create community.

Another instance of serendipity because the same day Meg’s article appears, I stated in the Pixelview interview:

 

Too many people focus on the technology of the web, forgetting that technology is nothing more than a gateway to wonderous things. The web introduces us to beauty, creativity, truth, new people and new ideas. I genuinely believe there are no limits to what we can accomplish given this connectivity.

Categories
Technology

P2P Discovery

What kind of core do KaZaA and its supernodes have? Is it iron? Gold? Or is it more of an aluminum core, because the cloud that supports the KaZaA P2P network is still malleable — the Supernodes that provide the cloud services are fluid and can change, as well as go offline, with little or no impact to the system.

I imagine, without going into the architecture of the system, that more than one Supernode is assigned to any particular subnet, with the others acting as backups, most likely pinging the primary Supernode to see if it’s still in operation. If the primary goes out of operation, the backup Supernode(s) take over and a signal is sent to the P2P nodes to get services from this IP address rather than that one. The original Supernode machine may even detect a shutdown and send a signal to the secondaries to take over.

Or perhaps the Supernode IPs are chained, and the software on each P2P node checks the first IP and, if no response occurs, automatically goes to the second within the Supernode list, continuing on until an active Supernode is found. This would take very little time, and would, for the most part, be transparent to the users.
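The chained-list idea can be sketched in a few lines. This is guesswork, not KaZaA’s actual protocol; the probe function stands in for whatever ping the client software really uses:

```python
# Sketch of chained-Supernode failover: walk an ordered list of known
# Supernode addresses until one answers. The probe callable is a
# stand-in; KaZaA's real wire protocol is not public.
def find_supernode(candidates, probe):
    """Return the first candidate address that responds to a probe,
    or None if the whole chain is dead."""
    for address in candidates:
        if probe(address):
            return address
    return None
```

If the first two Supernodes in the list are gone, the node quietly falls through to the third, which is exactly the transparency described above.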

Again without access to any of the code, or even any architecture documentation (which means there’s some guesswork here), the algorithm behind the Supernode selection list looks for nodes that have the bandwidth, persistent connectivity, and CPU to act as Supernodes with little impact to the computer’s original use. The member nodes of each KaZaA sub-net — call it a circle — would perform searches against the circle’s Supernode, which is, in turn, connected to a group of Supernodes from other circles, so that if the information sought can’t be found in the first circle, it will most likely be found in the next Supernode, and so on. This is highly scalable.
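That selection heuristic might amount to nothing more than a scoring pass over candidate nodes. The metrics and weights below are illustrative assumptions, not KaZaA’s actual values:

```python
# Sketch of a Supernode-selection heuristic: score each node on
# bandwidth, uptime, and spare CPU, then promote the best scorers.
# The weights are illustrative assumptions, not KaZaA's real algorithm.
def pick_supernodes(nodes, count=1):
    """nodes: list of (name, bandwidth_kbps, uptime_hours, cpu_free_pct).
    Returns the `count` best-scoring node names."""
    def score(node):
        _, bandwidth, uptime, cpu_free = node
        return bandwidth * 0.5 + uptime * 10 + cpu_free * 5
    ranked = sorted(nodes, key=score, reverse=True)
    return [name for name, *_ in ranked[:count]]
```

A dial-up node with spotty uptime never gets promoted; a broadband machine that has been on for two days does, with little impact to its owner.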

So far so good — little or no iron in the core, because no one entity, including KaZaA or the owners behind KaZaA, can control the existence and termination of the Supernodes. Even though KaZaA is yet another file-sharing service rather than a services brokering system, the mechanics would seem to meet our definition of a P2P network. Right?

Wrong.

What happens when a new node wants to enter the KaZaA network? What happens if KaZaA — the corporate body — is forced offline, as it was January 31st because of legal issues? How long will the KaZaA P2P network survive?

In my estimation, a P2P network with no entry point will cease to be a viable entity within 1-2 weeks unless the P2P node owners make a determined effort to keep the network running by designating something to be an entry point. Something with a known IP address. Connectivity to the P2P circle is the primary responsibility of a P2P cloud. KaZaA’s connectivity is based on a hard-coded IP. However small it is, this is still a kernel of iron.

We need a way for our machines to find not just one but many P2P circles of interest using approaches that have worked effectively for other software services in the past:

We need a way to have these P2P circles learn about each other whenever they accidentally bump up against each other — just as webloggers find each other when their weblogging circles bump up against each other because a member of two circles points out a weblog of interest from one circle to the other.

We need these circles to perform an indelible handshake and exchange of signatures that become part of the makeup of each circle touched, so that one entire P2P circle can disappear but still be recreated, because its “genetic” makeup is stored in one, two, many other circles. All it would take to restart the original circle is two nodes expressing an interest.
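The handshake and recreation could work something like this sketch. The data model here, a circle as an id, a member set, and a store of other circles’ signatures, is my invention, not a real protocol:

```python
# Sketch of the circle-"DNA" exchange: when two circles meet, each
# stores the other's makeup, so either can later be recreated from
# any surviving circle. The data model is an assumption.
def handshake(circle_a, circle_b):
    """Each circle records the other's genetic makeup (id -> member set)."""
    circle_a["dna"][circle_b["id"]] = set(circle_b["members"])
    circle_b["dna"][circle_a["id"]] = set(circle_a["members"])

def recreate(circle_id, surviving_circles):
    """Restart a vanished circle from any survivor that holds its DNA."""
    for circle in surviving_circles:
        if circle_id in circle["dna"]:
            return {"id": circle_id,
                    "members": set(circle["dna"][circle_id]),
                    "dna": {}}
    return None
```

Kill circle A entirely, and any circle it ever touched still carries enough of its makeup to bring it back when two interested nodes ask.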

We need a way to propagate the participation information, or software, or both, to support circles that can persist regardless of whether the original source of said software or information is still operating, just as software viruses have been propagated for years. Ask yourselves this — has the originator of a virus going offline ever slowed the spread of that virus? We’ve been harmed by the technology for years; it’s time to use the concepts for good.

We need a way to discover new services using intelligent searches that are communicated to our applications using a standard syntax and meta-language, through the means of a standard communication protocol, collected with intelligent agents, as Google and other search engines have been using for years. What needs to change is to have the agents find the first participating circle on the internet and ask for directions to points of interest from there.

A standard communication protocol, meta-language, syntax. Viral methods of software and information propagation. Circles of interest with their own DNA that can be communicated with other circles when they bump in the night, so to speak. Internet traversing agents that only have to be made slightly smarter — given the ability to ask for directions.

Web of discovery. Doesn’t the thought of all this excite you?

Categories
Technology

Iron Clouds

A true P2P cloud does not have a core of iron. By this I mean that there can be no static IP or server providing the gateway or facilitating the communication between nodes within a distributed application.

You can argue this one with me for years and you won’t convince me otherwise. I know that Groove has an iron core cloud. I know that Userland is thinking of an iron core cloud that can move about the nodes. UDDI is based on the premise of a centralized source of information about services that just happens to get striped and mirrored. Striped — chunked off. Mirrored — distributed to different servers. And don’t focus on the distributed in the latter; keep your eye on the server.

Server == iron

iron == control

Freenet comes closest to being the truest form of a cloud, but there is an assumption that the gateway to the cloud must be known in some way, a pre-known entrance. According to Ian Clarke’s Freenet: A Distributed Anonymous Information Storage and Retrieval System, “A new node can join the network by discovering the address of one or more existing nodes through out-of-band means, then starting to send messages”.

Can we have P2P clouds without some touch of iron? Can we have transient gateways into P2P networks without relying on some form of pre-knowledge, such as a static IP?

Ask yourselves this — I’m looking for information about C#, specifically about the CLR (Common Language Runtime) and the Common Language Interface (CLI).

Keys are: C# CLR CLI

Go to Google, enter the words, click on I’m Feeling Lucky — and say hi to me in passing.

We don’t need P2P clouds with cores of iron; what we need is new ways of looking at existing technologies.