Categories
RDF

First looks at Joost

As I edited the book today, while the snow blew in an oddly unendearing blizzard–alas, we missed the copper moon–I watched Joost. Specifically, I watched a nice show on sleeper sharks, several episodes of National Geographic, and explored a bit among the other channels.

A television network hosted entirely through the web (Internet Protocol TV or IPTV) is the way of the future, and I wouldn’t be surprised to see the major networks go this route eventually. At issue is bandwidth, not to mention integrity of signal. With Joost I found that sometimes the picture would be remarkably clear; other times, barely viewable. Still, all in all, for a beta product it was quite good.

I liked being able to pick any show I wanted, stop it, re-start it, and re-watch it if I wanted. The commercials are short and sweet, and I don’t begrudge the few minutes per show for them. What didn’t work is that Joost interrupts the program literally mid-word–it makes no use of markers to insert commercials in natural lulls. In addition, the commercials played at a uniformly loud volume while the shows wavered in their loudness, forcing me to hastily turn the volume down.

There’s a ‘young person’ feel to the service that I think is a serious mistake. A host from the service cracks jokes about the ‘old people’, and the music videos seemed to feature young women who all sounded alike and all equally bared their navels, complete with navel rings.

Then there are the stripper shows. Ha! That got your attention.

Seriously, making assumptions about an audience can act as a natural filter, and that will end up hurting overall client numbers.

I don’t have cable or satellite, so Joost seems like a good alternative. However, I especially wanted to try out Joost because RDF plays a strong role in its infrastructure. According to Leo Simons:

We make extensive use of RDF in different places. It all starts with a core RDFS/Owl schema that is used to capture various kinds of information (think FOAF +imdb+RSS+a lot more). I suspect some parts of the modelling work that was done here will make it into future standards for online video.

We have a custom distributed digital asset management system (or DAM), built around jena-with-postgres at the moment for storage and (CRUD-like) management of all that RDF-ized information over a REST protocol.

Not only will this research go into Joost–at least part of the effort is going into TripleSoup:

TripleSoup is the simplest thing that you can do to turn your apache web server into a SPARQL endpoint.

TripleSoup will be an RDF store, tooling to work with that database, and a REST web interface to talk to that database using SPARQL, implemented as an apache webserver module.
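Talking to such an endpoint should be no more exotic than any other REST request. As a rough sketch–assuming a hypothetical endpoint URL and the standard SPARQL protocol, with results returned as JSON, not TripleSoup’s actual configuration–in TypeScript:

```typescript
// Query a SPARQL endpoint over plain HTTP. The endpoint URL and the
// FOAF query are illustrative, not TripleSoup's actual setup.
const endpoint = "http://example.com/sparql";

const query = `
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10
`;

async function listNames(): Promise<void> {
  // Per the SPARQL protocol, the query travels as a 'query' parameter;
  // the Accept header asks for the SPARQL JSON results format.
  const response = await fetch(
    `${endpoint}?query=${encodeURIComponent(query)}`,
    { headers: { Accept: "application/sparql-results+json" } }
  );
  const results = await response.json();
  for (const binding of results.results.bindings) {
    console.log(binding.name.value);
  }
}

listNames().catch(console.error);
```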

Joost signed a deal with Viacom, which should begin to add to the content offered (Daily Show!). It’s a P2P service, a term we don’t hear very often now with the world’s seeming focus on all things Web 2.0. If this is P2P, what should happen is that using the service also means my PC participates in the network, providing resources for said network. I can’t find anything on that part of the application, but I’m assuming this is so. Or perhaps the true P2P is on the part of the networks involved.

Joost has a widget interface, which includes chat, a clock, and various other items of that nature, but to me, this isn’t something that’s too interesting. Either I want to watch television or I want to chat. If I wanted to do both at the same time, I’d get married again.

The interface is clean and relatively intuitive, especially if you’re used to Windows Media Center. All in all, it’s a pretty decent beta offering–a true beta rather than the, “Hi, we’re alpha, but beta is so much more cool. And it’s all fun anyway! *giggle*”. It has a way to go, though, as the current show I’m watching demonstrates, being full of stops and starts. Time to put on a movie.

Screenshots and another review.

Categories
Social Media

Wikipedia Walking

Seth Finkelstein provided great coverage of the recent controversy over Wikipedia editor/community manager “Essjay” (one, two, three, four, five, and six).

The gentleman in question misrepresented himself as a tenured professor, both in an interview and in Wikipedia. Rather than show him the door, Jimmy Wales defended him–boys will be boys or some rot. It was only when Wales found out that Essjay had lied to people ‘within’ the Wikipedia community that he was banished.

Essjay’s apology, if it can be called that, was that he fabricated the information about himself to protect himself in this dangerous world. You don’t know how much my fingers itched to go out and do a little ‘self-protecting’ with my own page. Letsee…triple PhD holder, Pulitzer Prize winner, former Ms. Universe.

I refrained though. Instead, I invite you all to do the same–the three most colorful entries get a copy of either Practical RDF or Learning JavaScript, or the upcoming Adding Ajax.

Essjay’s ‘apology’ was an unbelievably silly excuse, but the irony doesn’t enter the picture until you view Essjay’s farewell page. Checking the history, most of the critical comments have been edited out.

I’ve recently stopped using Wikipedia, or stopped using it as an original source. I’ve found two things:

First, Google’s results have degraded in the last year or so. When one ignores Wikipedia in the results, on many subjects most of what remains is placement by search engine optimization–typically garbage–or some form of comment or usenet group or some such that’s not especially helpful. Good results are now more likely found on the second or third pages.

Second, I find that I’m having to go to more than one page to find information, but when I do, I uncover all sorts of new and interesting goodies. That’s one of the most dangerous aspects of Wikipedia (aside from the whole ‘truth’ thing), or of any single source of information: we lose the ability to discover things on the net through sheer serendipity.

I still respect many of the authors in Wikipedia, and think it’s a good source. However, this event only strengthens my belief that Wikipedia should be pulled off to the side of search engine results–like the Ask definition shown in Google for matching words–and that people should go back to searching the web by actually searching the web.

PS, also read the comments associated with Seth’s posts.

Interesting how hard items like ethics, honor, and truth metamorphose in the soft environment encompassed by so-called social software.

Jason Scott has more on this issue.

Nick Carr’s thoughtful take.

Categories
Media

Watch Now

I watched my first movie last night through Netflix’s “Watch Now” feature. I learned via Gizmodo that this feature is available to everyone, just by clicking a link on the accounts page. This is an effective way to roll the feature out until it’s available for all accounts automatically in June: if you know what it is, you can get access to it; if you don’t, you don’t know what you’re missing, and Netflix has a chance to work through scaling issues.

The movie was the old 1959 sci-fi classic The Angry Red Planet. There wasn’t any interruption in the streaming, and the quality was as good as anything iTunes provides. Better, actually. As for the viewing experience, nothing is better than the old Technicolor movies on my HD television. I may not have the resolution, but I sure get the color. Red! Really red!

As for selection, I’m really impressed with the eclectic mix of movies: Casablanca, The Day the Earth Stood Still, A Streetcar Named Desire, as well as several lesser known and more modern releases. I can easily use up my allocated hours per month.

As far as I know, this is a US-based service only, as is Netflix. Still, if it catches on, I wouldn’t be surprised to see it being offered in other countries.

Categories
Programming Languages

Perfect example

Recovered from the Wayback Machine.

Here’s a perfect example of how the computer field is broken:

In a post at Coding Horror, based on earlier posts at Imran on Tech and Raganwald, the author parrots what the others state, that programmers can’t program. With lots of exclamation points.

Why make such a breathtakingly grandiose claim? Because of what happens in interviews. It would seem that the originator of this newest fooflah created a series of tests given during the interview process and found:

After a fair bit of trial and error I’ve discovered that people who struggle to code don’t just struggle on big problems, or even smallish problems (i.e. write an implementation of a linked list). They struggle with tiny problems.

So I set out to develop questions that can identify this kind of developer and came up with a class of questions I call “FizzBuzz Questions” named after a game children often play (or are made to play) in schools in the UK. An example of a Fizz-Buzz question is the following:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

Most good programmers should be able to write out on paper a program which does this in under a couple of minutes. Want to know something scary? The majority of comp sci graduates can’t. I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.
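For the record, the problem really is as tiny as claimed. A minimal sketch, in TypeScript purely for illustration:

```typescript
// FizzBuzz as specified above: multiples of three print "Fizz",
// multiples of five print "Buzz", and multiples of both print "FizzBuzz".
for (let i = 1; i <= 100; i++) {
  let label = "";
  if (i % 3 === 0) label += "Fizz";
  if (i % 5 === 0) label += "Buzz";
  console.log(label || String(i));
}
```

Which is rather the point: a test this trivial measures composure under scrutiny, not programming.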

Jeff Atwood of Coding Horror also goes on to quote others who run into the same problems: interviewees can’t seem to do even the simplest coding tasks during interviews. These gentlemen completely ignore the environment and focus on the grossest of generalities:

Programmers can’t program.

Here’s a clue for you: I don’t do well in programming tasks during interviews, and I’d love someone to come into my comments and tell me I can’t program based on this. No, I’ve only faked it while working for Nike, Intel, Boeing, John Hancock, Lawrence Livermore, and through 14 or so books–not to mention 6 years of online tech weblogging.

In fact, you’ll find a lot of people who don’t necessarily do well when it comes to programming tasks or other complex testing during job interviews. Why? Because the part of your brain that manages complex problem solving is the first to get more or less scrambled in high stress situations. The more stress, the more scrambled. The more stressed we are, the more our natural defensive mechanisms take over, and the less energy goes into higher cognitive processes.

Why do you think that NASA, the military, and other organizations training people for high-risk jobs spend so much time on simulation? Because they want the tasks to be so ingrained that in a stress situation, people’s responses are almost automatic.

If you add the potential for embarrassment to the strong desire to do well and the need to get the job, then toss in a panel of arrogant twats sitting around a table looking directly at you while you do your little tests, you have the makings of an environment that almost guarantees the elimination of many fine candidates.

Who does well in these kinds of testing situations? Good test-takers, the supremely self-confident (and typically, equally arrogant), and the people who don’t care: none of whom is necessarily the best candidate.

The whole purpose of tests such as these is not to determine whether a person has programming capability–how can one stupid test determine this? What these tests do, though, is add to the self-consequence of the person doing the interview.

“I can do this, but all these people can’t. Therefore, I’m so much better.”

It’s also a lazy interview technique, which shows that HR associated with the company doesn’t give a crap about the IT department.

Some justify such tests with, “We need people who can do well in stress situations.” Bilge water.

The stress one goes through as an outsider faced with a bank of insiders is completely different from the stress an individual goes through as part of a team trying to fix a problem or roll out a product. Comparing the two is ludicrous, and nothing more than a demonstration of completely two-dimensional thinking: as if one form of stress were the same as another. My god, no wonder we’ve had few tech innovations lately if this is demonstrative of leadership in IT.

Having candidates bring in samples of code, and having the interviewer and interview team review them and ask why decisions were or weren’t made, is an excellent way of getting insight into the person’s problem solving skills without the trained dog and pony show. Asking people what approach they would use in a situation is superior to giving them a random memory test on keywords. Providing applications and having the person offer their own critique is an amazingly effective way of getting insight not only into their problem solving skills, but also into their personality. If they point out errors but do so in a thoughtful manner, it’s a heck of a lot better than doing so in as scathing a manner as possible.

Looking at past applications or effort is another effective approach. New programmers with no job experience can provide pointers to open source applications; experienced people who have worked in an NDA situation can provide pointers for discussions and work online: heck, Google the person’s name–that will tell the interviewer much more about the person than a silly programming test.

That primitive techniques such as the abysmally stupid “FizzBuzz” approach are used shows that companies are still missing out on good people because they have idiots doing most of the interviews.

And making the leap between how people do on interviews into such grand claims that programmers can’t program demonstrates that idiocy travels up the food chain.

You know what’s especially humorous? All the people who solve the test questions in the comments. What possible reason would a person have to do such a thing? It’s completely irrelevant to the environment in which these so-called tests are given. This no more shows that these people can program than it shows that the other people can’t.

The lack of logic in this whole thread is amazing.

What’s less funny, though, is the slavish adherence to Joel Spolsky’s elitist crap. Joel runs a smallish computer company with limited products: what the hell makes him the definitive answer on these topics? Perhaps people should spend less time making pronouncements, and more time developing independent thinking skills.

Many of the comments on the Coding Horror post do mention these concerns, and offer other effective approaches to interviewing. If the people who create these tests actually read those responses, some good will have come from the discussion.

I have found, though, that people who write these kinds of tests aren’t always willing to consider other options. The other approaches just aren’t ‘clever’.

Categories
Burningbird

Known universe

I’m not sure why I’m getting 404 log entries for pages that WordPress serves. The material online that mentions this seems confused. If anyone has any suggestions, I have ears enough.

I think I have all my redirects in place, other than managing a 401 document, which Phil suggested; that’s what’s causing the conflict between WordPress’s htaccess-that-ate-the-world and authenticated subdirectories.

I have too many categories, and need to merge these, and also provide redirects when I do. That’s the number one reason NOT to use categories in permalinks. Oh well, life needs challenges.

What I also need to do is create 410 Gone entries in my .htaccess file for the permanently removed resources. Until that’s finished, and until I recover from my merge, people will have to suffer through my 404 page.
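For what it’s worth, the entries themselves are simple. A sketch of what I have in mind, using Apache’s mod_alias, with made-up paths standing in for the real ones:

```apache
# Permanently removed resources answer with 410 Gone (paths are examples)
Redirect gone /fiction/old-essay.htm
RedirectMatch gone ^/archives/2002/.*$

# A merged category redirects permanently to its new home (also an example)
RedirectMatch 301 ^/categories/semweb/(.*)$ /categories/rdf/$1
```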