Categories
Writing

O’Reilly and the goodies

Kathryn Barrett recently responded to an O’Reilly author who was unhappy about not having Safari Online access. I’ve seen these complaints before, and they puzzle me, because I’ve had Safari Online access since the site was first launched. Which, I guess, means I’ve been an O’Reilly author for a long time.

O’Reilly is also good about sending us free books, which I appreciate. Reading what other people write on a topic helps ensure that I’m not covering too much of the same ground in any of my ongoing efforts. In addition, new stuff keeps my synapses shiny and sharp.

Instead of just passively receiving the books, as I’ve done in the past, I’ve decided to start writing reviews of some of the books I receive from O’Reilly. I’ll also write reviews of books sent from other publishers. I like free books.

Kathryn also writes about the new author portal. I haven’t been making as much use of it as I should. The one aspect I really like, maintaining errata, isn’t fully operational yet. As for the publicity pieces of the portal, much of my indifference to the site comes from the fact that O’Reilly fosters a competitiveness between authors that I find off-putting. For instance, I am not a “five star” best-selling author, and therefore I don’t rate the front page. I can see the company’s point of view, but it’s difficult enough staying motivated without being constantly reminded that my books are not rated as highly, nor selling as well, as others.

The computer book industry is a different industry now than it was five years ago. I guess we either have to adapt, or leave.

Categories
Copyright

A quiet take on the AP

Recovered from the Wayback Machine.

Some people are still “waiting” on the AP to deliver a definitive guide to what can or cannot be copied from AP material without risk of a DMCA notice. We really don’t need to wait, nor do we need anything from the AP. We have copyright laws in this country, and they include the concept of “fair use”, which we can continue to use as a guide for our own writing.

People do need to look at how they quote and use others’ work. If you feel that your use is justified and covered under Fair Use provisions, then full speed ahead and damn the consequences. You may be served a DMCA notice; you may not. Receiving one is not a judgment, and you won’t be hauled off to jail. In fact, you don’t even have to respond by pulling the material if you really feel you’re on the right side of the law.

I wouldn’t necessarily expect that you’ll get legal help, though. This environment tends to favor the noisy and the known. If you’re neither, chances are you’ll be on your own if you receive a DMCA notice. That doesn’t mean you shouldn’t feel free to quote others, or to use AP material. It just means that you have to accept the consequences of your actions when you publish online and use others’ material.

As for the AP’s DMCA notices supposedly being based on the title and lede (or lead) alone, where the lede is the first few sentences of the story, I think we were misdirected into focusing on the content of each individual quote, rather than the context of all the quotes combined.

AP licenses entire stories, but it also licenses a feed of AP news items reflecting just the title and lede of the story. You can see an example of licensed material at the Huffington Post. Notice that the copyrighted material in this context is not limited to an individual story, but to the grouping of titles and ledes for several different stories.

People have been assuming that the AP is upset that people are quoting one title and one lede. We’ve ignored the hints, given in relation to the Drudge Retort, that it was a pattern of posting, of quoting multiple titles and multiple ledes over time, that ultimately resulted in the AP issuing the DMCA notice.

If we consider that the ledes are only 30 to 50 words, it seems unreasonable for the AP to resort to the DMCA. However, if something like the Drudge Retort duplicates 3, or 5, or more of these syndicated story titles and ledes, what the site is doing is actually “copying” what amounts to 10, 30 percent, or more of the AP’s copyrighted material, not a few words from an individual story, as first discussed.

If the AP charges a site like the Huffington Post to publish this syndicated set of titles/ledes at the site, and something like the Drudge Retort is duplicating a significant number from this set, using virtually the same titles and lede wording, without adding additional commentary, the Drudge Retort could very well be violating the AP’s copyright, and doing so in such a way as to cause financial harm to the AP.

The issue really is, and the AP stressed this, copy-and-paste publication. If you copy and paste the title and the lede, and add no commentary, you’re not adding value to what you’re publishing. You’re just duplicating the content. There’s nothing wrong with pulling out an individual quote from a story you like and publishing it by itself. However, if your publication falls into a pattern that is very similar, or even equivalent, to an individual’s or group’s copyrighted publication of the same, don’t get all huffy just because you only publish a few words from each story.

We shouldn’t extrapolate from the AP to something like delicious or the Planets (RDF, Drupal, Intertwingly, and others), because they’re not the same. I don’t know of anyone who licenses their syndication feed and would feel financial harm if that feed were republished alongside a group of others. The purpose of the Planets is to give exposure to individual publications and people who don’t get exposure from being part of a major news source like the AP. However, taking our syndicated feeds and republishing them in their entirety at another site, which then runs ads that benefit that second site, is a different story. In fact, if we decry the existence of “splogs”, we should find ourselves on the side of the AP, if we’re being intellectually honest.

Now, some would say that the AP really will go after us if we only publish one title and one lede. Please forgive me if I doubt any such thing would happen. Common sense would dictate this, if nothing else. And common sense is what we should be using when it comes to copyright and fair use.

I’m really not defending the AP so much as I am disappointed at how quickly people are willing to pile-on when the right stereotypes are triggered. We see the AP, big company, the Drudge Retort, small publication, and we become effectively blind—to both reason and fairness. More disturbingly, we become ripe for manipulation from those who care little for the consequences of the event, as long as the attention keeps flowing. The AP can protect itself, but the same cannot be said of every target of the pile-on effect.

Categories
Stuff

Four shorts make a long

Protect your Naughties

Seth Finkelstein has a timely Guardian article on Judge Kozinski and his exposed naughty bits.

I’m usually careful about making sure that whatever I don’t want exposed to general access is either not located in a web-accessible location, or is password protected. I don’t depend on robots.txt to ensure web bots don’t access or expose what I don’t want found. As it is, copies of my book, “Painting the Web”, have been appearing as BitTorrent downloads, and I’m not sure if these were based on the copies of chapters I hosted online for my reviewers to download. I password protected the material, but I don’t know how else the material came to be exposed to the P2P “Gimme it for free” crowds.

What think? .burningbird?

Virginia DeBolt writes about the new ICANN boutique domain names, which will spawn chaos while generating money for a select few registrars (not to mention ICANN, which has now become an organization seemingly interested in profit).

What I want to know is, how am I going to be able to buy .burningbird? Think I should start a PayPal account, and ask for donations?

The bird is back!

Stavros the Wonderchicken is back, weblogging! If you don’t know Stavros, you’re in for a treat. The man is twisted, but in a, well, twisted sort of way.

Every time someone I’ve known a long time re-appears after a lengthy hiatus, I think of the others who I’d like to see writing online again: Jonathon Delacour, Phil Ringnalda, Kathy Sierra, to name just a few. I must not get greedy, though. It’s good to see Chris back writing again.

California, the Squid are coming

An Architeuthis dux, or giant squid, was discovered off the coast of California, a rare location for these elusive creatures. This one was 25 feet long, if you extrapolate the missing parts. A good size, but not the biggest, by any means.

The dissection shows massive damage from bites, but the researchers don’t know if the bites are pre- or postmortem. (via Laughing Squid)

Categories
Books JavaScript

Douglas Crockford’s Good Parts of JavaScript

Recovered from the Wayback Machine.

My editor at O’Reilly sent me a copy of Douglas Crockford’s JavaScript: The Good Parts, which I found to be both interesting and useful. The volume is slim, 153 pages in all, but packed full of information about JavaScript—the good parts and the bad.

I found myself nodding more than once, raising my eyebrow a couple of times, and chuckling a time or two, especially when reading about the equality operators, those “evil twins” of JavaScript, according to Crockford.
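
For anyone who hasn’t tripped over the evil twins yet, here’s a quick sketch of my own (not a listing from the book) showing why == and != earned the name, and why Crockford steers readers toward === and !== instead:

    // The type-coercing operators convert their operands before comparing,
    // with results that are hard to predict.
    '' == '0'           // false
    0 == ''             // true
    0 == '0'            // true
    false == 'false'    // false
    false == '0'        // true
    null == undefined   // true

    // The strict operators never coerce, so the answers are what you'd expect.
    0 === ''            // false
    0 === '0'           // false
    null === undefined  // false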

The book is not intended for neophyte developers, or even those new to JavaScript. It does, however, give you insight into the mind of a very experienced JavaScript developer. Such insight can provide the information necessary to take you from being familiar with JavaScript to being experienced with JavaScript. In particular, the book’s fourth chapter on Functions is, by itself, worth the price of the book.

Crockford writes clearly and without pretension, which I found refreshing. His aim is to clarify, but without pandering. He doesn’t hold you by the hand, and don’t expect him to explain every last bit of the subjects introduced. However, reading through his material is a nice confirmation as to whether your understanding of JavaScript is comprehensive, or if you’re missing out on some of the bits. I particularly liked his chapter on regular expressions, because I suck at regular expressions.

You’ll also be served a hefty dose of Crockford’s opinions on the JavaScript language, which is no bad thing. I didn’t necessarily agree with all of his opinions, such as avoiding the use of new, but I liked reading the opinions because they help me question my own use of the JavaScript language: is this necessary? Could this be improved? Why am I doing this?
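
To give a flavor of the sort of opinion I mean, here’s a rough sketch of mine, not an example copied from the book, contrasting a constructor function invoked with new against the prototypal style Crockford favors:

    // With new: forget the keyword and 'this' silently binds to the global
    // object, and the constructor's properties leak into it.
    function Cat(name) {
        this.name = name;
    }
    var felix = new Cat('Felix');   // works as intended
    // var oops = Cat('Oops');      // no error, but 'name' leaks globally

    // Without new: make new objects directly from an existing object,
    // using a helper along the lines of the one Crockford describes.
    if (typeof Object.create !== 'function') {
        Object.create = function (o) {
            function F() {}
            F.prototype = o;
            return new F();
        };
    }

    var cat = {
        name: 'Felix',
        speak: function () {
            return this.name + ' says meow';
        }
    };

    var kitten = Object.create(cat);   // kitten inherits from cat
    kitten.name = 'Tom';
    kitten.speak();                    // "Tom says meow"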

I don’t usually have opinions, good or bad, about components of a language. I either like the language, and learn to adapt to the awkward pieces; or I don’t like the language at all. I like JavaScript, so I tend to like all of JavaScript, even the grungy parts. If there’s one thing I consider to be a “bad” part of JavaScript, it is the experts in JavaScript who tell us not to do this or that, but either give no reason for their opinion, or the reason they give borders on the obtuse and cryptic—assuming that we, of course, know exactly what they’re talking about (if we’re worth anything as programmers, goes the implication).

Reading Crockford laying out his opinion as to what he considers “bad” in JavaScript, and why, in clear, unambiguous terms—with examples—is like a breath of fresh air. His doing so is also worth the price of the book (leaving me to wonder whether I should, in all fairness, pay twice). I can only hope other experts, in whatever language, follow his lead.

My only quibble with the book is one paragraph in the chapter on Objects, and that’s because I’m still puzzled by what I read. Crockford writes:

The simple types of JavaScript are numbers, strings, booleans (true and false), null, and undefined. All other values are objects. Numbers, strings, and booleans are object-like in that they have methods, but they are immutable. Objects in JavaScript are mutable keyed collections.

My understanding of immutable objects is that these are objects, not some form of pseudo-object, or second class object. If I had been a tech reviewer for this book, I would have asked for additional clarification of this paragraph in the text.
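
For what it’s worth, here’s a small example of mine illustrating the behavior the paragraph describes: the simple types have methods, but they can’t be changed or given new properties, while objects can be reshaped at will:

    // Strings have methods, but every operation hands back a new value;
    // the original is never altered.
    var s = 'immutable';
    s.toUpperCase();    // "IMMUTABLE"
    s;                  // still "immutable"

    s.extra = 'tacked on';
    s.extra;            // undefined -- the property silently fails to stick

    // Objects are mutable keyed collections: properties can be added,
    // changed, and removed after the object is created.
    var o = { word: 'mutable' };
    o.word = 'changed';
    o.extra = 'tacked on';
    delete o.word;
    o;                  // { extra: 'tacked on' }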

Other than this relatively minor quibble, though, I really enjoyed this book. It is a nice read, and invaluable for any JavaScript developer.

Categories
RDF

RDF Not

I must reluctantly put away the RDF and SPARQL modules for Drupal, at least for now. Both are very new, mostly undocumented, and support seems fragmented. I’ve tried for hours to get the SPARQL endpoint to work on this site, with no luck, and I’m not sure who to ask for clarification.

I can read the code, but much of it is specific to Drupal development. Until I’m more familiar with Drupal, looking through the code isn’t going to be overly useful.

Once I finish the second edition of JavaScript, I can spend more time with Drupal module development and these modules. By then, more mature versions of the modules might be out, as well as some documentation on how to use the beasties.