Categories
Writing

O’Reilly and the goodies

Kathryn Barrett recently responded to an O’Reilly author who was unhappy about not having Safari Online access. I’ve seen these complaints before, which puzzle me because I’ve had Safari Online access since the online site was first launched. Which, I guess, means I’ve been an O’Reilly author for a long time.

O’Reilly is also good about sending us free books, which I appreciate. Reading what other people write on a topic helps ensure that I’m not covering too much of the same ground in any of my ongoing efforts. In addition, new stuff keeps my synapses shiny and sharp.

Instead of just passively receiving the books, as I’ve done in the past, I’ve decided to start writing reviews of some of the books I receive from O’Reilly. I’ll also write reviews of books sent from other publishers. I like free books.

Kathryn also writes about the new author portal. I haven’t been making as much use of it as I should. The one aspect I really like, maintaining errata, isn’t fully operational yet. As for the publicity pieces of the portal, much of my indifference to the site is because O’Reilly fosters a competitiveness among its authors that I find off-putting. For instance, I am not a “five star” best selling author, and therefore I don’t rate the front page. I can see the company’s point of view, but it’s difficult enough being motivated without being constantly reminded that my books are not rated as highly, nor selling as well, as others.

The computer book industry is a different industry now than it was five years ago. I guess we either have to adapt, or leave.

Categories
Books JavaScript

Douglas Crockford’s Good Parts of JavaScript

Recovered from the Wayback Machine.

My editor at O’Reilly sent me a copy of Douglas Crockford’s JavaScript: The Good Parts, which I found to be both interesting and useful. The volume is slim, 153 pages in all, but packed full of information about JavaScript—the good parts and the bad.

I found myself nodding more than once, raising my eyebrow a couple of times, and chuckling a time or two, especially when reading about the equality operators, those “evil twins” of JavaScript, according to Crockford.
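The “evil twins” are == and !=, which coerce their operands to a common type before comparing. A few examples of my own (not taken from the book) show why Crockford recommends sticking to === and !== instead:

```javascript
// The loose equality operator (==) coerces types before comparing,
// which produces surprising, non-transitive results.
console.log('' == '0');          // false
console.log(0 == '');            // true
console.log(0 == '0');           // true
console.log(false == 'false');   // false -- 'false' coerces to NaN
console.log(false == '0');       // true
console.log(null == undefined);  // true

// The strict operators (=== and !==) compare without coercion,
// so the answers are never surprising.
console.log(0 === '');           // false
console.log(0 === '0');          // false
console.log(null === undefined); // false
```

Note the lack of transitivity above: 0 equals '' and 0 equals '0', yet '' does not equal '0'.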

The book is not intended for neophyte developers, or even those new to JavaScript. It does, however, give you insight into the mind of a very experienced JavaScript developer. Such insight can provide the information necessary to take you from being familiar with JavaScript to being experienced with JavaScript. In particular, the book’s fourth chapter on Functions is, by itself, worth the price of the book.

Crockford writes clearly and without pretension, which I found refreshing. His aim is to clarify, but without pandering. He doesn’t hold you by the hand, and don’t expect him to explain every last bit of the subjects introduced. However, reading through his material is a nice way to confirm whether your understanding of JavaScript is comprehensive, or whether you’re missing some of the bits. I particularly liked his chapter on regular expressions, because I suck at regular expressions.

You’ll also be served a hefty dose of Crockford’s opinions on the JavaScript language, which is no bad thing. I didn’t necessarily agree with all of his opinions, such as avoiding the use of new, but I liked reading the opinions because they help me question my own use of the JavaScript language: is this necessary? Could this be improved? Why am I doing this?
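For the curious, the wariness about new stems largely from what happens when a constructor function is accidentally called without it: in non-strict code, this binds to the global object and the new properties leak into it. A factory function avoids the trap entirely. A small sketch of my own, not from the book:

```javascript
// Constructor style: works, but only if you remember `new`.
// Calling Cat('Fluffy') without `new` would, in non-strict code,
// quietly attach `name` to the global object instead.
function Cat(name) {
  this.name = name;
}
var kitty = new Cat('Fluffy');
console.log(kitty.name); // "Fluffy"

// Factory style: no `new` required, so there is nothing to forget.
function makeCat(name) {
  return {
    name: name,
    speak: function () {
      return name + ' says meow';
    }
  };
}

var cat = makeCat('Fluffy');
console.log(cat.speak()); // "Fluffy says meow"
```

Whether the extra safety is worth giving up the constructor idiom is exactly the kind of opinion the book invites you to argue with.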

I don’t usually have opinions, good or bad, about components of a language. I either like the language, and learn to adapt to the awkward pieces; or I don’t like the language at all. I like JavaScript, so I tend to like all of JavaScript, even the grungy parts. If there’s one thing I consider to be a “bad” part of JavaScript, it is the experts in JavaScript who tell us not to do this or that, but either give no reason for their opinion, or give a reason that borders on the obtuse and cryptic, assuming, of course, that we know exactly what they’re talking about (if we’re worth anything as programmers, goes the implication).

Reading Crockford laying out his opinion as to what he considers “bad” in JavaScript, and why, in clear, unambiguous terms—with examples—is like a breath of fresh air. His doing so is also worth the price of the book (leaving me to wonder whether I should, in all fairness, pay twice). I can only hope other experts, in whatever language, follow his lead.

My only quibble with the book is one paragraph in the chapter on Objects, and that’s because I’m still puzzled by what I read. Crockford writes:

The simple types of JavaScript are numbers, strings, booleans (true and false), null, and undefined. All other values are objects. Numbers, strings, and booleans are object-like in that they have methods, but they are immutable. Objects in JavaScript are mutable keyed collections.

My understanding is that immutable objects are still objects, not some form of pseudo-object or second-class object. If I had been a tech reviewer for this book, I would have asked for additional clarification of this paragraph in the text.
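What Crockford appears to mean is that while primitive values carry methods, no method can change the underlying value; every method that “modifies” a primitive actually returns a brand new value. A quick illustration of my own:

```javascript
// Strings, numbers, and booleans are object-like: they have methods.
var original = 'javascript';
var upper = original.toUpperCase();

// But the values themselves never change; the method returned
// a new string and left the original untouched.
console.log(upper);    // "JAVASCRIPT"
console.log(original); // "javascript"

// Numbers have methods too, with the same behavior: a new value
// comes back, the primitive itself is immutable.
var n = 42;
console.log(n.toString(2)); // "101010"
console.log(n);             // 42
```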

Other than this relatively minor quibble, though, I really enjoyed this book. It is a nice read, and invaluable for any JavaScript developer.

Categories
Semantics Writing

RDF too

Congratulations to the RDFa folks for rolling out a release candidate of RDFa for XHTML. Now that I’ve finished tweaking site designs, my next step is to see about incorporating smarts into my pages, including the use of RDFa. In addition, I also want to incorporate the RDF Drupal modules more closely into the overall functionality. The SPARQL module still seems broken, but the underlying RDF modules seem to be working now.
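For anyone who hasn’t looked at it yet, RDFa works by embedding RDF statements into XHTML attributes such as about and property. A minimal sketch, with a hypothetical subject URL and the Dublin Core vocabulary:

```xml
<!-- Hypothetical example: a book review marked up with RDFa -->
<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.com/reviews/good-parts">
  <span property="dc:title">JavaScript: The Good Parts</span>,
  written by <span property="dc:creator">Douglas Crockford</span>,
  published in <span property="dc:date" content="2008">2008</span>.
</div>
```

The human-readable text and the machine-readable triples live in the same markup, which is the whole appeal.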

The RDFa release candidate is timely, as I gather the BBC has decided to forgo microformats in favor of RDFa. This has re-awakened the “microformats vs. RDFa” beast, which we thought we had lulled to sleep. I guess we need to switch lullabies.

Speaking of lullabies, I had hoped to start work on the second edition of Practical RDF later this year, but it is not to be. The powers-that-be at O’Reilly aren’t comfortable with a second edition and have accepted another book proposal that covers some of what I would have covered in order to make the book livelier. There just isn’t the room for both.

I am disappointed. The first version of “Practical RDF” was difficult because the specification was undergoing change, the semantic web wasn’t generating a lot of interest, and there weren’t that many RDF-based applications available. Now, the specs are mature, we have new additions such as RDFa, increased interest in semantics, and too many applications to fit into one book. I feel as if I started a job, and now won’t be able to finish it.

One issue in the book decision is the “cool” factor. RDF and associated specifications and technologies aren’t “cool”, in that people don’t sit around at one camp or another getting hot and bothered talking about how “RDF is going to change the world!” However, the topic doesn’t necessarily have to be “cool” if the author is “cool”, and I’m not. I don’t Twit-Face-Space-Friend-Camp-Chat-Speak-Shmooze. What I do is sit here quietly in my little corner of waterlogged Missouri, try out new things, and write about them. That’s not really cool, and two not-cools do not make a hot book.

I don’t regret my choice of lifestyle, and not being “cool”. I do regret, though, leaving the “Practical RDF” job undone. Perhaps I’ll do something online with PDFs or Kindle or some such thing.

Categories
Writing

Timing

Recovered from the Wayback Machine.

Now, we have wonderful tools to make it easy to put writing or other content online. We can think of a topic, create a writing about it, and publish it—all in five or less minutes. We’ve also come to expect that whatever is published is read as quickly. We’ve moved from multi-page writings, to a single page, to a few paragraphs, to 140 characters or less. Though there is something to be said for brevity, and it takes a true master to create a mental image that can stand alone in 140 characters or less, there still is a place for longer writings. We don’t have to be in a continuous state of noise; a race to create and to consume.

That’s a quote from my first writing at Just Shelley. It seems serendipitous, then, that Nick Carr’s article in The Atlantic, Is Google Making Us Stupid? is published the same day, because in this multi-page article, Nick questions the internet’s impact on our ability to read just such longer works.

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing.

Nick wonders if our brains are being subtly altered by the internet. That perhaps we are truly losing the ability to focus, to stay in one place, to even sit and read a book. I find the idea that the internet is actually altering how our brains work unlikely, or at least, no more likely than any other activity. He uses Nietzsche’s adoption of the typewriter as anecdotal evidence of the medium’s impact on the message, writing: Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.” Yet Kittler and now Nick both ignore the elephant in the corner: Nietzsche was a very ill man, becoming increasingly more ill as he got older, and more mad as time progressed. I find it more likely that his illness impacted his writing than the fact that he now used a typewriter rather than pen and paper.

What I think is happening—without any basis in research other than my own intuition and observation—to Nick and others, especially those who weblog, Twitter, IM and so on, is that we’ve adapted to a set of stimuli that rewards both brevity of focus and speed of response, over long-term study and thoughtful response. There is “pleasure” associated with receiving both acclaim and attention, and those who receive both excel at the 10 minute read, the five minute response. We’re not adapting so much as we’re mimicking what we see to be “successful” behavior in others so that we may, also, partake of the same “pleasure”. This is compounded by an artificial sense of urgency that has been generated in this environment: that we have to read more, and then more, in order to “be informed”—informed in this context being the breadth of knowledge, rather than the depth.

In other words, we become less like the computers we use, as Nick presumes, and more like the rats in the lab box, pushing the lever that gives a sexual stimulus over the lever that gives food, because the short term gratification outweighs the longer term need.

Regardless of cause, physical or behavioral, the end result is the same: we run the risk of losing an essential part of ourselves. As Nick eloquently puts it:

Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.

If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture.

There is much to think of from Nick’s writing. Much to absorb and more to write about later. In the meantime, do take the time to read all of the article.

(Also discussed, briefly, at CNet. I’d be curious how many people who wrote comments have actually read the entire work, because as far as I can see, CNet isn’t linking directly to Nick’s article.)

Categories
Media SVG Writing

Working…

I’m almost ready to go live with the site. Right now I’m trying to create a custom Drupal theme from this site’s design. Once that’s finished, then we’ll be in business.

The image below was created by converting two bitmap graphics, the book cover and a painter’s easel, into one combined image using SVG (Scalable Vector Graphics).

Though the book cover image was large enough for my intended use, the easel wasn’t, and converting to SVG allowed me to resize the image beyond its original dimensions without pixelation. The combined image was sized to what you see here, and then converted back into a bitmap graphic, in this case a PNG.
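For anyone curious about why this works: SVG describes shapes rather than pixels, so a traced image can be scaled to any size before rasterizing. A rough sketch of the idea (path data elided, structure hypothetical):

```xml
<!-- Hypothetical combined image: the traced easel scaled up,
     with the book cover layered on top, then exported to PNG -->
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="500">
  <!-- the vector-traced easel, scaled 2x with no pixelation -->
  <g transform="scale(2)">
    <path d="..." fill="#8b5a2b"/>
  </g>
  <!-- the book cover, positioned on the easel -->
  <g transform="translate(100,60)">
    <path d="..." fill="#334455"/>
  </g>
</svg>
```

The scale and translate transforms operate on the shape descriptions themselves, which is why the enlargement stays crisp.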

I used Vector Magic to convert the bitmap images to SVG and Inkscape to convert back to the bitmap. Inkscape also has a bitmap trace function to convert from bitmap to vector (SVG), but I’ve not found it to be as good as Vector Magic for my purposes.

I received my inspiration for the drop shadowed clip art used in all of my sites from the old English/Victorian toy theaters. These wonderful creations featured static backdrops painted like a theater set, with characters that could be clipped or cut out from a book, pasted to a stick and then used to re-create a specific play. Ironically enough, toy theaters lost their popularity with the advent of television, itself endangered by the increasing use of the web to deliver video content. What goes around, comes around.

All is not lost for toy theater, though. Released last year, with a US release planned for this summer, a new movie adaptation of Dante’s Inferno was created with a modern theme and as toy theater. If your computer can swing it, select the HD trailer. Note that this trailer does have a mature theme.

For the more ambitious, a laptop framed in a toy theater box.