Categories
JavaScript Technology

Sun sells out and there goes Java and MySQL

I guess I will now be looking at how to port my Drupal installations to PostgreSQL, since Sun sold out to Oracle. The Java issue doesn’t impact me, as I saw the writing on the wall regarding Java a long time ago.

However, support for MySQL will most likely be completely undercut, if not eliminated. Or it will go through that fine Oracle touch, which means you can’t depend on support for the database in the future—not without it being either bloated, or “monetized” in some way. This is how Oracle works.

I can hear it now: But MySQL is open source. Oracle can’t hurt it, because it’s open source!

Being “open source” will protect MySQL. Yeah, right. And I believe I’m Superwoman and can’t be hurt by bullets, so just shoot me now.


I completely forgot about Sun and OpenOffice. I use OpenOffice for all of my writing. Guess I can kiss that good-bye, too.

I’d like to just kick IBM right now, for transmorphing back into the stupid, clumsy Big Blue dinosaur of days of yore. It let itself down by not buying Sun. And it let the rest of us down, too.


Interesting reading the old post on Sun buying MySQL AB, from last year.

I think Sun is the best possible buyer, because of the following reasons: (Note that this is of course my interpretation)

  • Sun is committed to open source.
  • Sun doesn’t have a database of their own; in other words, no risk of internal conflicts between similar products.
  • Sun understands what it means to be a virtual company where people work from home.
  • Sun has a good understanding of developers’ needs and there is a good chance that the integration of the two companies will be relatively smooth.
  • Sun has said they will let the MySQL developers continue work as before in their own unit and without big changes (except of course changes for the better!).

Of course, the original founder of MySQL left Sun, and started another open source MySQL company. We’ll see where this goes.


One last update: the original founder of MySQL, Michael Widenius, has posted a note on the Oracle/Sun merger and MySQL.

The biggest threat to MySQL future is not Oracle per se, but that the MySQL talent at Sun will spread like the wind and go to a lot of different companies which will set the MySQL development and support back years.

I would not like to see this happen and I am doing everything I can do to keep this talent pool together (after all, most of them are long time personal friends of mine). I am prepared to hire or find a good home (either at Monty Program Ab or close to it) for all core MySQL personnel.

The man is probably inundated with resumes right now.

Categories
Social Media Web

My abbreviated self

I discovered that a URL has to be less than 30 characters, or Twitter automatically creates a Tinyurl version of the URL. This, even if the entire message is less than 140 characters.

There’s no way I can create URLs that are less than 30 characters and still maintain my subdomain designations. Therefore I’m not going to try, and will most likely be removing any short URL stuff here. With all the recent “one million followers” foo flah, including the breathless claim that one person achieving one million Twitter followers is equivalent, in scientific importance, to space flight and landing a man on the moon, I would just as soon stick with stodgy old weblogging.

Weblogging, where no one really knows how many people are following you, most people don’t care, we can actually communicate complete thoughts, and do what we want with our URLs.


From today’s WhatWG IRC:

hsivonen: I can imagine all sorts of blog posts about evil HTML5 raining on the rev=canonical backpattery parade

svl: Mostly (from what I’ve seen) it’s been “let’s all use this en-masse, so html5 will be forced to include this”.

Of all the items in contention with the HTML5 working group, the use of rev=canonical is not high on my list. Why? Because there’s no real argument for its use, a lot of good arguments against its use, and it’s just as easy to use something else.

This all came about because Twitter was built first, and designed later. One of the difficulties in keeping a message to 140 characters is that URLs can take 140 characters, and more. Yet there is no URL shortening mechanism built into Twitter. Not only is there no built-in mechanism, Twitter itself uses a third-party service: tinyurl.com.

Now, all of a sudden, people are in a dead cold panic about using a service that may go away, leaving link rot in Twitter archives. I hate to break it to the folks so worried, but it will probably be a cold day in hell before anyone digs into Twitter archives. Most of us can’t keep up with the stream of tweets we get today, much less worry about yesterday’s or last week’s.

But there are a lot of other problems associated with using a third-party service. Problems such as the recent Twitter follies, otherwise known as Twitter Been Hacked, which ended up being a not particularly fun Easter egg this weekend. When you click a TinyURL link, you don’t know what you’re going to get, where you’re going, or worse, what will happen to you when you get there. Even Kierkegaard would have a problem with this leap of faith.

There’s also an issue with search engine link credit, not to mention everyone using different URL shortening services so you can’t tell if someone has referenced one of your posts in Twitter, or not. This didn’t use to be a problem, but since everyone does most of their linking in Twitter now, it gets mighty quiet in these here parts. You might think, sigh, no one likes what you’re doing, only to find out that a bunch of people have come to your party, but the party’s been moved to a different address.

So I think we can agree that third-party URL services may not be the best of ideas. I, personally, would rather we provide our own short URLs. Not only would we get the search engine credit, it would encourage use of the same URL in Twitter, which might help us find the party we lost. Plus, wouldn’t you rather click a link that has burningbird.net in it, than one that has dfse.com? Implementing our own short URLs should be simple in this day and age of content management systems. All we need to do is agree on a form.
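As a sketch of just how simple this could be: a content management system could derive a short URL by base-62 encoding the post’s database ID. The domain and the ID below are made up for illustration, and any real implementation would differ in the details:

```python
import string

# Base-62 alphabet: digits, then lowercase, then uppercase letters.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def shorten(post_id: int) -> str:
    """Encode a numeric post ID as a short base-62 slug."""
    if post_id == 0:
        return ALPHABET[0]
    slug = []
    while post_id:
        post_id, rem = divmod(post_id, 62)
        slug.append(ALPHABET[rem])
    return "".join(reversed(slug))

# A hypothetical post with database ID 18529 yields a slug only a
# few characters long, keeping the whole URL well under 30 characters.
print("http://burningbird.net/" + shorten(18529))
```

The site would then only need a rewrite rule mapping the slug back to the numeric ID, and a redirect to the full permalink.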

Agree? Did someone say, agree?

As I wrote earlier, I’ve heard too many good arguments against rev=canonical, including the fact it’s too easy to make a typo and write rev=canonical, when we mean rel=canonical, and vice versa. In addition, rel is in HTML5, rev is not, and I’m not going to hammer a stake in the ground over rel/rev. I’m keeping my stakes for things that are important to me.

Note to HTML5 WG: she has a hammer. And stakes.

As for what attribute value to use with rel, whether it’s shortlink or shorturl or just plain short, I don’t care. I took about five minutes to implement shortlink in this space, because it’s the option currently listed in the rel attribute wiki page. However, it would only take about a minute to change to shorturl. I even added the short link to the bottom of my posts, where it can be copied and pasted into a Twitter post, if you’re so inclined. See, I don’t have to wait for anyone’s approval; I am empowered by Happy Typing Fingers.
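For what it’s worth, the markup involved is nothing more than a link element in the page head, plus an ordinary anchor in the post footer. A minimal sketch, with a made-up short URL:

```html
<!-- In the document head: advertise this post's short URL -->
<link rel="shortlink" href="http://burningbird.net/9rv" />

<!-- In the post footer: the same short URL, ready to copy into a tweet -->
<a rel="shortlink" href="http://burningbird.net/9rv">http://burningbird.net/9rv</a>
```

That’s the whole of it, which is part of why the amount of argument over the attribute value seems out of proportion.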

Regardless of what we do, I agree with Aristotle: way too much effort on something that should be easy to decide, quick to implement, giving us time to worry about things that are important in HTML5. Things such as SVG, RDFa, and accessibility.

Other discussions related to rel/rev/tiny:

And that’s my 4424 character take on tiny URLs.


Another reason tiny URLs are getting attention is because of the evil new DiggBar. Goodness gracious, people, why on earth do you use crap like this?

Categories
Browsers

A browser is more than script

Chrome released on Linux, and IE8 released from beta. Now people are beginning to question Firefox’s ever bigger piece of the blogger pie. Case in point, PC World.

Mozilla have several grand aims, and there’s much to be admired, but they’ve forgotten how to make a decent browser. I feel plenty of loyalty for them, because they’ve done more than anybody else to further the cause of open source software in the real world. But when I tried Chrome, as incomplete as it was, I realized I’d found a replacement for Firefox. As soon as it gets to beta under Linux, I will switch to Chrome. No question. It’s just infinitely better. It’s like when we all switched from Alta Vista (or Yahoo!) to Google back in the early noughties. The king is dead! Long live the king!

I was asked my opinion about the future of JavaScript applications this week, especially in light of the blazingly fast Chrome. I was rather surprised at the emphasis on JavaScript, because a browser is more than just a machine to consume script. A browser must also render a web page as the designers built it; display photographs accurately, hopefully using any photographer-supplied profiles; render the more complex SVG, in addition to the simpler Canvas; handle complex file types, including video files, not to mention support different markups, such as XHTML in addition to HTML; and provide utilities to enhance the user’s experience, up to and including extensions, such as the one I use to collect a page’s RDFa. Why, then, are we reducing the browser to nothing more than a device to render HTML and JavaScript?

Firefox is working on its scripting engine, but it’s also been improving its graphical rendering engine, including adding built-in support for color profiles, as well as improvements in support for CSS3 and SVG. Chrome has no support for color profiles, its graphical rendering engine sucks, as can be seen if you look at CSS3 curved corners in the browser, and it regularly fails my SVG tests. Try this SVG file in Chrome, but don’t blame me if your CPU spikes. Luckily, it seems that Chrome now just aborts SVG files it can’t handle, rather than frying the CPU. Then try the same page in Safari or Firefox; though both render the page slowly, they do render it—Chrome only rendered the file the third time through. It aborted the page the first two times. And the quality of the rendering? Well, see for yourself.

Look at my photos at MissouriGreen. Most use a color profile. Now, the photos look relatively good in Chrome on Windows, because I’m favoring an sRGB color profile to ensure maximum coverage, but if Chrome is ever implemented on the Mac, the photos will look plain and washed out, as they do now with Opera. Not so with the latest Firefox and Safari.

Lastly, look at this site, or Just Shelley, in Chrome, as compared to Safari, or Firefox, or even the latest beta of Opera. I make extensive use of box and text shadows, as well as CSS3-based curved corners. No browser is perfect in its implementation of CSS3 curved corners yet, but the anti-aliasing in Firefox and Safari is vastly superior to what you’ll find in Chrome. I have noticed, though, that Chrome has improved its text and box shadows: it doesn’t plaster them halfway down the page, now.

Why, then, do we talk about how “superior” Chrome is? And how Firefox is dying? When one looks at all of the browsers from an overall web experience, only IE8 is worse than Chrome.

I apportion blame for an over-emphasis on fast script over everything else equally between Google and the current HTML5 effort. I found it telling that, at the same time people are lambasting Firefox for “slowing” down, and praising Chrome for “speeding” up, Douglas Bowman is leaving Google primarily because the company relies on engineering practices at the expense of the fundamentals of design. One doesn’t have to stretch one’s intuition to see that the “machine” is also the emphasis in Chrome. But the same could also be said about the HTML5 effort: an emphasis on mechanistic aspects, such as client-side storage and drag-and-drop, at the expense of a more holistic environment, such as including support for SVG and ensuring continued support for accessibility—though I think this week, at least, client-side storage has been pulled for inclusion…elsewhere.

Speed is important in a web browser, speed and efficiency, and Firefox isn’t perfect. Newer versions have been locking up on my Leopard machine, to the point where I now prefer Safari on the Mac. If I had to take a guess, Firefox has threading issues. It also needs to work on isolating extensions to the point where they can’t harm the overall browsing experience—or at least put something in place so that one knows certain extensions can adversely impact browser performance.

At the same time, Chrome desperately needs to improve its graphics rendering capability. As this occurs, and as Chrome gets loaded down with extensions, I don’t think we’ll see the same fast speeds when rendering pages we see now.

It’s all a question of balance—the best browsers are the most balanced browsers, and sometimes this means slower page loading in support of better page rendering. As it is, Chrome, Firefox, Safari, and Opera are all giants towering over the anemic and disappointing IE8. If we want to talk about a browser “dying”, I have a better candidate in mind than Firefox.

Categories
Burningbird Technology Weblogging

Drupal and PHP Safe Mode

I had asked about a new hosting company on Twitter. My main interest was cutting costs, but I’m also having problems with my current hosting company. Frankly, I think the problem is all of the Ruby-on-Rails applications running—database access almost comes to a standstill at times.

Regardless, the company also turned on PHP safe mode, which is going to cause nothing but havoc with my Drupal installation. I don’t have much choice about moving, now. I had a couple of suggestions for sites, including InMotion, which I’m considering. I’m concerned, though, about sites that offer unlimited bandwidth and unlimited storage. These companies tend to oversell the servers. However, InMotion does have the advantage of being very inexpensive.

Any thoughts on InMotion? Any other suggestions? I need SSH, PHP 5+, ImageMagick, prefer cPanel, Drupal friendly, and also a host that doesn’t change things on the fly.

Categories
Technology

Wolfram Alpha

Recovered from the Wayback Machine.

Sheila Lennon asked my opinion on Nova Spivack’s recent writing about Wolfram Alpha, and posted my response, as well as other notes. Wolfram Alpha is the latest brainchild of Mathematica creator Stephen Wolfram, and is a stealth project to create a computational knowledge engine. To repeat my response:

First of all, it’s not a new form of Google. Google doesn’t answer questions. Google collects information on the web and uses search algorithms to provide the best resources given specific search criteria.

Secondly, I used Mathematica years ago. It’s a great tool. And I imagine that WolframAlpha will provide interesting answers for specific, directed questions, such as “what is the nearest star” and the like. But these are the simplest of all queries, so I’m not altogether impressed.

Think of a question: who originated the concept of “a room of one’s own”? Chances are the Alpha system will return the writing where the term originated, Virginia Woolf’s “A Room of One’s Own”, and the author, Virginia Woolf. At least, it will if the data has been input.

But one can search on the phrase “A room of one’s own” and get the Wikipedia entry on the same. So in a way, WolframAlpha is more of a Wikipedia killer than a Google killer.

Regardless, when you look via Google, you get a link to Wikipedia, but you also get links to places where you can purchase the book, links to essays about the original writing, and so on. You don’t get just a specific answer; you also get context for the answer.

To me, that’s power. If I wanted answers to directed questions, I could have stayed with the Britannica years ago.

Nova Spivack’s writing on the Alpha is way too fannish. And too dismissive of Google, not to mention the human capacity for finding the exact right answer on our own given the necessary resources.

Again, though, all we have is hearsay. We need to try the tool out for ourselves. But other than helping lazy school kids, I’m not sure how overly useful it will be. If it’s free, yeah. If it’s not, it will be nothing more than a novelty.

I also beg to differ with Nova, when he states that Wolfram Alpha is like plugging into a vast electronic brain. Wolfram Alpha isn’t brain-like at all.

The human brain is amazing in its ability to take bits and pieces of data and derive new knowledge. We are capable of learning and extending, but we’re really shite, to use the more delicate English variation of the term, when it comes to storing large amounts of data in an easily accessible form.

Large, persistent data storage with easy access is where computers excel. You can store vast amounts of data in a computer, and access it relatively easily using any number of techniques. You can even use natural language processing to query for the data.

Google uses bulk to store information, with farms of data servers. When you search for a term, you typically get hundreds of responses, sorted by algorithms that determine the timeliness of the data, as well as its relevancy. Sometimes the searches work; sometimes, as Sheila found when querying Google for directions to cooking brown rice in a crockpot, the search results are less than optimum.

Wolfram Alpha seems to take another approach, using experts to input information, which is then computationally queried to find the best possible answer. Supposedly if Sheila asked the same question of Wolfram Alpha, it would return one answer, a definitive answer about how to cook brown rice in a crockpot.

Regardless, neither approach is equivalent to how a human mind works. One can see this simply and easily by asking those around us, “How do I cook brown rice in a crockpot?” Most people won’t have a clue. Even those who have cooked rice in a crockpot won’t be able to give a definitive answer, as they won’t remember all the details—all the ingredients, the exact measurements, and the time. We are not made for perfect recall. Nor are we equipped to be knowledge banks.

What we are good at is trying out variations of ingredients and techniques in order to derive the proper approach to cooking rice in a crockpot. In addition, we’re also good at spotting potential problems in recipes we do find, and able to improve on them.

So, no, Wolfram Alpha will not be like plugging into some vast electronic brain. And we won’t know how well it will do against other data systems until we all have a chance to try the application, ourselves. It most likely will excel at providing definitive answers to directed questions. I’m not sure, though, that such precision is in our best interests.

I also Googled for a brown rice crockpot recipe, using the search term, “brown rice crockpot”. The first result was for RecipeZaar, which lists out several recipes related to crockpots and brown rice. There was no recipe for cooking just plain brown rice in a crockpot among the results, but there was a wonderful sounding recipe for Brown Rice Pudding with Coconut Milk, and another for Crocked Brown Rice on a Budget that sounded good, and economical. I returned to the Google results, and the second entry did provide instructions on how to cook brown rice in a crockpot. Whether it’s the definitive answer or not, only time and experimentation will tell.

So, no, Google doesn’t always provide a definitive answer to our questions. If it did, though, it really wouldn’t be much more useful than Wikipedia, or our old friend, the Encyclopedia Britannica. What it, and other search engines, provide is a wealth of resources for most queries: resources that not only typically answer the questions we’re asking, but also offer any number of other leads, and chances for discovery.

This, to me, is where the biggest difference will exist between our existing search engines and Wolfram Alpha: Alpha will return direct answers, while Google and other search engines return resources from which we can not only derive answers but also make new discoveries. As such, Alpha could be a useful tool, but I’m frankly skeptical whether it will become as important as Google or other search engines, as Nova claims. I don’t know about you all, but I get as much from the process of discovery, as I do the result.


Nova released a second article on Wolfram Alpha, calling it an answer engine, as compared to a search engine. In fairness, Nova didn’t use the term “Google killer”, but stating that the application could be just as important as Google does lead one to make such a mental leap. After all, we have human brains and are flawed in this way.

As for artificial intelligence, I wrote my response to it on Twitter: It astonishes me that people spend years and millions on attempting to re-create what two 17 year olds can make in the back seat of a car.