Categories
Technology

The CPU Bug: when being clever bites us in the butt

Update:

Ars Technica has produced the best write-up on what’s happening with Meltdown and Spectre.

Earlier:

Two days ago, the tech world ran into a wall. Probably the best summation of the event is the following tweet:

https://twitter.com/EmilyGorcenski/status/948969633683558401

Yes, this is about the CPU bug. The CPU bug. The bug that demonstrates that sometimes being clever isn’t the same as being smart. The one that gave us Meltdown and Spectre.

I’m not going to repeat what others have reported, including The Register, The Verge, Business Insider, and Google. But I wanted to specifically point out a New York Magazine piece because it does a good job of explaining why your IT person is currently under the desk, sobbing. Money section:

So basically every computer in the world is broken for the foreseeable future?
To quote Reverend Lovejoy: Short answer, “yes” with an “if.” Long answer, “no” with a “but.”

We’re likely looking at a world where there are pre- and post-Spectre processors. The lead time for new processors is measured in years, so there aren’t going to be quick fixes. There will be continuous software patches as new Spectre vulnerabilities are found in the meantime.

But the root of the problem for both Meltdown and Spectre is one of unforeseen consequences. Guessing what your computer is going to need to do next is a tremendously clever and relatively cheap way to speed up processors, and once it was discovered, Intel pushed aggressively to see how far they could go with it. (This is why Meltdown has hit Intel so hard.)
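
For the curious, the heart of the problem fits in a few lines of C. This is the bounds-check-bypass pattern from the published Spectre (variant 1) research, with illustrative names and sizes rather than a working exploit:

```c
#include <stddef.h>
#include <stdint.h>

/* Spectre variant 1 (bounds-check bypass) sketch, following the pattern
 * described in the published research. Array names and sizes are
 * illustrative only; this is not a working exploit. */
uint8_t array1[16];
uint8_t array2[256 * 4096];
unsigned int array1_size = 16;

void victim_function(size_t x)
{
    if (x < array1_size) {
        /* If the branch predictor has been trained on in-bounds values of x,
         * the CPU may speculatively run this body for an out-of-bounds x.
         * The dependent load on array2 leaves a footprint in the cache that
         * a timing attack can later read back, leaking array1[x]. */
        uint8_t secret = array1[x];
        volatile uint8_t y = array2[secret * 4096];
        (void)y;
    }
}
```

The speculative work gets thrown away architecturally, but the cache doesn’t forget, and that’s the whole trick.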

Designing processors and computers is an extremely difficult and extremely expensive proposition, and the market incentive for manufacturers is always going to be speed, not security. Even after Spectre disappears from the landscape, it’s a near certainty that some other vulnerability will show up on the scene.

In the meantime, stuff is getting patched. Not fixed…patched.

Microsoft just released an emergency patch (which just showed up on my PC, time to update), the current version of Firefox protects against the vulnerabilities, upcoming Android and Chrome updates should help on those platforms, Intel is putting out microcode patches, and the fixes go on.

But we’re in this one for the long haul.

Categories
Diversity Technology

Robert Scoble: Tech’s Weinstein moment

Earlier today I was stunned to read about the accusations of sexual harassment against Robert Scoble.  We aren’t friends, but we have friends in common and we have interacted remotely in the past.

I had no idea, no clue, that Scoble had harassed women. There are some people you might suspect of doing so, and some people you don’t, and before today I would have listed Scoble in the latter category. It just goes to show that on the internet, people don’t always know you’re a dog.

Categories
Technology Web

Moving to HTTP/2

I upgraded my server to Ubuntu 16.04, converted my websites over to HTTPS, and locked them in using HSTS. It would be a shame to stop here without making one last change: upgrading to the HTTP/2 protocol.

The web has been spinning along on HTTP/1.1 since 1997. It’s been a good and faithful servant. However, the protocol is showing its age, and developers have leaned on gimmicks and workarounds, such as domain sharding and image sprites, just to load today’s heavier pages efficiently. HTTP/2 tackles this by multiplexing many requests over a single connection.
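
Once the server side is switched over, it’s worth confirming that HTTP/2 is actually being negotiated. Here’s a minimal C sketch using libcurl (it assumes a reasonably recent libcurl, roughly 7.50 or later, built with HTTP/2 support; example.com stands in for whichever site you’re checking):

```c
#include <stdio.h>
#include <curl/curl.h>

/* Ask for HTTP/2 over TLS and report which protocol version the server
 * actually negotiated. The URL is a stand-in; point it at your own site. */
int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2TLS);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* a HEAD request is enough */

    if (curl_easy_perform(curl) == CURLE_OK) {
        long version = 0;
        curl_easy_getinfo(curl, CURLINFO_HTTP_VERSION, &version);
        printf("Negotiated %s\n",
               version == CURL_HTTP_VERSION_2_0 ? "HTTP/2" : "HTTP/1.x");
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

From the command line, curl -sI --http2 https://example.com/ shows the same thing in the response status line.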

Categories
Social Media Technology

Brought to you by HTTPS

As you can see when you access this page, I’ve made the move to HTTPS. I detail the experience at my new technology-only site, Shelley’s Toy Box.

I upgraded my server before I made the move, and eliminated all the cruft. I also moved my DNS records over to my name registrar, rather than managing them on the server.

All in all, the experience was challenging at times, but also interesting. It was fun tinkering with the tech, and I need to do more tech tinkering in the future.

One of the downsides to the move is removing my archived statically generated HTML pages. I now get, on average, over seven hundred 404 requests a day. The numbers will go down as I gradually add the older content into this site, and as search engines drop references to the missing pages. Still, I feel like one big link black hole right now.

The Wayback Machine is extremely helpful when it comes to recovering pages that, for whatever reason, I don’t have backups for. I even found a link to my earliest weblog, a Manila site, hosted by Dave Winer and Userland.  I was excited when I found the link. My reactions to the events of 9/11 were recorded in my Manila weblog, and I don’t have a backup of the old posts.

I could have dropkicked Dave Winer when I discovered all the pages have the same message:

Your crawler is hitting our servers too hard. Please slow down, it’s hurting the service we provide to our customers. Thanks. webmaster@userland.com.

Thankfully most of the pages for my many other sites and weblogs are intact. When I restore a page, I try to include a link to the Wayback Machine archive page, because the site also archived the comments.

Seriously, if you’re not donating to the Internet Archive, you should think about starting. It’s our history.

Categories
Internet Technology Web

The slowness of IPv6

When I set up my new server and moved my DNS records to my name registrar, I also added an AAAA record for my server’s IPv6 address (2600:3c00::f03c:91ff:fecf:250d), alongside an A record for the familiar IPv4 address (72.14.184.192). Supporting both is known as dual stack.
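
A quick way to confirm that both records resolve is a getaddrinfo() lookup with the address family left unspecified. A minimal C sketch (example.com is a stand-in for your own hostname):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

/* Resolve a name over both address families: AF_UNSPEC asks for whatever
 * A (IPv4) and AAAA (IPv6) records are published for it. */
int main(void)
{
    struct addrinfo hints, *res, *p;
    char buf[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;   /* IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("example.com", "https", &hints, &res) != 0)
        return 1;

    for (p = res; p != NULL; p = p->ai_next) {
        void *addr;
        if (p->ai_family == AF_INET)
            addr = &((struct sockaddr_in *)p->ai_addr)->sin_addr;
        else
            addr = &((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof(buf));
        printf("%s: %s\n", p->ai_family == AF_INET ? "IPv4" : "IPv6", buf);
    }

    freeaddrinfo(res);
    return 0;
}
```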

I didn’t have to support IPv6, since I do have an IPv4 address, but if I’m going to do the shiny and new with my site, I’m going to go shiny and new all the way.

Besides, there’s no more room at the inn with the old IPv4 system. IPv4 addresses are only 32 bits long, which allows for roughly 4.3 billion (2^32) addresses, and they’ve all been assigned.

Yeah, haven’t we been prolific on the web.