Categories
Burningbird Social Media Technology Weblogging

Mastodon and Burningbird

The social media upheaval continues, but things are starting to quiet down a bit. Oh, you can’t tell this from the media, which is full of stories leading with “Elon Musk says…”, but that’s primarily because the media hasn’t figured out how to wean itself off Twitter yet.

I quit Twitter the day that Musk reactivated the Project Veritas account. Even a lie would be ashamed to be associated with Project Veritas. Not so Twitter and Musk.

Out with Twitter

I didn’t delete my two Twitter accounts, because bots and trolls can snap up a previously existing username 30 days after deletion. And I didn’t deactivate them, because deactivated accounts are deleted in 30 days. What I did was post a last note saying where to find me on Mastodon, set both accounts to private, and then walk away. I won’t even look at Twitter now, because doing so triggers ad impressions, and that gives Musk money. I don’t plan on ever giving that guy money, and I’m severely curtailing the amount of attention I give him.

I’ll miss the folks that stubbornly stay on Twitter, but they’ve made their choice, I’ve made mine, and someday maybe they’ll wise up.

On to Mastodon

In the meantime, my move to Mastodon has had ups and downs, but has ended up on an up. My choice of kickoff point on mastodon.social was a good one (@burningbird@mastodon.social) because the creator of Mastodon (Eugen Rochko), who is also the administrator of mastodon.social, is quite welcoming of new Twitter folks. No nonsense about content warnings.

Speaking of content warnings, I was told to use them, and told not to use them. My account on democracy.town was frozen, and I believe it was because I did use content warnings when cross-posting from Twitter. Yet I also got into a disagreeable argument with another person about not using them when cross-posting. A lose/lose.

Well, to hell with that server and any server administered by hypersensitive admins letting the power go to their heads. And to hell with other people’s CW demands.

Now, I use content warnings sparingly—primarily for larger posts or posts that contain what I deem to be sensitive material. If people don’t like it, they don’t have to follow me.

Mastodon and RSS

I did add some Mastodon stuff to my weblog. You’ll see a “post to Mastodon” button at the end of each story. And you’ll see my latest Mastodon entries in the footer. The latter comes from the RSS feed attached to each Mastodon account (mine: https://mastodon.social/@burningbird.rss).

The really nice thing about Mastodon having an RSS feed is you can follow a person’s Mastodon entries in the same RSS reader you use for weblogs. Pretty soon, we’ll no longer be able to tell the difference between a weblog and a micro-blog.
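Since every Mastodon account exposes its feed at the same predictable path, the feed URL can be derived from a handle with nothing but shell parameter expansion. A small sketch, using my own handle, and assuming the standard account-feed pattern:

```shell
# Derive a Mastodon account's RSS feed URL from its @user@server handle.
# Assumed pattern: https://SERVER/@USER.rss (the one mastodon.social uses).
handle="@burningbird@mastodon.social"
user="${handle%@*}"       # strip the trailing @server -> @burningbird
server="${handle##*@}"    # strip everything up to the last @ -> mastodon.social
rss_url="https://${server}/${user}.rss"
echo "$rss_url"
```

Point any feed reader at the resulting URL, and the account reads like any other weblog.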

Post to Mastodon

The post button is interesting (how-to). Unlike Twitter and Facebook, with their single centralized locations, each person on Mastodon is on a specific server, so you have to specify which server you’re on in the ‘toot this’ web page that opens. This is the nature of the federated beast. It’s no different than having a weblog or web page: you have to provide its unique URL when asked for it.

I also bookmarked the Toot dialog and use it when I post a link to Mastodon. I found using the dialog helps to trigger the link excerpt, while posting a link directly in Mastodon usually leaves the link as just a link.
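Under the hood, that dialog is nothing more than the /share page on your own instance, with the post text passed as a query parameter. A minimal sketch — the text here is a stand-in, and real text has to be percent-encoded before it goes into the URL:

```shell
# Build the "toot this" dialog URL for a given instance and post text.
# The /share endpoint is part of the Mastodon web UI; the instance varies
# per person, which is why the button has to ask for it.
instance="mastodon.social"
text="Hello%20from%20the%20weblog"   # pre-encoded stand-in text
share_url="https://${instance}/share?text=${text}"
echo "$share_url"
```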

The downside to using the Toot dialog is it logs me out of Mastodon, every time. This is a PITA when you’re using two-factor authentication.

Mastodon and Burningbird

My plan is to create my own Mastodon server, but I’m still working through how I want to do so. I can spin up another Linode for it, or try putting it on this server. There are Mastodon hosting sites that are attractive, if for no other reason than you have to have SMTP access (for email), and it will be a cold day in hell before I try to run an SMTP service again. But I’m leaning towards spinning up another Linode and then using a third-party SMTP service such as Gmail.
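If the Gmail route wins, Mastodon’s mailer is configured through a handful of SMTP_* variables in its .env.production file — roughly like this, with the login and app password as obvious placeholders:

```
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
SMTP_LOGIN=yourname@gmail.com
SMTP_PASSWORD=your-app-password
SMTP_FROM_ADDRESS=notifications@example.com
```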

The great thing about this federated universe is when I do create my own Mastodon instance, I can move all my follows/followers to it. I don’t believe I can move my posts to it, but really I don’t care about my older Mastodon posts. In fact, I’ve set my account up to delete them automatically after two weeks. Why burden mastodon.social with my old crap? I might be restoring my old weblog posts, but I don’t care about old Twitter/Facebook/Mastodon postings. These are just social media blurbs.

I do care about the people, so I don’t want to lose those connections.

When I do set up a Mastodon instance, I’ll spin you a tale of my trials and tribulations setting up a Ruby on Rails project. The one downside to Mastodon is that it’s Ruby on Rails, an environment I have no experience with. I may also install something like PixelFed, which at least is good, honest PHP and Node.

 

 

Categories
Technology

Moving servers

It was time for me to upgrade my version of Ubuntu, from 18.04 to 20.04. I upgraded my software, thought I had a clean site, and tried to upgrade in place. After all, upgrading from 16.04 to 18.04 was a simple one line command.

Moving from 18.04 to 20.04 was not so simple, and the upgrade failed. Time to do a manual build of a new server and port my site to it. Which also ended up being more complicated than I thought it would be.

Moving to 22.04

First, if I was going to go through all the work, I was going with the latest Ubuntu LTS: Jammy Jellyfish, otherwise known as 22.04. I spun up a new Linode instance of 22.04 and set to work.

The LAMP installation went very well. I ended up with not quite the latest Apache, since the absolute latest wasn’t supported on 22.04 yet. However, I added the famous Ondřej Surý repository and I was good to go:

sudo add-apt-repository ppa:ondrej/apache2 -y
sudo apt update && sudo apt install apache2

MySQL is 8.0.29 and PHP is 8.1.

All that certbot stuff

I had manually built a new server when I went from 14.04 to 16.04, but times have changed. That move was pre-HTTPS, pre-HTTP/2, pre-HSTS (HTTP Strict Transport Security), well, basically, pre-everything. I had the support in my existing server, so I know my pages and installation are clean. But the sheer amount of work to set it up again was a bit daunting.

Thankfully, since I had made these moves in the past, my site was already clean. All that I needed to worry about was installing certbot to manage my Let’s Encrypt digital certificates.

You’d think moving a server wouldn’t be that unusual, but neither Let’s Encrypt nor certbot cover what to do when your certificates are on one server and you need to set them up on another. Searching online gave me two options:

– copy everything and change symbolic links for the certificates

– just install new certificates on your new server, and delete the old

And that’s when things got sticky.

Where am I and who is managing my IP address?

When I made the move to 16.04, I was manually setting up my network configuration using ifupdown and editing the /etc/network/interfaces file. But when I went to 18.04, netplan was the new kid on the block for network configuration.

The problem is, I had one foot in both camps. So when I tried to access a test page on the new server, it failed. I certainly couldn’t run the certbot web validation for installing a new digital certificate if I couldn’t even serve a simple page on port 80.

In addition, Linode has the ability to manage network configuration for you automatically, so if you change servers and IP addresses, you don’t have to do a thing. But when I tried to turn it on, even SSH no longer worked. I had to restore the site from a backup.

It took a bit of painful digging around, but I finally moved my network configuration to netplan, and only netplan. I could now use SSH again, and install a new digital certificate for my test domain. But then, things got tricky again.
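For the record, a netplan-only setup is a short YAML file under /etc/netplan/, applied with `sudo netplan apply`. A sketch, not my literal file — the interface name and addressing will differ per Linode:

```
# /etc/netplan/01-netcfg.yaml (illustrative)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: true
```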

I hate the old propagation thing

When I created the new Linode server, I installed it in the Atlanta data center rather than the Dallas center I was using with the old. After all, Atlanta is now only a couple of hours away.

But doing so meant when I switched, I had to update my name registrar to set my DNS entries to the new server IP addresses. This is a pain, in itself, but it’s also a bit of a fragile time when trying to determine if my site will work on the new server. After all, you don’t want to permanently change your IP address only to find out your site doesn’t work, and then have to change it back. And digital certificates kind of mean you have to have all or nothing.

Thankfully, Linode had a peachy keen workaround: swap IP addresses. If two servers are in one data center, you can swap the IP address between them.

Of course, doing so meant I had to migrate my existing site to the new data center and change the DNS entries, but still, it would be worth it to be able to switch back and forth between servers when making major modifications. And the migration should be a painless button click from the Linode cloud control manager.

So, I migrated my old Linode VPS to Atlanta, and then tried to swap the IP addresses. Crash and burn.

IPv4 and IPv6

What I didn’t know about the Linode IP swap facility is that it only swapped the IPv4 address, not the IPv6 address. So when I did the following

ip addr

My IPv4 address reflected the new server, but my IPv6 address reflected the old, and everything was just broken. Again.

The only recourse at this point was to bite the bullet: make the move to the new server, wait out the DNS propagation, and then deal with the digital certificates. I put up a warning page that the site might be off for a time, had a coffee, and just made the move.

After the move, I thought about doing the Let’s Encrypt digital certificate copying like some folks recommended, but it seemed messy—sort of like the network configuration issue I had just cleaned up.

I used certbot to do a new installation, and the move was flawless. Flawless. This is really the only clean way to move your site to a new server when you’re using digital certificates:

– Make sure your site can support port 80, at least temporarily

– Use certbot to generate new digital certificates for your site(s)

– Delete the old server and certificates
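The middle step, in command form — printed here rather than executed, since certbot needs root and DNS already pointing at the new box, and the domain list below is a stand-in for your own:

```shell
# Assemble the certbot invocation for fresh certificates on the new server.
# Assumes the Apache plugin; use --nginx or --standalone as appropriate.
domains="example.com www.example.com"   # stand-in domain list
cmd="sudo certbot --apache"
for d in $domains; do
  cmd="$cmd -d $d"
done
echo "$cmd"   # what you'd actually run on the new server
```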

Five Years. I’m good for five years.

So here you are and here I am: on the new server with all new software on the data center closest to me, with clean, uncrufty network configuration and sparkly digital certificates.

Best of all?

Jammy Jellyfish has standard support until April 2027. I’m good for five years; ten if I want extended support. And who knows where I’ll be in ten years.

Probably back here, doing it all over again.

 

 

 

Categories
JavaScript Technology Writing

My Last O’Reilly Book

The editor for JavaScript Cookbook, 3rd edition, just sent me a draft to review and comment on. This is the last of the O’Reilly books that will feature my name on it. I wasn’t actively involved in it; my name is on it because they kept about 15% of my old content. New authors, Adam Scott and Matthew MacDonald, took on the majority of the work.

They did a good job, too. I particularly like the chapter devoted to error handling. The days of using an alert message to debug have been gone for decades, and JS developers now have sophisticated error-handling techniques and tools. It is no longer your Mom’s JS.

Good. I’m happy the language and its uses have evolved. But…

I sometimes miss the simpler days when an alert was your JavaScript best friend.

I’ve been working with JavaScript since it was first introduced over 25 years ago, and not just traditionally in the web page, either. I worked on a small project that used Netscape’s server-side JavaScript (LiveWire) in the 1990s. It was so very different from an application made with Node, as you can see for yourself from an old Dr. Dobb’s article on LiveWire.

Writing about JavaScript today requires a different mindset than writing about it years ago. Then, JavaScript was laughably simple. Its simplicity, though, was also its strength. In its infancy, JavaScript was so embraceable. You could literally toss a tiny blurb of JS into an HTML file, load it into a browser, and see the result immediately. You didn’t have to fiddle with compilation, install special developer tools, or figure out a framework.

My first book on JavaScript, the JavaScript How-To for Waite Group Press, was published in 1996. The hardest part of writing it was trying to find enough about JavaScript to actually fill a book.

JavaScript today, or ECMAScript if you want to be pure, is not so simple. And oddly enough, that’s its strength now: it is powerful enough to meet today’s very demanding web applications. And the hardest part of working on a book such as the JavaScript Cookbook is limiting yourself to the essentials, because you could easily write three or four books and still not encompass the world of JavaScript as it exists now.

When O’Reilly asked me to do a new edition of the Cookbook I knew I just didn’t want to take on that kind of burden again. It was hard to give up control of one of my favorite books, but after 25 years of working to deadlines and dealing with tech editors, I knew I just didn’t have the energy or patience to do the book justice. I knew it was time for me to hang up my drafts.

Thankfully, the new Cookbook team has done an exceptionally good job. I’m happy to have my name on the Cookbook, one last time.

If I write now, it’s solely for me: my ideas, my deadlines, my expectations. I may only write in this space. I may try my hand at self-publication.

Who knows? Maybe all I’ll do is write incendiary commentary in Facebook and Twitter and see how often I can get banned.