Categories
Social Media Technology

Testing WordPress/Mastodon ActivityPub integration

I just installed the ActivityPub plugin for WordPress. With it, you can follow posts here at Burningbird by following bosslady@burningbird.net. Supposedly, then, this post would show up on Mastodon as a new post.
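For the curious, the way another server finds the ActivityPub actor behind that address is a WebFinger lookup. A minimal sketch (the address is the real one from above; the splitting logic is just an illustration of the discovery step):

```shell
# How a Mastodon server discovers the ActivityPub actor behind an
# address: split it into user and host, then query the standard
# WebFinger endpoint on that host.
addr="bosslady@burningbird.net"
user="${addr%@*}"   # bosslady
host="${addr#*@}"   # burningbird.net
echo "https://${host}/.well-known/webfinger?resource=acct:${user}@${host}"
```

The JSON that endpoint returns points at the actor document the follower’s server then subscribes to.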

There’s also a plugin that would allow my WordPress weblog to subscribe to one or more folks on Mastodon and publish their posts here. However, I won’t publish someone else’s work in my space, and I already have code to print my Mastodon RSS entries in the footer. Still, new toy…

I’m still debating whether to install my own Mastodon instance. I don’t have much interest in learning Ruby/Rails, so the technology doesn’t interest me in and of itself. The ActivityPub protocol does interest me. Mastodon itself interests me, but I do have a comfortable home on mastodon.social.

If I do install Mastodon, then I have to decide whether to spin up a new Linode for it, or learn to live with WordPress/MySQL duking it out with Mastodon/PostgreSQL. And then there’s the email service. Rumor has it that Gmail is not Mastodon-friendly, so using it as a third-party email service won’t work. I don’t want to pay for an email service, so that would mean installing email support. And that’s just uglier than sin.

Decisions, decisions.

Categories
Burningbird Social Media Technology Weblogging

Mastodon and Burningbird

The social media upheaval continues, but things are starting to quiet down a bit. Oh, you can’t tell this from the media, which is full of stories leading with “Elon Musk says…”, but that’s primarily because the media hasn’t figured out how to wean itself off Twitter yet.

I quit Twitter the day that Musk reactivated the Project Veritas account. Even a lie would be ashamed to be associated with Project Veritas. Not so Twitter and Musk.

Out with Twitter

I didn’t delete my two Twitter accounts, because bots and trolls can snap up a previously existing username 30 days after deletion. And I didn’t deactivate them, because deactivated accounts are deleted in 30 days. What I did was post a last note saying where to find me on Mastodon, set both accounts to private, and then walk away. I won’t even look at Twitter now, because doing so triggers ad impressions, and that gives Musk money. I don’t plan on ever giving that guy money, and I’m severely curtailing the amount of attention I’m giving him.

I’ll miss the folks that stubbornly stay on Twitter, but they’ve made their choice, I’ve made mine, and someday maybe they’ll wise up.

On to Mastodon

In the meantime, my move to Mastodon has had ups and downs, but has ended up on an up. My choice of kickoff point on mastodon.social was a good one (@burningbird) because the creator of Mastodon (Eugen Rochko), who is also the administrator of mastodon.social, is quite welcoming of new Twitter folks. No nonsense about content warnings.

Speaking of content warnings, I was told to use them, and told not to use them. My account on democracy.town was frozen, and I believe it was because I did use content warnings when cross-posting from Twitter. But I also got into a disagreeable argument with another person about not using them when cross-posting. A lose/lose.

Well, to hell with that server and any server administered by hypersensitive admins letting the power go to their heads. And to hell with other people’s CW demands.

Now, I use content warnings sparingly—primarily for larger posts or posts that contain what I deem to be sensitive material. If people don’t like it, they don’t have to follow me.

Mastodon and RSS

I did add some Mastodon stuff to my weblog. You’ll see a “post to Mastodon” button at the end of a story. And you’ll see my latest Mastodon entries in the footer. The latter comes from the RSS feed appended to each account in Mastodon (mine: https://mastodon.social/@burningbird.rss).

The really nice thing about Mastodon having an RSS feed is that you can follow a person’s Mastodon entries in the same RSS reader you use for weblogs. Pretty soon, we’ll no longer be able to tell the difference between a weblog and a micro-blog.
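As a rough sketch of what a footer display involves: fetch the feed, pull out the entries. Here a tiny inline sample stands in for the real feed, which you’d fetch with curl (note that Mastodon’s actual feed may carry the post text in description elements rather than titles):

```shell
# Sketch: extract entry titles from an RSS feed for a footer listing.
# A real version would start with:
#   curl -s https://mastodon.social/@burningbird.rss
feed='<rss><channel>
<item><title>First toot</title></item>
<item><title>Second toot</title></item>
</channel></rss>'
printf '%s\n' "$feed" | sed -n 's:.*<title>\(.*\)</title>.*:\1:p'
```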

Post to Mastodon

The post button is interesting (how-to). Unlike Twitter and Facebook, there’s no one centralized location: each person is on a specific Mastodon server, so you have to specify which server you’re on in the ‘toot this’ web page that opens. This is the nature of the federated beast. It’s no different than having a weblog or web page and providing its unique URL when asked for it.

I also bookmarked the Toot dialog and use it when I post a link to Mastodon. I found that using the dialog helps trigger the link excerpt, while posting a link directly in Mastodon usually leaves the link as just a link.
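Under the hood, that dialog is just the instance’s /share endpoint with the post text in a query parameter. A sketch of the link a ‘toot this’ button builds (the text here is an example, and must be URL-encoded):

```shell
# Build the share link a "toot this" button opens. Federation means
# there's no central server: the reader's own instance goes in the URL.
instance="mastodon.social"        # the reader's home instance
text="Hello%20from%20Burningbird" # example post text, URL-encoded
echo "https://${instance}/share?text=${text}"
```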

The downside to using the Toot dialog is that it logs me out of Mastodon every time. This is a PITA when you’re using two-factor authentication.

Mastodon and Burningbird

My plan is to create my own Mastodon server, but I’m working through how I want to do so. I can spin up another Linode for it, or try putting it on this server. There are Mastodon hosting sites that are attractive, if for no other reason than you have to have SMTP access (for email), and it will be a cold day in hell before I try to run an SMTP service again. But I’m leaning towards spinning up another Linode and then using a third-party SMTP server such as Gmail.
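For what it’s worth, the SMTP piece of a self-hosted instance boils down to a few settings in Mastodon’s .env.production file. A sketch with placeholder values (the server, login, and addresses are all hypothetical; a real setup uses whatever the third-party relay provides):

```
# Hypothetical SMTP fragment of Mastodon's .env.production; every value
# is a placeholder for what the third-party relay actually provides.
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_LOGIN=mastodon@example.com
SMTP_PASSWORD=changeme
SMTP_FROM_ADDRESS=notifications@example.com
```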

The great thing about this federated universe is that when I do create my own Mastodon instance, I can move all my follows/followers to it. I don’t believe I can move my posts to it, but I really don’t care about my older Mastodon posts. In fact, I’ve set my account up to delete them automatically after two weeks. Why burden mastodon.social with my old crap? I might be restoring my old weblog posts, but I don’t care about old Twitter/Facebook/Mastodon postings. Those are just social media blurbs.

I do care about the people, so I don’t want to lose those connections.

When I do set up a Mastodon instance, I’ll spin you a tale of my trials and tribulations setting up a Ruby on Rails project. The one downside to Mastodon is that it’s Ruby on Rails, an environment I have no experience with. I may also install something like Pixelfed, which at least is good, honest PHP and Node.

Categories
Technology

Moving servers

It was time for me to upgrade my version of Ubuntu from 18.04 to 20.04. I upgraded my software, thought I had a clean site, and tried to upgrade in place. After all, upgrading from 16.04 to 18.04 was a simple one-line command.

Moving from 18.04 to 20.04 was not so simple, and the upgrade failed. Time to do a manual build of a new server and port my site to it. Which also ended up being more complicated than I thought it would be.

Moving to 22.04

First, if I was going to go through all the work, I was going with the latest Ubuntu LTS: Jammy Jellyfish, otherwise known as 22.04. I spun up a new Linode instance of 22.04 and set to work.

The LAMP installation went very well. I ended up with not quite the latest Apache, since the absolute latest wasn’t supported on 22.04 yet. However, I added the famous Ondřej Surý repository and I was good to go:

sudo add-apt-repository ppa:ondrej/apache2 -y

MySQL is 8.0.29 and PHP is 8.1.

All that certbot stuff

I had manually built a new server when I went from 14.04 to 16.04, but times have changed. That move was pre-HTTPS, pre-HTTP/2, pre-HSTS (HTTP Strict Transport Security), well, basically, pre-everything. I had all of this support on my existing server, so I knew my pages and installation were clean. But the sheer amount of work to set it up again was a bit daunting.

Thankfully, since I had made these moves in the past, my site was already clean. All that I needed to worry about was installing certbot to manage my Let’s Encrypt digital certificates.

You’d think moving a server wouldn’t be that unusual, but neither Let’s Encrypt nor certbot covers what to do when your certificates are on one server and you need to set them up on another. Searching online gave me two options:

– copy everything and change symbolic links for the certificates

– just install new certificates on your new server, and delete the old

And that’s when things got sticky.

Where am I and who is managing my IP address?

When I made the move to 16.04, I was manually setting up my network configuration using ifupdown and editing the /etc/network/interfaces file. But when I went to 18.04, netplan was the new kid on the block for network configuration.

The problem is, I had one foot in both camps. So when I tried to access a test page on the new server, it failed. I certainly couldn’t run the certbot web validation for installing a new digital certificate if I couldn’t even serve a simple page on port 80.

In addition, Linode has the ability to manage network configuration for you automatically, so if you change servers and IP addresses, you don’t have to do a thing. But when I tried to turn it on, even SSH no longer worked. I had to restore the site from a backup.

It took a bit of painful digging around, but I finally upgraded my network configuration to netplan, and only netplan. I could now use SSH again, and install a new digital certificate for my test domain. But then, things got tricky again.
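For the record, a netplan-only setup is a single YAML file, with ifupdown and /etc/network/interfaces gone entirely. A sketch (the file name and interface name are typical defaults, not necessarily what Linode generates):

```yaml
# Hypothetical /etc/netplan/01-netcfg.yaml: DHCP for IPv4 and IPv6,
# everything managed by netplan and nothing else.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: true
```

After editing, `sudo netplan try` tests the configuration and rolls it back if you lose connectivity, which matters when SSH is your only way in.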

I hate the old propagation thing

When I created the new Linode server, I installed it in the Atlanta data center rather than the Dallas center I was using with the old one. After all, Atlanta is now only a couple of hours away.

But doing so meant that when I switched, I had to update my name registrar to point my DNS entries at the new server IP addresses. This is a pain in itself, but it’s also a fragile time when you’re trying to determine whether your site will work on the new server. After all, you don’t want to permanently change your IP address only to find out your site doesn’t work, and then have to change it back. And digital certificates kind of make it all or nothing.

Thankfully, Linode had a peachy keen workaround: swap IP addresses. If two servers are in one data center, you can swap the IP address between them.

Of course, doing so meant I had to migrate my existing site to the new data center and change the DNS entries, but still, it would be worth it to be able to switch back and forth between servers when making major modifications. And the migration should be a painless button click from the Linode cloud control manager.

So, I migrated my old Linode VPS to Atlanta, and then tried to swap the IP addresses. Crash and burn.

IPv4 and IPv6

What I didn’t know about the Linode IP swap facility is that it only swaps the IPv4 address, not the IPv6 address. So when I ran the following:

ip addr

My IPv4 address reflected the new server, but my IPv6 address reflected the old, and everything was just broken. Again.

The only recourse at this point was to bite the bullet, make the move to the new server, do the DNS propagation, and then deal with the digital certificates. I put up a warning page that the site might be off for a time, had a coffee, and just made the move.

After the move, I thought about doing the Let’s Encrypt digital certificate copying like some folks recommended, but it seemed messy—sort of like the network configuration issue I had just cleaned up.

I used certbot to do a new installation, and the move was flawless. Flawless. This is really the only clean way to move your site to a new server when you’re using digital certificates:

– Make sure your site can support port 80, at least temporarily

– Use certbot to generate new digital certificates for your site(s)

– Delete the old server and certificates
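The middle step is a single command, assuming the Apache plugin; certbot answers the port-80 HTTP challenge and wires the new certificates into the Apache configuration itself. A sketch, with the domains as examples rather than the exact invocation:

```shell
# Generate fresh certificates on the new server; certbot serves the
# port-80 challenge and updates the Apache TLS config on success.
sudo certbot --apache -d burningbird.net -d www.burningbird.net
```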

Five Years. I’m good for five years.

So here you are and here I am: on the new server with all new software on the data center closest to me, with clean, uncrufty network configuration and sparkly digital certificates.

Best of all?

Jammy Jellyfish has standard support until April 2027. I’m good for five years; ten if I want extended support. And who knows where I’ll be in ten years.

Probably back here, doing it all over again.

Categories
JavaScript Technology Writing

My Last O’Reilly Book

The editor for JavaScript Cookbook, 3rd edition, just sent me a draft to review and comment on. This is the last of the O’Reilly books that will feature my name on it. I wasn’t actively involved in it; my name is on it because they kept about 15% of my old content. New authors, Adam Scott and Matthew MacDonald, took on the majority of the work.

They did a good job, too. I particularly like the chapter devoted to error handling. The days of using an alert message to debug have been gone for decades, and JS developers now have sophisticated error-handling techniques and tools. It is no longer your Mom’s JS.

Good. I’m happy the language and its uses have evolved. But…

I sometimes miss the simpler days when an alert was your JavaScript best friend.

I’ve been working with JavaScript since it was first introduced over 25 years ago. Not just traditionally in the web page, either. I worked on a small project that used Netscape’s server-side JavaScript (LiveWire) in the 1990s. It was so very different from an application made with Node, as you can see for yourself from an old Dr. Dobb’s article on LiveWire.

Writing about JavaScript today requires a different mindset than writing about it years ago. Then, JavaScript was laughably simple. Its simplicity, though, was also its strength. In its infancy, JavaScript was so embraceable. You could literally toss a tiny blurb of JS into an HTML file, load it into a browser, and see an immediate result. You didn’t have to fiddle with compilation, install special developer tools, or figure out a framework.

My first book on JavaScript, the JavaScript How-To for Waite Group Press, was published in 1996. The hardest part of writing it was trying to find enough about JavaScript to actually fill a book.

JavaScript today, or ECMAScript if you want to be pure, is not so simple. And oddly enough, that’s its strength now: it is powerful enough to meet today’s very demanding web applications. And the hardest part of working on a book such as the JavaScript Cookbook is limiting yourself to the essentials, because you could easily write three or four books and still not encompass the world of JavaScript as it exists now.

When O’Reilly asked me to do a new edition of the Cookbook I knew I just didn’t want to take on that kind of burden again. It was hard to give up control of one of my favorite books, but after 25 years of working to deadlines and dealing with tech editors, I knew I just didn’t have the energy or patience to do the book justice. I knew it was time for me to hang up my drafts.

Thankfully, the new Cookbook team has done an exceptionally good job. I’m happy to have my name on the Cookbook, one last time.

If I write now, it’s solely for me: my ideas, my deadlines, my expectations. I may only write in this space. I may try my hand at self-publication.

Who knows? Maybe all I’ll do is write incendiary commentary on Facebook and Twitter and see how often I can get banned.

Categories
Technology

Google Fi and Pixel 4a: How Google can you get?

I had AT&T mobile service for years. I also had Samsung phones.

After moving to Georgia, we decided to try something new. We were paying too much to AT&T for two phones for two people who don’t use a lot of data. I also wasn’t interested in paying the price of a small farm in a third-world country for a phone.

I thought about going with Xfinity Mobile since I have Xfinity for internet. However, the company’s systems were so terribly broken I decided to escape while the escaping was good. We also looked at OnePlus 8 Pro phones, but support for them is still sketchy.

Google Fi

About this time I stumbled across Google Fi, which I hadn’t heard about previously. Fi is an MVNO, or mobile virtual network operator. Google doesn’t actually have a physical network. Instead, it channels three different networks it has agreements with: T-Mobile, Sprint, and U.S. Cellular. In my area, I have access to all three, including T-Mobile’s 5G network.

Where Google Fi differs from other MVNOs is that it will silently switch you between carriers depending on signal strength. So, if I move out of T-Mobile’s coverage into U.S. Cellular’s, it will switch me over and I won’t even know it.

This silent switching occurs if, big if, you have a compatible phone. So this takes us to the phones.

The Pixel

I had already decided to look at the Google Pixel when I shopped for a new phone. Specifically: the Google Pixel 4a 5G. They’re a good mid-priced option, you know you’ll get the first upgrades, and you know exactly when support for the phone will expire. Google can get wonky when it comes to their products and services, but they seem to be committed to the Pixel phones. For now.

The best thing about Google Pixels and Google Fi is they’re made for each other. The phone silently switches between data networks, and I’ve rarely had problems with connections. Best of all, most of the connections have been 5G. Not that I care about 5G, other than being a cool kid.

Fractional GB and eSIM

Our data networks don’t matter that much because we rarely use mobile data. When I’m out of the house, I’m not on the phone. I might use Google Maps and the phone camera, but I’m not going to check into Facebook to see what everyone is doing. When I’m away from my computer, I want to be away from the computer.

This leads us to why I went with Google Fi: the service only charges you for the exact amount of data you use. Last month, it cost me $0.97 for the tiny bit of data we used. Of all the options, Google Fi is actually the cheapest we could use.

Setup was easy, too, because we decided to go with new phone numbers to match our new location. We didn’t have to mess with SIM cards. We didn’t have SIM cards at all: the Pixel and Google Fi support eSIM, a software-based SIM. The only difficulty was changing the mobile number for all the two-factor authentications for many of my services.

What’s interesting is that I can get a SIM card for another service and actually switch between Google Fi and that service on the same phone. I wouldn’t, but I could.

Web privacy and the philosophy of licorice

Of course you’re all thinking now: oh my god, Google is really tracking her!

Of course it is. I have an Android phone; Google is always going to be tracking me. Now, though, I don’t have Samsung and AT&T joining in the fun. I figure I saved over 10GB of space by not having their crapware on my phone. The Google stuff was going to be there because of Android, regardless. To me, it’s just no big thing.

My philosophy about companies tracking us is that they don’t care about any of us as individuals: we’re just a collection of related data they can use to sell us something. I used to use ad blockers and other technologies to try to hide my movements, until I realized I was spending more time doing all of this than I would just ignoring what each company is pushing at me.

It really hit home when I mentioned licorice in a Facebook post once. Most of the ads that popped up after that were selling licorice. Not just in Facebook. The licorice ads showed up everywhere.

It led me to what I call my licorice philosophy: You can control what companies know about you by giving them exactly the type of data they want.

So, now I control what type of ads I get (and what companies learn about me) by giving the data machines exactly the data they want to get. What does Google know about me? I like food, I like critters, I’m interested in arborvitae and 3D printers. I don’t like Trump. I like licorice.

It’s become a wonderfully fun game, watching the ads change from site to site. Some days I have a bit of fun and click oddly disparate items and watch the ad machines bust into convoluted exercises trying to hit all the areas of interest at once.

It reminded me of a group team-building exercise I attended when I worked at Boeing. Each table of people was a team. Each team competed against the others. We were supposed to come up with tasks that were difficult for the other teams.

What a pain. I convinced my table that we could have more control over the outcome if instead of making it hard for the other team, we made it super easy. The other team caught on quick, and they started doing the same. Between them and us, we ended up first and second, had a blast, and really pissed off the person leading the exercise.

Fun times.

Here I am Google, Take me I’m yours

The service and the phones have been working quite well. The only issue I had for a time was that Google Assistant was answering the phone for me and taking messages. I finally was able to persuade it to let the non-spam calls through.

I fed it the spam, though. It was happy.