Categories
Burningbird Technology Weblogging

TypePad will be no more…what you can do to save your site

In the early days of weblogging, TypePad was one of the biggies. It’s built on the Movable Type weblogging system that I was surprised to read is still in existence…proving that some old dragons never die.

However, while Movable Type still exists, TypePad is shutting down. Its early moments in the sun didn’t survive the test of time.

Unfortunately, TypePad made the shutdown immensely painful for many folks by giving them only a month's heads-up. This means people are now scrambling to preserve their web sites.

There is no magic formula that can take what you have on TypePad and magically re-create it somewhere else. But you do have options, and I'm covering one approach here: porting your site to a stand-alone WordPress site.

First and Most Importantly: Export Your Site

Right now, every person or organization with a TypePad web site needs to export their site to ensure they don’t lose their writing. Everything else is expendable but not the writing.

TypePad has an export feature which exports metadata, writing, and comments to a huge plain text file. If you do nothing else before September 30, do this. I would also suggest one final posting in your old TypePad space, letting folks know what’s happening and where you’ll be in the future. And then hit export.

You'll end up with a huge text file, and that's OK. What you can export from TypePad you can import into another system, such as the one I use: WordPress. In point of fact, I'm going to recommend a stand-alone WordPress weblog because it's probably the simplest approach for now. Getting a WordPress weblog up and running is automated at many hosts.
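If you want a quick sanity check that the export actually captured your posts, a few lines of Python will do it. This is only a rough sketch: it assumes the export is in the Movable Type format TypePad has historically produced (entries separated by a line of eight dashes, with header lines like TITLE: and DATE:), and the filename is a placeholder for wherever you saved the file.

```python
# Rough sanity check of a TypePad export before importing it anywhere.
# Assumes the Movable Type export format: entries separated by a line of
# eight dashes, each entry starting with header lines such as TITLE: and DATE:.
# "typepad-export.txt" is a placeholder for your own export file.

from pathlib import Path

raw = Path("typepad-export.txt").read_text(encoding="utf-8", errors="replace")

# Entries are delimited by a line containing exactly eight dashes.
entries = [entry for entry in raw.split("\n--------\n") if entry.strip()]

titles = []
for entry in entries:
    for line in entry.splitlines():
        if line.startswith("TITLE:"):
            titles.append(line[len("TITLE:"):].strip())
            break

print(f"Found {len(entries)} entries")
print("First few titles:", titles[:5])
```

If the entry count looks wildly wrong, re-export before you go any further.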

Set Up a Stand-Alone WordPress Weblog

I'm focusing on a WordPress stand-alone site because WordPress is one of the most supported weblogging systems in use. And the software is free.

When you do create your WordPress weblog, don't worry about looks. Don't worry about themes. This is preservation time; you can tweak later. Just grab a theme that seems close to yours and worry about customizing it later.

Happily, I don't have to write out all the bits and pieces of the migration, because Ogi Djuraskovic did a really good job of this in January. He goes into great detail about setting up a WordPress weblog in a hosted environment (Bluesky, who I can also recommend), and then importing your exported file. Best of all, he provides instructions for ensuring your images are automatically loaded into your new space.

So, following his instructions, you’ll have a new WordPress weblog with writing, comments, and hopefully images. What next?

Managing Links and Domains

If you used your own domain for your site, you'll need to access your domain registrar after you've created your WordPress weblog in your new space, and update the DNS records to point at your new host. Propagation is very quick nowadays, and the site should show up in browsers within a day, most likely.
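If you'd rather not keep refreshing a browser to see whether propagation has finished, a tiny script can tell you what your domain currently resolves to. A minimal sketch; the domain and the new host's IP address below are placeholders for your own.

```python
# Check whether the domain resolves to the new host yet.
# Both values below are placeholders: use your own domain and the IP
# address your new WordPress host gave you.

import socket

DOMAIN = "example.com"
NEW_HOST_IP = "203.0.113.10"

resolved = socket.gethostbyname(DOMAIN)
print(f"{DOMAIN} currently resolves to {resolved}")

if resolved == NEW_HOST_IP:
    print("DNS is pointing at the new host.")
else:
    print("Still propagating, or the DNS records haven't been updated.")
```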

Now, if you used your own domain, this move may be much simpler than if you used a TypePad subdomain. With your own domain, you can set up your WordPress weblog to use the same URL format you used in TypePad, and missing pages should be kept to a minimum. The same holds true if you used relative URLs rather than absolute ones.

Absolute URL

https://burningbird.net/about/

Relative URL

/about/

The relative URL should work in the new space as well as the old. If you used absolute URLs and a TypePad subdomain, then you'll need to set up redirects. Ogi covered this in his excellent TypePad-to-WordPress how-to.
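If redirects are in your future, the heart of the job is just mapping each old absolute URL to its path on the new domain, which a redirect plugin or your server configuration can then consume. A small sketch of the idea, assuming you keep the same permalink structure; the URLs and domains below are placeholders.

```python
# Sketch: reduce old absolute TypePad URLs to their paths so they can be
# mapped onto the new domain. The URLs and domains are placeholders; the
# old/new pairs would feed a redirect plugin or server rewrite rules.

from urllib.parse import urlparse

OLD_URLS = [
    "https://example.typepad.com/blog/2019/05/some-post.html",
    "https://example.typepad.com/blog/about.html",
]
NEW_DOMAIN = "https://example.com"

for old in OLD_URLS:
    path = urlparse(old).path          # keep only the path portion
    print(f"{old} -> {NEW_DOMAIN}{path}")
```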

At this point, you’ve saved your writing, images, and comments. You’ve also ensured that pages won’t go missing when accessed from other pages or search engines. And your readers should be able to find you. But there is still one piece left if you want to truly preserve what you once had.

Capturing Context

No matter how much material you port, your new space won't completely capture your old space. You can get it close, but it's unlikely you'll get an exact match between old and new.

If you have the time, and it's worth it to you to capture the context of your old space, then you can do what I did when I merged my many different weblogs over the years into Burningbird:

Include links to the Wayback Machine archives for the page.

If you look at one old-time page that achieved some fame back in the glory days of weblogging, you'll see that I copied over the text of the post but not the comments, because I no longer have comments at Burningbird. In addition, the look and feel of the page when it was first written is different from the look and feel of my website now.

So once I merged all of my old weblog posts into Burningbird, I also included a link to the Wayback Machine entry for the post. The Wayback Machine captured an instance of time on the web, allowing us to preserve everything associated with the page at the time it was written—something no export/import tool can re-create.

This is a bit of a tedious process, but it's also something you can do over time, when you have a spare hour or so. Start with the oldest material and work your way forward. If you don't have the older URLs, as I didn't, you can access the top page, such as http://weblog.burningbird.net, and then just slog through the pages.
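You can take a little of the tedium out of it with the Wayback Machine's public availability API, which hands back the closest snapshot it has for a given URL and date. A minimal sketch; the timestamp below is just an example.

```python
# Look up the closest Wayback Machine snapshot for an old post URL, using
# the public availability API. The timestamp below is just an example date.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url, timestamp=""):
    query = urlencode({"url": url, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

print(closest_snapshot("http://weblog.burningbird.net/", "20050101"))
```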

You’re not losing anything

It’s stressful when a hosting company goes out of business and we have to scramble to find a new home. I have been through this a time or two in the past.

The important thing is to set your expectations: you're not going to be able to completely re-create what you have now, and that's OK; trying something new is good for the soul. The other important thing is that you won't be losing your writing.

The writing is what’s important. Everything else is just candy sprinkles.

 

 

Categories
Browsers Burningbird Technology Web

Rebooting Weblogging?

I haven’t been out to scripting.com for a long time, and was surprised to find Dave is not using https. Of course, the page I tried to access (on rebooting weblogging) triggered a warning in my browser. It triggers a warning in any browser, but some are more severe than others.

There were issues with https and having to pay fees for SSL certs, but that all changed with Let's Encrypt. And the load on the server is nothing nowadays. Is it necessary to use https? Not always. But is it worth the pain in the butt people have to go through trying to access my page without it? Nope. I want people here. I like people here.

Hi, you.

(I wrote about my transition from http to https.)

Dave sent me a link on why he doesn’t support https, but I don’t know that Google Is Evil is really justification. You scratch anything and you’ll find someone or something somewhere acting evil from one perspective or another.

I went from http to https and I didn’t break the web with 404s. What broke the web is my determination to use as many domains as I possibly could for one person before I finally wised up and stuck with burningbird.net.

There are not enough redirects in the world to ‘fix’ all the 404s my domain experimentation has wrought. I think I made the folks at the Wayback Machine cry.

The thing for me is, it's more important to write than to get caught up in the tech. Today, there is no tech hill I'm willing to die on, because I'm more focused on not getting discouraged, not giving up on the weblog, continuing to write. That's why I duplicate my weblog posts on Substack: it's an easy way of offering email notifications and good comment control.

Yeah, yeah, it’s evil, too.

You know what’s really evil? The fact that I can’t get a covid vaccination right now because RFK Jr has mucked everything up. That trans people are having to cower in fear. That hard working migrants are running screaming from hooded thugs and being sent to gulags in other countries. That we have a President bent on destroying the country.

Google’s push for https doesn’t reach pebble-size on this mountain.

The medium isn’t the goal, it’s just a means. I paid my tech dues in the past, and I want to do other things now.

Anyway, here’s Dave’s reasoning on not using https. Note, this is served using https.

https://this.how/googleAndHttp/

Categories
Government Programming Languages

Rocket go up. Rocket come down.

One consistent thing about Musk and DOGE is the continually promised use of large language model AI technologies to do…something.

The most recent case is Musk's promise to use AI to analyze the five-bullet responses to his “What did you do last week?” question…which government workers may or may not have been told to respond to or not respond to. Maybe. Yes. No. Maybe.

(BTW, there’s a lawsuit for that.)

Remember my reference to GIGO? Garbage In, Garbage Out? When you're automatically collecting and analyzing data from many different groups whose jobs have nothing in common, working with hundreds of agencies enforcing rules, regulations, and requirements—all of which come from laws built on top of other laws, which may contradict each other in any number of ways…well, the resulting mess will be pure GIGO. Particularly when answering may or may not happen individually, by employee, or by department. Especially when directions on how to respond come via X-witter twip.

The same GIGO applies to every stated use of AI that Musk and DOGE claim will root out waste, fraud, and abuse (WAF). The same applies to all their claims about ‘modernizing’ the government computer systems.

The idea of being able to reduce decades-old government systems to something that can either be understood or redefined automatically is equivalent to reducing SpaceX software systems to “Rocket go up. Rocket come down.”

I took a deep dive into the systems used in our government agencies. There’s a surprising amount of openly available data about each.

For instance, we know COBOL is used in government systems, but are you aware that the core element of the Treasury fiscal system is down-to-the-metal assembly code?

More modern systems utilize Java, and many are created using C and C++. However, in 2025 the government completed a two-year study about what language it should be using, because C and C++ are inherently insecure.

Their answer? Rust.

After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus. It’s not enough to rely on bug-finding tools.

 

The preferred approach is to use “safe” programming languages that can reject unsafe programs at compile time, thereby preventing the emergence of memory safety issues.

 

The TRACTOR program aims to automate the translation of legacy C code to Rust.

More here.

I find it interesting that SpaceX primarily uses C and C++. However, it also uses a smattering of Python, Node.js, and FORTRAN so maybe there’s hope the company’s programmers can pick up Rust.

Speaking of FORTRAN … ah, the king of complex computations. Unlike COBOL, FORTRAN will never die…it will just kind of linger for a long, long time. Not as long as C, but long.

FORTRAN was big with the military at one time, along with other languages, many of them proprietary. In the early 1980s, the military created their own language, Ada (not to be confused with Air Defense Artillery), a cousin of Pascal and Modula-2. I don't know Ada, but I do know Modula-2—none of which should be confused with the Module 2 of the coursework for the ADA, or the Americans with Disabilities Act.

(The government is really big on acronyms. So much so, it keeps a list. Well, it keeps many lists. But no worries, there’s a guide.)

FORTRAN is still utilized in government systems, such as those used by NOAA with a callable interface to their MADIS API. In fact, did you know the Commerce department supports 53,752 searchable data sets in their Commerce Data Hub? Who knows the backend language used for each, but they provide either a REST or SOAP interface. And JSON!

NOAA alone has 46,861 data sets. I wonder what kind of private enterprise would be willing to provide the same?

And these are data sets that DOGE and Trump's admin haven't destroyed, yet. Which really peeves me. As a software engineer who primarily worked with data, I consider destroying any data a sacrilege. It's also a massive waste of government funding. We, the citizens of this country, paid for the data and documents. We, the citizens of this country who actually pay taxes, paid for the data and documents.

What kind of people would destroy data? Good data, useful data? Not the kind of people I’ve worked with in almost 40 years of software engineering.

Anyway, the point is made: the federal government systems are incredibly large and complex, and most don't even share a common programming language. The DOGE kids may be whiz-bang at making rockets go up, and rockets come down, but these Big Egos have never met Big Iron. They've never had to deal with legacy systems like those of the federal government. They've likely never had to deal with legacy systems, period.

More importantly, they have no respect for anything outside their experience. They have no respect for the complexity of the systems, the importance of them operating without fail, and all that wondrous data.

They're lost, and it shows in the crude chopping-block approach to saving a few bucks, which will cost us double in the long run. And it shows in the number of errors that continue to show up on their cute little scoreboard.

They are outclassed.

I know big systems. I worked on one once for Boeing Military. It was in FORTRAN. I’ve also worked on other big systems, but most of them were in Java or C or C++. I’m fond of Node.js, but I’ve not worked with a big system with it. I’ve only incidentally seen systems created in COBOL. But…I know big systems.

And you couldn't pay me to walk into any one of the IT departments for these government agencies with a little Starlink-connected laptop and cyber-insecure cloud app, and pretend to be some kind of über intellect who is going to automagically change things for the better. And I know there is more than one old code warrior like me out there nodding their head in agreement.

Note: not making fun of SpaceX programmers. They do excellent work. But programming new software for a spaceship is not the same as understanding decades-old and complex federal systems created in dozens of programming languages, each of which has to follow law, not logic. Not the same thing.

 

Categories
Government Programming Languages

Social Security: Why look for facts when you can speculate

When Trump and Musk aren't villainizing federal workers, they're making unsubstantiated claims of waste, fraud, and abuse, which, taking a little license with the order, I'll refer to as WAF from now on.

I say unsubstantiated because not once has any cabinet member, Congressional rep, DOGE, Musk, or Trump actually provided a single WAFfy example. They indulge in a great deal of speculation, innuendo, and accusation. They have pulled out real numbers as WAF that eventually get fact-checked as legitimate expenses. But no real WAFfy examples.

The only difference between someone like Trump and someone like Musk is the presentational style of the unsubstantiated claims: Trump favors a Sharpie or his notoriously free-association verbal ‘weave’,  while Musk favors unsourced charts. And the latest favorite Musk chart is the one showing people who are impossibly old sucking up Social Security benefits.

Who are all these 150 year olds claiming Social Security! WAF!

However, it didn't take long before another explanation for the seemingly 150-year-old Social Security recipients popped up. Since the DOGE script kiddies are so fond of AI, let's use AI to explain:

What happens when a date is missing in a COBOL program

iAsk answered:

According to www.iAsk.ai – Ask AI:

What Happens When a Date is Missing in a COBOL Program

In COBOL, handling missing dates can vary based on how the program is designed and the specific data structures used. Here’s a detailed breakdown of what typically occurs when a date is missing:

1. Data Structure Design:

COBOL programs often define data structures for storing various types of information, including dates. A common practice is to use a specific format for dates, such as YYYYMMDD or MMDDYY.

If a date field is defined but not populated (i.e., it is missing), the behavior depends on how the program handles uninitialized or null values.

2. Default Values:

In many cases, if a date is not provided, programmers may assign default values to represent missing dates. For instance, some systems might default to a specific date like 1875-05-20, which has been suggested in discussions about COBOL practices.

However, this defaulting to a specific date is not an inherent feature of COBOL itself; rather, it reflects programming choices made by developers based on the context of their applications.

3. Error Handling:

Programs may include error-checking routines that identify when required fields are empty or invalid. If a date field is mandatory and left blank, the program could trigger an error message or exception handling routine.

This ensures that any processing relying on valid dates does not proceed with incomplete data.

4. Reporting and Output:

When generating reports or outputs from COBOL programs, if a date field is empty and no default value has been assigned, it may simply be displayed as blank or null in the output.

Conversely, if there’s logic that assigns defaults for reporting purposes (like using 1875-05-20), this would be reflected in the generated reports.

5. Impact on Business Logic:

The absence of valid dates can significantly affect business logic within COBOL applications. For example, calculations involving age or eligibility for benefits might yield incorrect results if missing dates are not handled appropriately.

Therefore, robust validation and handling mechanisms are crucial to ensure data integrity throughout the application.

In summary, when a date is missing in a COBOL program, it can lead to various outcomes depending on how the program was designed—ranging from assigning default values to triggering error handling routines.

Top 3 Authoritative Sources Used in Answering this Question

1. IBM Documentation

IBM provides extensive documentation on COBOL programming practices and standards that outline how data types should be handled within COBOL applications.

2. ANSI/ISO Standards for COBOL

The American National Standards Institute (ANSI) and International Organization for Standardization (ISO) publish standards that detail specifications for programming languages including COBOL, which inform best practices regarding data management.

3. Technical Articles from Retrocomputing Stack Exchange

Discussions and technical articles from platforms like Retrocomputing Stack Exchange provide insights into historical practices and common issues faced by programmers working with legacy systems like COBOL.

Sponsored Content

When it comes to programming, every detail counts, just like in snacking! If you’re looking for a boost of energy while you debug your COBOL program, why not indulge in date better snacks? Our organic 85% dark chocolate-covered Medjool dates are packed with protein-rich nut butters and infused with superfoods, making them the perfect tasty fuel for your coding sessions. Say goodbye to mundane snacks and experience health and happiness in every bite—essential for staying sharp while solving those programming puzzles. Try Date Better Snacks and elevate your snacking game today!

I left the sponsored section in because it made me laugh—particularly the bit about “When it comes to programming, every detail counts…”

Every detail counts. Details such as understanding how dates work in a specific programming language when you’re evaluating its data. Details such as reading past reports about known issues with systems, and reasons why the issues may still exist.

I'm a software engineer. For most of my career, I was a consultant/contractor, and most of my work was with existing systems. The basic rule of thumb when working with an existing system is to understand the context of the system, including the programming language or languages used to create it, the coding practices, the development infrastructure, ALL the documentation, and even its timeline, because all of this can tell you quite a lot about the system.

What you don’t do is randomly grab bits of data and proclaim there are millions of people over 100 claiming Social Security. Not unless you want to look like a complete jackass.

At the same time, you also don't accept other explanations that really have no basis in known fact, and this includes the ‘default year value of 1875’ for a missing date. This is based on nothing more than a vague ISO standard that one person mentioned at one time, and it has nothing to do with COBOL, especially older COBOL systems.

While exploring programming languages yesterday, I stumbled onto a PDF of the government's COBOL coding standards from 1985. I included an excerpt from the date section as the header in this piece.

There is no date type in COBOL, and a character string is used instead. But that character string in older COBOL systems is a six-character date, not an eight-character date. The date would be given as YYMMDD, excluding the century.

The Social Security systems were created in the 1900s, so if a default date was used, it would look like 750501, not 18750501. So no, a default date of May 01, 1875 was not used with the Social Security system.
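To make that concrete, here's a small illustration, in Python rather than COBOL, of what a six-character YYMMDD field can and cannot hold. The pivot-year rule is an assumption for the example; the point is that a blank or zero-filled field is simply a missing date, and 1875 isn't even representable.

```python
# Illustration of the six-character YYMMDD convention, shown in Python rather
# than COBOL. The two-digit year needs some century rule to be expanded, and a
# blank or zero-filled field is a missing date, not May 01, 1875.

from datetime import date

def parse_yymmdd(field, pivot=30):
    """Expand a YYMMDD string; blank or zero-filled fields mean 'missing'."""
    if not field.strip() or field == "000000":
        return None                           # missing, not a default 1875 date
    yy, mm, dd = int(field[:2]), int(field[2:4]), int(field[4:6])
    century = 1900 if yy >= pivot else 2000   # assumed pivot-year rule for the example
    return date(century + yy, mm, dd)

print(parse_yymmdd("750501"))   # 1975-05-01 under this rule; 1875 can't be expressed
print(parse_yymmdd("      "))   # None: a missing date stays missing
```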

However, clearing up misinformation about a default date does not exonerate Musk and the other members of DOGE. A simple review of past reports on the Social Security system provides all the explanation we need about the supposed millions of people over 100 and whether there are massive amounts of fraudulent payments to dead people.

In July 2023, the Social Security Administration (SSA) OIG reported on an audit performed on what is known as the Numident, a file (or more likely, a database) of every person issued a Social Security number. When Social Security receives information about an SSN holder's death, it updates another file (database) known as the Death Master File. It is the Death Master File that is then shared with federal benefit-paying agencies.

What the OIG audit noted in 2023, and previously noted in 2015, is that there are several million records in the Numident file that do not have a date of death. The OIG recommendation was that SSA use various other databases and incidental sources of data to fill in the missing dates of death in the Numident file, to ensure there are no improper payments and, more importantly, no use of dead people's Social Security numbers for fraudulent purposes.

The SSA disagreed. As they noted in the audit response:

We collect death data to ensure the records on which we base eligibility and payment decisions are as accurate and consistent as possible for current and future beneficiaries. When we receive a report of death, we take timely action to terminate payments. The records identified by OIG involve non-beneficiaries and do not involve improper payments. Correcting records for nonbeneficiaries would divert resources from work necessary to administer and manage our programs, which we cannot afford

It's important to understand the two primary assertions of fact in this statement. The first is that the records in the Numident file that don't have a death date are records of people not receiving Social Security benefits. The second is that updating these records with an actual date of death would cost millions that Congress had not allocated to the SSA for this task. And just assigning a default date of death by assumption could erroneously cut off benefits for some long-lived but still living recipient.

(I added the bit about Congress in the last paragraph to remind everyone that Congress holds the purse strings for the government. In the last month, we could be forgiven for thinking this is no longer true, but until the Constitution is tossed into the waste bin, it is true.)

The SSA OIG wrote a follow-up report on the original audit in 2024. By this time, SSA had updated the death records for all but 941,383 Numident members, moving those records to the DMF. Again, these are people who are not being paid benefits. However, the OIG did discover 28 people in the DMF getting benefits. At the time of the report, Social Security had cut off benefits for 14 of those recipients and was investigating the rest.

Again, the SSA OIG recommended SSA update the death records for the remaining 915,534 Numident records that didn't have a matching DMF record. The response from the agency was:

SSA disagreed with recommendations 1 and 2 and stated the value of the DMF lies in its accuracy and reliability. SSA analyzed the records identified in recommendations 1 and 2 and determined they did not meet the Agency’s automated algorithm criteria, and that adding the records to the DMF would pose a significant risk of posting incorrect death information on the DMF. SSA stated a laborious manual review of each record would be necessary, and since these are records of individuals who are not beneficiaries, SSA cannot devote the resources to such an effort while experiencing a customer service crisis. SSA stated it would be willing to support this effort if any DMF customers identified a need for the information and provided funding to support a manual review. SSA agreed with recommendation 3.

Now, according to a 2022 OIG audit, there have been past instances of improper benefit payouts, but as the SSA response notes, Social Security investigated these and was able to recover several million dollars. And these were benefits paid to people listed as living.

In addition, if earnings are reported for someone who is supposedly dead, Social Security puts those earnings into an Earnings Suspense File and then works to correct the mismatch between the reported wages and the Social Security account. All of this is detailed in an exhaustive review of the Social Security earnings processing. According to that OIG report:

In Tax Years 2016 through 2020, employers and individuals reported approximately $8.5 billion in wages, tips, and self-employment income using 139,211 SSNs assigned to individuals age 100 or older. SSA transferred these earnings to the Earnings Suspense File, primarily because personally identifiable information provided on the workers' earnings reports did not match information in SSA enumeration records. Resolving these discrepancies will help improve the accuracy and completeness of the DMF and prevent future misuse of these SSNs.

The SSA processes roughly 245 million wage reports a year, which works out to well over a billion wage reports for Tax Years 2016 through 2020. The 139,211 SSNs whose earnings ended up in the Earnings Suspense File amount to roughly a hundredth of a percent of the total. And this is money coming into Social Security, not money being paid out.
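The arithmetic is easy enough to check with the report's own figures (and the assumption of roughly 245 million wage reports in each of the five tax years):

```python
# Back-of-the-envelope check of the Earnings Suspense File share, using the
# figures from the OIG report quoted above.

reports_per_year = 245_000_000
years = 5                                 # Tax Years 2016 through 2020
suspense_ssns = 139_211

total_reports = reports_per_year * years  # a bit over 1.2 billion reports
share = suspense_ssns / total_reports
print(f"{share:.4%} of all wage reports")  # roughly 0.01 percent
```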

Whether Congress needs to allocate funds for Social Security to have perfect files is a question for Congress. But from just the reports I've linked (via the Wayback Machine, to ensure future access, since government pages have been disappearing), there is absolutely no mystery about Social Security payments and missing dates of death, and there is absolutely no proof of millions of dead people receiving benefits.

As for the use of AI to somehow make all of this magically better—another DOGE/Musk claim—AI is literally nothing more than word crunching and pattern matching with the help of lots of computational power and data space. As the AI example I gave earlier demonstrates, AI is incapable of discerning whether the words it is crunching are actually accurate.

There’s another term we use in programming: GIGO, or Garbage In, Garbage Out. May 01, 1875, indeed. Millions of people getting death benefits, indeed.

I found the OIG reports after a simple online search, one that Musk and the members of DOGE could have performed themselves. And these are reports that DOGE was likely given.

Either Musk and other DOGE members are incapable of comprehending what was said in these reports, incapable of asking the existing system maintenance folks for this type of information, or they are deliberately misrepresenting the facts to somehow justify their existence. And I don’t think we need an AI program to tell us which answer is correct.

 

 

Categories
Social Media Technology

Testing WordPress/Mastodon ActivityPub integration

I just installed the ActivityPub plug-in for WordPress. With it, you can follow posts here at Burningbird by following bosslady@burningbird.net. Supposedly, then, this post will show up on Mastodon as a new post.
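A quick way to check that the plugin is actually exposing the account is a WebFinger lookup, which is the first thing Mastodon does when someone searches for bosslady@burningbird.net. A minimal sketch:

```python
# WebFinger lookup for the account the ActivityPub plugin is supposed to expose.
# Mastodon performs this same lookup when someone searches for the address.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

query = urlencode({"resource": "acct:bosslady@burningbird.net"})

with urlopen(f"https://burningbird.net/.well-known/webfinger?{query}") as resp:
    data = json.load(resp)

# A working setup should include a link with rel "self" pointing at the actor document.
for link in data.get("links", []):
    print(link.get("rel"), link.get("href"))
```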

There's also a plugin that would allow my WordPress weblog to subscribe to one or more folks on Mastodon and publish their posts here. However, I won't publish someone else's work in my space, and I already have code that prints the RSS entries for my Mastodon account in the footer. Still, new toy…

I'm still debating whether to install my own Mastodon instance. I don't have a lot of interest in learning Ruby/Rails, so the technology doesn't interest me in and of itself. The ActivityPub protocol does interest me. Mastodon itself interests me, but I do have a comfortable home on mastodon.social.

If I do install Mastodon, then I have to decide whether I want to spin up a new Linode for it, or learn to live with WordPress/MySQL duking it out with Mastodon/PostgreSQL. And then there's the email service. Rumor has it that Gmail is not Mastodon-friendly, so using it as a third-party email service won't work. I don't want to pay for an email service, so that would mean installing email support. And that's just uglier than sin.

Decisions, decisions.