Categories
Government, Programming Languages

Social Security: Why look for facts when you can speculate

When Trump and Musk aren’t villainizing federal workers, they’re making unsubstantiated claims of waste, fraud, and abuse, which, taking a little license with the ordering, I’ll refer to as WAF from now on.

I say unsubstantiated, because not once has any cabinet member, Congressional rep, DOGE, Musk, or Trump actually provided a WAFfy example. They indulge in a great deal of speculation, innuendo, and accusation. They have held up real numbers as WAF that were eventually fact-checked as legitimate expenses. But no real WAFfy examples.

The only difference between someone like Trump and someone like Musk is the presentational style of the unsubstantiated claims: Trump favors a Sharpie or his notoriously free-associative verbal ‘weave’, while Musk favors unsourced charts. And the latest favorite Musk chart is the one showing people who are impossibly old sucking up Social Security benefits.

Who are all these 150-year-olds claiming Social Security?! WAF!

However, it didn’t take long before another explanation of the seemingly 150-year-old Social Security recipients popped up. Since the DOGE script kiddies are so fond of AI, let’s use AI to explain:

What happens when a date is missing in a COBOL program

iAsk answered:

According to www.iAsk.ai – Ask AI:

What Happens When a Date is Missing in a COBOL Program

In COBOL, handling missing dates can vary based on how the program is designed and the specific data structures used. Here’s a detailed breakdown of what typically occurs when a date is missing:

1. Data Structure Design:

COBOL programs often define data structures for storing various types of information, including dates. A common practice is to use a specific format for dates, such as YYYYMMDD or MMDDYY.

If a date field is defined but not populated (i.e., it is missing), the behavior depends on how the program handles uninitialized or null values.

2. Default Values:

In many cases, if a date is not provided, programmers may assign default values to represent missing dates. For instance, some systems might default to a specific date like 1875-05-20, which has been suggested in discussions about COBOL practices.

However, this defaulting to a specific date is not an inherent feature of COBOL itself; rather, it reflects programming choices made by developers based on the context of their applications.

3. Error Handling:

Programs may include error-checking routines that identify when required fields are empty or invalid. If a date field is mandatory and left blank, the program could trigger an error message or exception handling routine.

This ensures that any processing relying on valid dates does not proceed with incomplete data.

4. Reporting and Output:

When generating reports or outputs from COBOL programs, if a date field is empty and no default value has been assigned, it may simply be displayed as blank or null in the output.

Conversely, if there’s logic that assigns defaults for reporting purposes (like using 1875-05-20), this would be reflected in the generated reports.

5. Impact on Business Logic:

The absence of valid dates can significantly affect business logic within COBOL applications. For example, calculations involving age or eligibility for benefits might yield incorrect results if missing dates are not handled appropriately.

Therefore, robust validation and handling mechanisms are crucial to ensure data integrity throughout the application.

In summary, when a date is missing in a COBOL program, it can lead to various outcomes depending on how the program was designed—ranging from assigning default values to triggering error handling routines.

Top 3 Authoritative Sources Used in Answering this Question

1. IBM Documentation

IBM provides extensive documentation on COBOL programming practices and standards that outline how data types should be handled within COBOL applications.

2. ANSI/ISO Standards for COBOL

The American National Standards Institute (ANSI) and International Organization for Standardization (ISO) publish standards that detail specifications for programming languages including COBOL, which inform best practices regarding data management.

3. Technical Articles from Retrocomputing Stack Exchange

Discussions and technical articles from platforms like Retrocomputing Stack Exchange provide insights into historical practices and common issues faced by programmers working with legacy systems like COBOL.

Sponsored Content

When it comes to programming, every detail counts, just like in snacking! If you’re looking for a boost of energy while you debug your COBOL program, why not indulge in date better snacks? Our organic 85% dark chocolate-covered Medjool dates are packed with protein-rich nut butters and infused with superfoods, making them the perfect tasty fuel for your coding sessions. Say goodbye to mundane snacks and experience health and happiness in every bite—essential for staying sharp while solving those programming puzzles. Try Date Better Snacks and elevate your snacking game today!

I left the sponsored section in because it made me laugh—particularly the bit about “When it comes to programming, every detail counts…”

Every detail counts. Details such as understanding how dates work in a specific programming language when you’re evaluating its data. Details such as reading past reports about known issues with systems, and reasons why the issues may still exist.

I’m a software engineer. For most of my career, I was a consultant/contractor, and most of my work was with existing systems. The basic rule of thumb when working with an existing system is to understand its context, including the programming language or languages used to create it, the coding practices, the development infrastructure, ALL the documentation, and even its timeline, because all of this can tell you quite a lot about the system.

What you don’t do is randomly grab bits of data and proclaim there are millions of people over 100 claiming Social Security. Not unless you want to look like a complete jackass.

At the same time, you also don’t accept other explanations that have no basis in known fact, and this includes the ‘default year value of 1875’ for a missing date. That claim is based on nothing more than a vague ISO standard reference that one person mentioned at one time, and it has nothing to do with COBOL, especially older COBOL systems.

After some exploration of programming languages yesterday, I stumbled onto a PDF of the government’s COBOL coding standards from 1985. I included an excerpt from the date section as the header for this piece.

There is no date type in COBOL; a character string is used instead. But in older COBOL systems that character string is a six-character date, not an eight-character one. The date would be given as YYMMDD, with no century.

The Social Security systems were created in the 1900s, so if a default date had been used, it would look like 750501, not 18750501. So no, a default date of May 01, 1875 was not used in the Social Security system.

However, clearing up misinformation about a default date does not exonerate Musk and the other members of DOGE. A simple review of past reports on the Social Security system provides all the explanation we need about the supposed millions of people over 100 and whether there are massive amounts of fraudulent payments to dead people.

In July 2023, the Social Security Administration (SSA) OIG reported on an audit of what is known as the Numident, a file (or, more likely, a database) of every person issued a Social Security number. When Social Security receives information about an SSN holder’s death, it updates another file (database) known as the Death Master File. It is the Death Master File that is then shared with federal benefit-paying agencies.

What the OIG audit noted in 2023, and previously noted in 2015, is that there are several million records in the Numident file that do not have a date of death. The OIG recommendation was that SSA use various other databases and incidental sources of data to fill in the missing dates of death in the Numident file, to ensure there are no improper payments and, more importantly, no use of dead people’s Social Security numbers for fraudulent purposes.

The SSA disagreed. As they noted in the audit response:

We collect death data to ensure the records on which we base eligibility and payment decisions are as accurate and consistent as possible for current and future beneficiaries. When we receive a report of death, we take timely action to terminate payments. The records identified by OIG involve non-beneficiaries and do not involve improper payments. Correcting records for nonbeneficiaries would divert resources from work necessary to administer and manage our programs, which we cannot afford

It’s important to understand the two primary assertions of fact in this statement. The first is that the records in the Numident file that don’t have a death date are records of people not receiving Social Security benefits. The second is that updating these records with an actual date of death would cost millions that Congress has not allocated to the SSA for this task. And simply assigning a default date of death by assumption could erroneously cut off benefits for a long-lived but still-living recipient.

(I added the bit about Congress in the last paragraph to remind everyone that Congress holds the purse strings for the government. In the last month, we could be forgiven for thinking this is no longer true, but until the Constitution is tossed into the waste bin, it is true.)

The SSA OIG wrote a follow-up report on the original audit in 2024. By this time, SSA had updated the death records for all but 941,383 Numident records, moving the updated records to the DMF. Again, these are people who are not being paid benefits. However, the OIG did discover 28 people in the DMF who were getting benefits. At the time of the report, Social Security had cut off benefits for 14 of them and was investigating the rest.

Again, the SSA OIG recommended SSA update the death records for the remaining 915,534 Numident records that didn’t have a matching DMF record. The response from the agency was:

SSA disagreed with recommendations 1 and 2 and stated the value of the DMF lies in its accuracy and reliability. SSA analyzed the records identified in recommendations 1 and 2 and determined they did not meet the Agency’s automated algorithm criteria, and that adding the records to the DMF would pose a significant risk of posting incorrect death information on the DMF. SSA stated a laborious manual review of each record would be necessary, and since these are records of individuals who are not beneficiaries, SSA cannot devote the resources to such an effort while experiencing a customer service crisis. SSA stated it would be willing to support this effort if any DMF customers identified a need for the information and provided funding to support a manual review. SSA agreed with recommendation 3.

Now, according to a 2022 OIG audit, there have been past instances of improper benefit payouts, but as the SSA response notes, Social Security investigated these and was able to recover several million dollars. And these were benefits paid to people listed as living.

In addition, if earnings are reported for someone who is supposedly dead, Social Security puts those earnings into the Earnings Suspense File and then works to correct the mismatch between the reported wages and the Social Security account. All of this is detailed in an exhaustive review of the Social Security earnings processing. According to the OIG report:

In Tax Years 2016 through 2020, employers and individuals reported approximately $8.5 billion in wages, tips, and self-employment income using 139,211 SSNs assigned to individuals age 100 or older. SSA transferred these earnings to the Earnings Suspense File, primarily because personally identifiable information provided on the workers’ earnings reports did not match information in SSA enumeration records. Resolving these discrepancies will help improve the accuracy and completeness of the DMF and prevent future misuse of these SSNs.

The number of wage reports the SSA processes a year is 245 million, leading to a total of 980 million wage reports for 2016 to 2020. The 139,211 reports that end up in the Earnings Suspense File amount to about 0.014 percent of that total (139,211 ÷ 980,000,000 ≈ 0.000142). And this is money coming into Social Security, not money being paid.

Whether Congress needs to allocate funds for Social Security to have perfect files is a question for Congress. But from just the reports I’ve linked (via the Wayback Machine, to ensure future access since government pages have been disappearing), there is absolutely no mystery about Social Security payments and missing dates of death, and there is absolutely no proof of millions of dead people receiving benefits.

As for the use of AI to somehow make all of this magically better—another DOGE/Musk claim—AI is literally nothing more than word crunching and pattern matching with the help of lots of computational power and data space. As the AI example I gave earlier demonstrates, AI is incapable of discerning whether the words it is crunching are accurate or not.

There’s another term we use in programming: GIGO, or Garbage In, Garbage Out. May 01, 1875, indeed. Millions of dead people getting benefits, indeed.

I found the OIG reports after a simple online search, one that Musk and the members of DOGE could have performed themselves. And these are reports that DOGE was likely given.

Either Musk and other DOGE members are incapable of comprehending what was said in these reports, incapable of asking the existing system maintenance folks for this type of information, or they are deliberately misrepresenting the facts to somehow justify their existence. And I don’t think we need an AI program to tell us which answer is correct.


Categories
Social Media, Technology

Testing WordPress/Mastodon ActivityPub integration

I just installed the ActivityPub plug-in for WordPress. With it, you can follow posts here at Burningbird by following bosslady@burningbird.net. Supposedly, then, this post would show up on Mastodon as a new post.
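
One quick way to test the plumbing is to ask the site’s WebFinger endpoint about the account, since that is how Mastodon servers discover who bosslady@burningbird.net actually is. Assuming the plugin wires up the usual /.well-known/webfinger route, a request like this should come back with a JSON description of the account:

curl -s 'https://burningbird.net/.well-known/webfinger?resource=acct:bosslady@burningbird.net'

If that returns an actor URL rather than a 404, the federation side is wired up and a Mastodon search for the account should find it.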

There’s also a plugin that would allow my WordPress weblog to subscribe to one or more folks on Mastodon, and publish their posts here. However, I won’t publish someone else’s work in my space, and I also have code to print out the RSS entries at the footer for my Mastodon space. Still, new toy…

I’m still debating on whether to install my own Mastodon instance. I don’t have a lot of interest in learning Ruby/Rails, so the technology doesn’t interest me, in and of itself. The ActivityPub protocol does interest me. Mastodon, itself, interests me, but I do have a comfortable home on mastodon.social.

If I do install Mastodon, then I have to decide if I want to spin up a new Linode for it, or learn to live with WordPress/MySQL duking it out with Mastodon/PostgreSQL. And then there’s the email service. Rumor has it that Gmail is not Mastodon-friendly, so using it as a third-party email service won’t work. I don’t want to pay for an email service, so that would mean installing email support. And that’s just uglier than sin.

Decisions, decisions.

Categories
Burningbird, Social Media, Technology, Weblogging

Mastodon and Burningbird

The social media upheaval continues but things are starting to quiet down a bit. Oh you can’t tell this from the media, which is full of stories leading with “Elon Musk says…”, but that’s primarily because the media hasn’t figured out how to wean itself off Twitter, yet.

I quit Twitter the day that Musk reactivated the Project Veritas account. Even a lie would be ashamed to be associated with Project Veritas. Not so Twitter and Musk.

Out with Twitter

I didn’t delete my two Twitter accounts, because bots and trolls can snap up a previously existing username 30 days after it’s deleted. And I didn’t deactivate them, because deactivated accounts are deleted in 30 days. What I did was post a last note saying where to find me on Mastodon, set both accounts to private, and then walk away. I won’t even look at Twitter now, because doing so triggers ad impressions and that gives Musk money. I don’t plan on ever giving that guy money, and I’m severely curtailing the amount of attention I’m giving him.

I’ll miss the folks that stubbornly stay on Twitter, but they’ve made their choice, I’ve made mine, and someday maybe they’ll wise up.

On to Mastodon

In the meantime, my move to Mastodon has had ups and downs, but has ended up on an up. My choice of kickoff point on mastodon.social was a good one (@burningbird@mastodon.social) because the creator of Mastodon (Eugen Rochko), who is also the administrator of mastodon.social, is quite welcoming of new Twitter folks. No nonsense about content warnings.

Speaking of content warnings, I was told to use them, and told not to use them. My account on democracy.town was frozen and I believe it was because I did use content warnings when cross posting from Twitter. But I got into a disagreeable argument with another person about not using them when cross posting. A lose/lose.

Well, to hell with that server and any server administered by hypersensitive admins letting the power go to their heads. And to hell with other people’s CW demands.

Now, I use content warnings sparingly—primarily for larger posts or posts that contain what I deem to be sensitive material. If people don’t like it, they don’t have to follow me.

Mastodon and RSS

I did add some Mastodon stuff to my weblog. You’ll see a “post to Mastodon” button at the end of a story. And you’ll see my latest Mastodon entries in the footer. The latter comes from the RSS feed Mastodon provides for each account; just append .rss to the account URL (mine: https://mastodon.social/@burningbird.rss).

The really nice thing about Mastodon having an RSS feed is you can follow a person’s Mastodon entries in the same RSS reader you use for weblogs. Pretty soon, we’ll no longer be able to tell the difference between a weblog and a micro-blog.
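
If you’re curious what that feed looks like under the covers, it’s plain old RSS you can pull down with anything, for example:

curl -s https://mastodon.social/@burningbird.rss | head -n 20

Point any feed reader at the same URL and the account’s public posts show up right alongside the weblogs you already follow.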

Post to Mastodon

The post button is interesting (how-to). Unlike Twitter and Facebook, where everything lives in one centralized location, each person is on a specific Mastodon server, so you have to specify which server you’re on in the ‘toot this’ web page that opens. This is the nature of the federated beast. It’s no different than having a weblog or web page and providing its unique URL when asked for it.

I also bookmarked the Toot dialog and use it when I post a link to Mastodon. I found using the dialog helps to trigger the link excerpt, while posting a link directly in Mastodon usually leaves the link as just a link.

The downside to using the Toot dialog is it logs me out of Mastodon, every time. This is a PITA when you’re using two-factor authentication.

Mastodon and Burningbird

My plan is to create my own Mastodon server, but I’m working through how I want to do so. I can spin up another Linode for it, or try putting it on this server. There are Mastodon hosting sites that are attractive, if for no other reason than you have to have SMTP access (for email), and it will be a cold day in hell before I try to run an SMTP service again. But I’m leaning towards spinning up another Linode and then using a third-party SMTP server such as Gmail.

The great thing about this federated universe is when I do create my own Mastodon instance, I can move all my follows/followers to it. I don’t believe I can move my posts to it, but really I don’t care about my older Mastodon posts. In fact, I’ve set my account up to delete them automatically after two weeks. Why burden mastodon.social with my old crap? I might be restoring my old weblog posts, but I don’t care about old Twitter/Facebook/Mastodon postings. These are just social media blurbs.

I do care about the people, so I don’t want to lose those connections.

When I do set up a Mastodon instance, I’ll spin you a tale of my trials and tribulations setting up a Ruby on Rails project. The one downside to Mastodon is that it’s Ruby on Rails, an environment I have no experience with. I may also install something like PixelFed, which at least is good, honest PHP and Node.


Categories
Technology

Moving servers

It was time for me to upgrade my version of Ubuntu, from 18.04 to 20.04. I upgraded my software, thought I had a clean site, and tried to upgrade in place. After all, upgrading from 16.04 to 18.04 was a simple one-line command.
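
For anyone who hasn’t done an in-place upgrade, the usual path is the stock Ubuntu release upgrader, run once the current packages are up to date:

sudo apt update && sudo apt upgrade
sudo do-release-upgrade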

Moving from 18.04 to 20.04 was not so simple, and the upgrade failed. Time to do a manual build of a new server and port my site to it. Which also ended up being more complicated than I thought it would be.

Moving to 22.04

First, if I was going to go through all the work, I was going with the latest Ubuntu LTS: Jammy Jellyfish, otherwise known as 22.04. I spun up a new Linode instance of 22.04 and set to work.

The LAMP installation went very well. I ended up with not quite the latest Apache, since the absolute latest wasn’t supported on 22.04 yet. However, I added the famous Ondřej Surý repository and I was good to go:

sudo add-apt-repository ppa:ondrej/apache2 -y

MySQL is 8.0.29 and PHP is 8.1.

All that certbot stuff

I had manually built a new server when I went from 14.04 to 16.04, but times have changed. That move was pre-HTTPS, pre-HTTP/2, pre-HSTS (HTTP Strict Transport Security), well, basically, pre-everything. I had the support in my existing server, so I know my pages and installation are clean. But the sheer amount of work to set it up again was a bit daunting.

Thankfully, since I had made these moves in the past, my site was already clean. All that I needed to worry about was installing certbot to manage my Let’s Encrypt digital certificates.

You’d think moving a server wouldn’t be that unusual, but neither Let’s Encrypt nor certbot covers what to do when your certificates are on one server and you need to set them up on another. Searching online gave me two options:

– copy everything and change symbolic links for the certificates

– just install new certificates on your new server, and delete the old

And that’s when things got sticky.

Where am I and who is managing my IP address?

When I made the move to 16.04, I was manually setting up my network configuration using ifupdown and editing the /etc/network/interfaces file. But when I went to 18.04, netplan was the new kid on the block for network configuration.

The problem is, I had one foot in both camps. So when I tried to access a test page on the new server, it failed. I certainly couldn’t run the certbot web validation for installing a new digital certificate if I couldn’t even serve a simple page on port 80.

In addition, Linode has the ability to manage network configuration for you automatically, so if you change servers and IP addresses, you don’t have to do a thing. But when I tried to turn it on, even SSH no longer worked. I had to restore the site from a backup.

It took a bit of painful digging around, but I finally upgraded my network configuration to netplan, and only netplan. I could now use SSH again, and install a new digital certificate for my test domain. But then, things got tricky again.
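
For anyone untangling the same mess, the end state is just one small YAML file under /etc/netplan. A minimal sketch looks something like this; the file name, interface name, and DHCP setting are placeholders, and a static setup would list the addresses instead:

sudo tee /etc/netplan/01-netcfg.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
EOF
sudo netplan try

The nice part is that netplan try rolls the change back automatically if you lose connectivity, which beats locking yourself out of SSH again; netplan apply makes it permanent.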

I hate the old propagation thing

When I created the new Linode server, I installed it in the Atlanta data center rather than the Dallas center I was using with the old. After all, Atlanta is now only a couple of hours away.

But doing so meant when I switched, I had to update my name registrar to set my DNS entries to the new server IP addresses. This is a pain, in itself, but it’s also a bit of a fragile time when trying to determine if my site will work on the new server. After all, you don’t want to permanently change your IP address only to find out your site doesn’t work, and then have to change it back. And digital certificates kind of mean you have to have all or nothing.

Thankfully, Linode had a peachy keen workaround: swap IP addresses. If two servers are in one data center, you can swap the IP address between them.

Of course, doing so meant I had to migrate my existing site to the new data center and change the DNS entries, but still, it would be worth it to be able to switch back and forth between servers when making major modifications. And the migration should be a painless button click from the Linode cloud control manager.

So, I migrated my old Linode VPS to Atlanta, and then tried to swap the IP addresses. Crash and burn.

IPv4 and IPv6

What I didn’t know about the Linode IP swap facility is that it only swapped the IPv4 address, not the IPv6 address. So when I did the following

ip a

My IPv4 address reflected the new server, but my IPv6 address reflected the old, and everything was just broken. Again.

The only recourse at this point was to bite the bullet, make the move to the new server, do the DNS propagation, and then deal with the digital certificates. I put up a warning page that the site might be off for a time, had a coffee, and just made the move.
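
Checking propagation from the outside is nothing more than a couple of dig queries, one per record type:

dig +short burningbird.net A
dig +short burningbird.net AAAA

Once both return the new server’s addresses, it’s safe to move on to the certificates.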

After the move, I thought about doing the Let’s Encrypt digital certificate copying like some folks recommended, but it seemed messy—sort of like the network configuration issue I had just cleaned up.

I used certbot to do a new installation, and the move was flawless. Flawless. This is really the only clean way to move your site to a new server when you’re using digital certificates:

– Make sure your site can support port 80, at least temporarily

– use certbot to generate new digital certificates for your site(s) (example below)

– delete the old server and certificates
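
In practice, the certbot step is a single command per site, something like the following with the Apache plugin and your own domains substituted in:

sudo certbot --apache -d example.com -d www.example.com

Certbot handles the port 80 validation and installs the certificate into the Apache configuration, and sudo certbot renew --dry-run confirms automatic renewal will work on the new server.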

Five Years. I’m good for five years.

So here you are and here I am: on the new server, with all new software, in the data center closest to me, with clean, uncrufty network configuration and sparkly digital certificates.

Best of all?

Jammy Jellyfish has standard support until April 2027. I’m good for five years; ten if I want extended support. And who knows where I’ll be in ten years.

Probably back here, doing it all over again.


Categories
JavaScript, Technology, Writing

My Last O’Reilly Book

The editor for JavaScript Cookbook, 3rd edition, just sent me a draft to review and comment on. This is the last of the O’Reilly books that will feature my name on it. I wasn’t actively involved in it; my name is on it because they kept about 15% of my old content. New authors, Adam Scott and Matthew MacDonald, took on the majority of the work.

They did a good job, too. I particularly like the chapter devoted to error handling. The days of using an alert message to debug have been gone for decades, and JS developers now have sophisticated error-handling techniques and tools. It is no longer your Mom’s JS.

Good. I’m happy the language and its uses have evolved. But…

I sometimes miss the simpler days when an alert was your JavaScript best friend.

I’ve been working with JavaScript since it was first introduced over 25 years ago. Not just traditionally in the web page, either. I worked on a small project that used Netscape’s server-side JavaScript (LiveWire) in the 1990s. It was so very different from an application made with Node, as you can see for yourself from an old Dr. Dobb’s article on LiveWire.

Writing about JavaScript today requires a different mindset than writing about it years ago. Then, JavaScript was laughably simple. Its simplicity, though, was also its strength. In its infancy, JavaScript was so embraceable. You could literally toss a tiny blurb of JS into an HTML file, load it into a browser, and see an immediate result. You didn’t have to fiddle with compilation, install special developer tools, or figure out a framework.

My first book on JavaScript, the JavaScript How-To for Waite Group Press, was published in 1996. The hardest part of writing it was trying to find enough about JavaScript to actually fill a book.

JavaScript today, or ECMAScript if you want to be pure, is not so simple. And oddly enough, that’s its strength now: it is powerful enough to meet the demands of today’s web applications. And the hardest part of working on a book such as the JavaScript Cookbook is limiting yourself to the essentials, because you could easily write three or four books and still not cover the world of JavaScript as it exists now.

When O’Reilly asked me to do a new edition of the Cookbook I knew I just didn’t want to take on that kind of burden again. It was hard to give up control of one of my favorite books, but after 25 years of working to deadlines and dealing with tech editors, I knew I just didn’t have the energy or patience to do the book justice. I knew it was time for me to hang up my drafts.

Thankfully, the new Cookbook team have done an exceptionally good job. I’m happy to have my name on the Cookbook, one last time.

If I write now, it’s solely for me: my ideas, my deadlines, my expectations. I may only write in this space. I may try my hand at self-publication.

Who knows? Maybe all I’ll do is write incendiary commentary on Facebook and Twitter and see how often I can get banned.