I'm guessing they were being treated over the phone as the systems went down. I've been through a similar situation; the person on the phone gives step-by-step instructions while waiting for an ambulance to arrive.
Sounds like with the systems being down the call would have been cut off which sounds horrible.
No, treating in person. But we can't function as a department without computers. You call cardiology (on another floor) and none of their computers are working, so they can't review the patient's records. You could take the EKG printout and run it to them, but we're just telling them lab results from what we can remember before our machines all bluescreened. The lab's computers were down, so we can't do blood tests. Nursing staff knows what to do next by looking at the board or their computer. Without that you're just a room full of people shouting things at each other, and you definitely can't see the 3-4 patients an hour you're expected to. Doctors and midlevels rely on Epic to place med orders too.
I had to install one of these tools recently because of the notch on newer MacBooks: menu bar icons that overflow behind it get hidden, and you never notice. For a while I thought my apps were erroring and not opening properly.
You have to manage it yourself with one of these tools; otherwise, in my experience, they're lost to the void.
Had the same experience with my dad passing recently; we had access to everything because we knew the passwords he used, and he was okay with sharing those with us.
Will definitely look into the 1password emergency kit, thanks for mentioning it. 2fa is the other big challenge after that.
I was going to ask, when would you only rely on a sequential id for temporal locality, wouldn't most people be indexing and sorting on `created_at` or something equivalent?
You don’t need temporal ordering at all; the simple fact that UUIDs are a bad fit for B-Trees means that merely inserting a bunch of data keyed on a UUIDv4 will run into these same problems.
You’d typically see this as either lower performance of a table than you expect, or higher IOPS usage of your database (which gets expensive at scale).
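A tiny sketch of the mismatch (assuming Python; the variable names are illustrative, not from any real schema). A B-Tree keeps keys in sorted order, so sequential IDs always append at the "right edge" of the index and recently inserted rows stay clustered together, while random UUIDv4 keys scatter every insert to an arbitrary page:

```python
import uuid

# Sequential integer IDs: index order matches insertion order, so each
# new key lands next to the previous one (good page locality).
seq_ids = list(range(1, 101))
assert sorted(seq_ids) == seq_ids  # already in index order

# Random UUIDv4 keys: the order the index must store them in has nothing
# to do with insertion order, so inserts touch pages all over the tree.
random_ids = [str(uuid.uuid4()) for _ in range(100)]

# Count adjacent pairs that are out of index order; for sequential IDs
# this is 0, for UUIDv4 it is roughly half the pairs.
inversions = sum(1 for a, b in zip(random_ids, random_ids[1:]) if a > b)
print(f"out-of-order adjacent pairs: {inversions} of {len(random_ids) - 1}")
```

That scatter is what shows up as the extra IOPS mentioned above: instead of repeatedly writing the same hot rightmost page, the database keeps pulling cold pages in and out of cache.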
Even that won't work because there are companies like Pushshift (I think it's pushshift anyway) that are constantly archiving comments, so they have the old comment and the overwritten one. I think you can request pushshift delete your data under GDPR so at least you can remove your data from there, but I believe they are not the only ones doing this.
Pushshift only blocked such data from being queried via their API. All these users' posts and comments are still in their data dumps, which now have tens of thousands of copies after being released as torrents.
Seems like a game of whack-a-mole. A person might get Reddit and Pushshift to delete their comments, but surely there are other less well-known companies mirroring Reddit. Maybe even clandestinely.