:hacker_f::hacker_s::hacker_e:
All right. Sorry for the delay. I have been putting out fires all day. One of them was the issue with uploads on FSE.
One thing that I was always able to say, no matter what, was that FSE never lost anybody's data. We hoard it, in case you need it. You can still see posts from instances that have been dead for a year. I am very sorry to report that we have lost some data, and this was entirely my fault.

What happened?
The TL;DR is depicted in the first image.
I spend something like $155/month on FSE's hosting costs. This is not a lot, as far as things like this go; FSE is a low-budget operation. (But see below.) $5/month of that (≈3.2%) is the block storage. The invoices are quarterly, but they are staggered for some reason. In this case, I got a bill for $15 that necessitated pulling some BTC out of cold storage (a bunch of BTC had already been liquidated, and the remaining federal reserve notes were held in coins that FranTech does not take), which took a minute.
In the meantime, it turns out that FranTech just...wipes your storage if you are late with the bill. Servers getting deactivated, sure, that's one thing. The storage getting deactivated is another. The storage getting *wiped* the same day it's deactivated, though, that's kind of infuriating. On the other hand, I was late paying, so this was my fault.
I filed a support ticket; they said the storage was purged. I requested, and just got, confirmation that this does indeed mean the disk space was wiped and resold. Although it is nice to know that they wipe the disk space before handing it to the next customer.
But, hey, they credited my account in the amount I paid them!


What was lost?
I do not know, but at the very least, all of the uploads (images, videos, PDFs, etc.) between the last backup of the storage (late Sunday night PDT, early Monday morning UTC) and this morning. Since the block storage was not attached, uploads were broken, so nothing from today was lost: no one could upload anything.

What now?
My uplink is saturated restoring the backup. I take a great deal of care with the DB backups, but the media backups are (by their nature) somewhat harder to check. (It was about 80GB of files...all in a single, flat directory. I filed an issue:
https://git.pleroma.social/pleroma/pleroma/issues/1513 .) I didn't keep a manifest. Uploads were low priority, as the site still runs even if all of them are wiped.
This policy was obviously a mistake on my part.
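For what it's worth, a manifest is cheap to generate. Here is a rough sketch of the idea (made-up paths, and not the script FSE actually runs): walk the uploads directory and record a hash and size for every file, so a restored backup can be checked against something.

#!/usr/bin/env python3
# Sketch: write a manifest (sha256, size, relative path) for every file
# under an uploads directory, so a later restore can be verified.
# All paths here are hypothetical.
import hashlib
import os
import sys

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(uploads_dir, manifest_path):
    with open(manifest_path, "w") as out:
        for root, _dirs, files in os.walk(uploads_dir):
            for name in sorted(files):
                full = os.path.join(root, name)
                rel = os.path.relpath(full, uploads_dir)
                out.write(f"{sha256_of(full)}  {os.path.getsize(full)}  {rel}\n")

if __name__ == "__main__":
    # e.g.: manifest.py /var/lib/pleroma/uploads /backups/uploads.manifest
    write_manifest(sys.argv[1], sys.argv[2])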
Restoration of the backup is ≈10% complete. It'll take a few hours. Afterwards, avatars and attachments will be back.
In the meantime, uploads are working again.

What are you doing to prevent this in the short term?
The reason the backup script runs *weekly* instead of nightly is that it stresses the database. This obviously doesn't apply to the media uploads, so I'll be splitting the backup script to back up the DB weekly and the uploads nightly.
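Roughly like this (a sketch only; the paths, database name, and backup destination are placeholders rather than FSE's real setup):

#!/usr/bin/env python3
# Sketch of the split: the uploads get rsynced every night, and the
# expensive pg_dump only happens on the weekly pass.
# UPLOADS_DIR, BACKUP_HOST, and the DB name are illustrative.
import datetime
import subprocess

UPLOADS_DIR = "/var/lib/pleroma/uploads/"
BACKUP_HOST = "backup.example.net:/srv/backups/fse/"

def backup_media():
    # Nightly: mirror the uploads; this doesn't touch the database at all.
    subprocess.run(["rsync", "-a", UPLOADS_DIR, BACKUP_HOST + "uploads/"],
                   check=True)

def backup_db():
    # Weekly: this is the part that stresses the database.
    dump = f"/tmp/pleroma-{datetime.date.today().isoformat()}.sql"
    with open(dump, "wb") as out:
        subprocess.run(["pg_dump", "pleroma"], stdout=out, check=True)
    subprocess.run(["rsync", "-a", dump, BACKUP_HOST + "db/"], check=True)

if __name__ == "__main__":
    backup_media()
    if datetime.date.today().weekday() == 6:  # Sundays only
        backup_db()

Run nightly, it only does the expensive dump once a week; the media sync is cheap enough to happen every night.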

What are you doing to prevent this longer term?
I'm also going to be shoving the uploads into a content-addressed data store that will be replicated continuously. I'm poking at making this work cross-instance and potentially serving directly from there. This supersedes the experimental search stuff for now.
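The gist of content-addressing, as a sketch (illustrative layout and paths, not the actual implementation): a blob's name is the hash of its bytes, so identical uploads dedupe for free and nothing ever piles up in one giant flat directory.

#!/usr/bin/env python3
# Sketch of a content-addressed store: the key for a blob is the SHA-256
# of its contents, stored under a two-level fanout directory. Illustrative only.
import hashlib
import os
import shutil

STORE_ROOT = "/var/lib/fse-cas"  # hypothetical location

def put(path):
    """Copy a file into the store; return its content address."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    dest_dir = os.path.join(STORE_ROOT, digest[:2], digest[2:4])
    dest = os.path.join(dest_dir, digest)
    if not os.path.exists(dest):  # identical content is only stored once
        os.makedirs(dest_dir, exist_ok=True)
        shutil.copyfile(path, dest)
    return digest

def get(digest):
    """Path of a stored blob, given its content address."""
    return os.path.join(STORE_ROOT, digest[:2], digest[2:4], digest)

Replication then amounts to copying whatever blobs the other side doesn't have yet, which is easy to do continuously and trivial to verify, since the name *is* the checksum.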
I've priced out colo space and some new hardware (Seriously!) and FSE will be included in the move. As long as everything goes according to plan ("No plan survives first contact with the enemy"), FSE will no longer be in the cloud a couple of months from now. FSE will be cheaper to run this way, and will also be faster.

Plumber, apologize to the whole FSE
Sorry again.

NAS
If anyone has a recommendation for a solid but cheap-ish 1U or 2U NAS, that'd be cool. I am kinda iffy on the options I found. AoE preferred, but as long as it is reliable and also doesn't go all "web-based management console for a dumbass NFS or CIFS export" on me, I'm happy.
If things go better than expected...pic related.
shoot-the-hostages.jpg gensokyo_apology.jpg brantley.jpg PARANOiA--180.mp3