
so, my 36.9GiB postgres database produces a 25.4GiB dump file that compresses to 3.6GiB with gzip

why so much bloat!?

@dirb Compression is most effective when the input contains repetitive data. I haven't looked at pleroma's db schema yet, but it's obvious that most of it is repeated usernames and URLs.
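
A minimal sketch of that effect in Python (the URL below is made up, not pleroma's actual data): the same string repeated many times deflates to a tiny fraction of its size, while incompressible random bytes of the same length barely shrink at all.

```python
# Repetitive input (same URL over and over, like a fediverse activities
# table full of repeated actor URLs) vs. incompressible random bytes.
import os
import zlib

repetitive = b"https://example.social/users/alice " * 100_000  # hypothetical data
random_data = os.urandom(len(repetitive))

# Ratio of compressed size to original size:
print(len(zlib.compress(repetitive)) / len(repetitive))    # tiny fraction of 1
print(len(zlib.compress(random_data)) / len(random_data))  # roughly 1.0, no savings
```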

@r I thought postgres compressed the data when logging it

still, it's strange that the text dump is smaller than the actual database, unless pleroma makes a lot of rewrites
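
One hypothetical way to check the rewrites theory: PostgreSQL keeps dead row versions around after UPDATEs and DELETEs until VACUUM reclaims them, and those dead tuples take up space on disk but never appear in a dump. A sketch assuming psycopg2 and a database named pleroma (both placeholders):

```python
import psycopg2  # assumes psycopg2 is installed

conn = psycopg2.connect("dbname=pleroma")  # hypothetical DSN
with conn.cursor() as cur:
    # pg_stat_user_tables tracks live vs. dead tuples per table;
    # lots of dead tuples would support the "lots of rewrites" theory.
    cur.execute("""
        SELECT relname, n_live_tup, n_dead_tup
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10;
    """)
    for table, live, dead in cur.fetchall():
        print(f"{table}: {live} live rows, {dead} dead rows")
conn.close()
```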

@dirb That's because dumps don't include index data; indexes are recreated when you restore the dump.
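
A rough way to see how much of the 36.9GiB is index data that a dump omits, again a sketch assuming psycopg2 and a placeholder database name:

```python
import psycopg2  # assumes psycopg2 is installed

conn = psycopg2.connect("dbname=pleroma")  # hypothetical DSN
with conn.cursor() as cur:
    # Compare heap size vs. index size for the biggest tables; only the
    # heap's row data ends up in a dump, the indexes are rebuilt on restore.
    cur.execute("""
        SELECT relname,
               pg_size_pretty(pg_relation_size(relid)) AS table_size,
               pg_size_pretty(pg_indexes_size(relid))  AS index_size
        FROM pg_statio_user_tables
        ORDER BY pg_total_relation_size(relid) DESC
        LIMIT 10;
    """)
    for name, table_size, index_size in cur.fetchall():
        print(f"{name}: table {table_size}, indexes {index_size}")
conn.close()
```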