Is it worth making a ZFS dataset with compression for Postgres in order to save space? I assume that since it's mostly text, it should compress fairly well.
Or is there a better FS for compression?


I did a test on my PC, in a VM similar to my VPS: 3 Zen 2 threads, 12 GB RAM, NVMe drive. The results are absolutely terrible; performance basically got cut in half, even with lz4.
I didn't do any tuning on the Postgres side, but I did on the ZFS side (a sketch of the equivalent commands follows the list):
recordsize=8k
atime=off
xattr=sa
logbias=latency
redundant_metadata=most
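
For reference, a minimal sketch of creating a dataset with those properties plus lz4 compression; the pool name (tank) and the mountpoint are assumptions:

zfs create -o recordsize=8k -o atime=off -o xattr=sa \
  -o logbias=latency -o redundant_metadata=most \
  -o compression=lz4 -o mountpoint=/var/lib/postgresql tank/pgdata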

Results:
pgbench -i -s 6000 testdb
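
(For scale: each pgbench scale unit is 100,000 rows in pgbench_accounts and roughly 15 MB on disk, so -s 6000 builds about 600 million rows, on the order of 90 GB uncompressed, which lines up with the 90 GB figure below.)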

ext4
done in 1396.01 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 731.07 s, vacuum 295.22 s, primary keys 369.71 s).

zfs
done in 3183.85 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 1187.85 s, vacuum 571.35 s, primary keys 1424.63 s).

pgbench -c 180 -j 2 -t 10000 testdb
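
(That's 180 concurrent clients driven by 2 pgbench worker threads, 10,000 transactions per client, so 1.8 million transactions per run.)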

ext4
latency average = 74.381 ms
tps = 2419.988141 (including connections establishing)
tps = 2419.994924 (excluding connections establishing)

zfs
latency average = 144.478 ms
tps = 1245.868334 (including connections establishing)
tps = 1245.870005 (excluding connections establishing)


I tried it with btrfs and the results are all over the place, both better and worse than ZFS.
It compressed the test DB from 90 GB to 5 GB though; on ZFS with lz4 it was 18 GB. The mount options for the two runs are sketched below.
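
A minimal sketch of the mount options behind the two btrfs runs; the device path and mountpoint are assumptions:

mount -o compress=zstd /dev/nvme0n1p2 /var/lib/postgresql
mount -o compress=zstd,noatime,ssd /dev/nvme0n1p2 /var/lib/postgresql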

btrfs-zstd

done in 1085.00 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 510.36 s, vacuum 234.14 s, primary keys 340.49 s).

latency average = 97.927 ms
tps = 1838.100929 (including connections establishing)
tps = 1838.104699 (excluding connections establishing)

btrfs-zstd, noatime, ssd

done in 1022.16 s (drop tables 0.00 s, create tables 0.00 s, client-side generate 473.47 s, vacuum 227.62 s, primary keys 321.07 s).

latency average = 214.155 ms
tps = 840.511215 (including connections establishing)
tps = 840.512285 (excluding connections establishing)

latency average = 191.493 ms
tps = 939.982522 (including connections establishing)
tps = 939.983717 (excluding connections establishing)

It compressed a 98 GB Mastodon DB to 46 GB, though.
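
To check the on-disk ratio yourself, something like this should work; the path and dataset name are assumptions (compsize reports compressed vs. uncompressed size on btrfs, and ZFS tracks the ratio as a dataset property):

compsize /var/lib/postgresql
zfs get compressratio tank/pgdata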

@r000t No, but I did these on a separate day, so the cache should have been flushed.
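
For anyone repeating this on Linux, a sketch of forcing a cold start between runs; note that ZFS's ARC sits outside the page cache, so drop_caches alone won't clear it:

sync; echo 3 > /proc/sys/vm/drop_caches
zpool export tank && zpool import tank  # pool name is an assumption; a reboot also works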

@EnjuAihara @matrix lossy compression except you're straight up just deleting photos at random
@AbNormal @matrix do you get charged per image or per size?
or does it not matter at all?
just bloat them up and take the new record
@matrix compressing them gives you more space
so you get charged more?
can't get more than 10 years anyway
why did they bump it from 5 to 10 years last summer?