I ran a test on my PC in a VM similar to my VPS (3 Zen 2 threads, 12 GB RAM, NVMe drive), and the results are absolutely terrible. Performance basically got cut in half, even with lz4.
I didn't do any optimization on the Postgres side, but I did on the ZFS side (the corresponding zfs set commands are right after the list):
recordsize=8k
atime=off
xattr=sa
logbias=latency
redundant_metadata=most
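For reference, these are per-dataset properties, so applying them looks roughly like this (tank/pgdata is just a placeholder for whatever dataset holds the Postgres data directory):
# recordsize=8k matches the default Postgres block size
zfs set recordsize=8k tank/pgdata
zfs set atime=off tank/pgdata
zfs set xattr=sa tank/pgdata
zfs set logbias=latency tank/pgdata
zfs set redundant_metadata=most tank/pgdata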
Results:
pgbench -i -s 6000 testdb
ext4
done in 1396.01 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 731.07 s, vacuum 295.22 s, primary keys 369.71 s).
zfs
done in 3183.85 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 1187.85 s, vacuum 571.35 s, primary keys 1424.63 s).
pgbench -c 180 -j 2 -t 10000 testdb
ext4
latency average = 74.381 ms
tps = 2419.988141 (including connections establishing)
tps = 2419.994924 (excluding connections establishing)
zfs
latency average = 144.478 ms
tps = 1245.868334 (including connections establishing)
tps = 1245.870005 (excluding connections establishing)
@matrix
You're controlling for SLC cache exhaustion, right?
@r000t No, but I did these on separate days, so the cache should have been flushed.
@EnjuAihara That's not how it works.
I tried it with btrfs and the results are all over the place, both better and worse than ZFS.
It compressed the test db from 90 GB to 5 GB though; on ZFS with lz4 it was 18 GB.
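If anyone wants to check the sizes on their own setup, something like this works (dataset and mountpoint are placeholders; compsize is a separate tool for btrfs):
# ZFS: logical vs. on-disk size and the resulting compression ratio
zfs get compressratio,logicalused,used tank/pgdata
# btrfs: per-file compression stats
compsize /var/lib/postgresql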
btrfs-zstd
done in 1085.00 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 510.36 s, vacuum 234.14 s, primary keys 340.49 s).
latency average = 97.927 ms
tps = 1838.100929 (including connections establishing)
tps = 1838.104699 (excluding connections establishing)
btrfs-zstd, noatime, ssd (example mount line after the numbers)
done in 1022.16 s (drop tables 0.00 s, create tables 0.00 s, client-side generate 473.47 s, vacuum 227.62 s, primary keys 321.07 s).
latency average = 214.155 ms
tps = 840.511215 (including connections establishing)
tps = 840.512285 (excluding connections establishing)
latency average = 191.493 ms
tps = 939.982522 (including connections establishing)
tps = 939.983717 (excluding connections establishing)
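For reference, the zstd/noatime/ssd combination is just btrfs mount options, something like this (device and mountpoint are placeholders):
# mount btrfs with zstd compression, no atime updates and SSD allocation hints
mount -o compress=zstd,noatime,ssd /dev/nvme0n1p1 /var/lib/postgresql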