Linux-distro (kernel version): Cachyos, Linux 6.9.3-4-cachyos-lto
Desktop Environment (KDE/GNOME etc.): KDE
Qt Version: 6.7.1
KDiskMark Version: 3.1.4
FIO Version: fio-3.37
Description:
I was trying to figure out an accurate way to benchmark my ZFS pool.
With the program's default settings, I get very different results depending on whether I do 1 or 5 runs of the test: about 3 GB/s for a single run, but 5.7 GB/s with 5 or more runs. I opened iotop during the test and saw that for reads, "Total DISK READ" was 10 GB/s while "Actual DISK READ" was 0, which means the reads are being served from cache. Writes look accurate, at about 1.7 GB/s. The O_DIRECT option was left at its default (enabled).
I'm not sure if there's any way around this.
Steps To Reproduce:
Select a zfs mountpoint to benchmark
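For reference, a roughly equivalent job can be run with fio directly to take KDiskMark out of the picture. This is a minimal sketch, assuming a hypothetical mountpoint /tank/bench and a 1 GiB test file (neither is from the report). As far as I know, OpenZFS releases before 2.3 accept the O_DIRECT flag but still service reads through the ARC, which would explain the cached-looking numbers even with --direct=1:

```shell
# Hypothetical sequential-read job, similar in spirit to KDiskMark's default
# SEQ1M Q8T1 test. Paths and sizes are assumptions for illustration.
fio --name=seqread \
    --filename=/tank/bench/fio-testfile \
    --rw=read --bs=1M --size=1G \
    --ioengine=libaio --iodepth=8 --numjobs=1 \
    --direct=1 \
    --loops=5
```

If fio reports multi-GB/s here while iotop shows "Actual DISK READ: 0", the ARC is serving the reads regardless of the --direct flag.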
I think the question revolves around what you want to measure: raw disk I/O, or the performance of the entire storage subsystem (which includes caching).
Ultimately, what matters is that the storage subsystem meets the needs of the applications running on the host, and those needs vary widely. Of course raw disk performance factors into this. With ZFS filesystems, I'm not sure it is easy to defeat the various levels of caching.
I also learned along the way that writing data sourced from /dev/zero to a ZFS filesystem with compression enabled produces fabulous results, because the incoming data compresses to almost nothing.
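One thing that can reduce (though not fully defeat) ARC read caching for a benchmark is the per-dataset cache properties. A hedged sketch, assuming a hypothetical dataset tank/bench; these must be set before the test file is created or re-read, since already-cached blocks stay in the ARC:

```shell
# Tell ZFS to cache only metadata (not file data) for this dataset,
# and keep any L2ARC device out of the picture. Dataset name is assumed.
zfs set primarycache=metadata tank/bench
zfs set secondarycache=none tank/bench

# ... run the benchmark here ...

# Restore the inherited defaults afterwards.
zfs inherit primarycache tank/bench
zfs inherit secondarycache tank/bench
```

Exporting and re-importing the pool between runs also evicts that pool's data from the ARC, which helps make single-run and multi-run results comparable.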
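The compression effect is easy to demonstrate by writing compressible versus incompressible data and comparing logical to physical space. A sketch, again assuming a hypothetical dataset tank/bench mounted at /tank/bench:

```shell
# Check whether compression is enabled on the dataset (name assumed).
zfs get compression tank/bench

# All-zero data compresses to almost nothing, so throughput looks inflated.
dd if=/dev/zero of=/tank/bench/zeros.bin bs=1M count=1024 oflag=sync

# Random data is incompressible and gives a more realistic write figure.
dd if=/dev/urandom of=/tank/bench/random.bin bs=1M count=1024 oflag=sync

# Compare logical (pre-compression) vs actual space consumed.
zfs get used,logicalused tank/bench
```

fio's --buffer_compress_percentage option is another way to control how compressible the benchmark data is, rather than relying on /dev/zero or /dev/urandom.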