Benchmarking zfs read speed not accurate #148

Open
sgmihai opened this issue Jun 19, 2024 · 2 comments
Labels: bug (Something isn't working), unconfirmed

Comments

@sgmihai

sgmihai commented Jun 19, 2024

  • Linux-distro (kernel version): Cachyos, Linux 6.9.3-4-cachyos-lto
  • Desktop Environment (KDE/GNOME etc.): KDE
  • Qt Version: 6.7.1
  • KDiskMark Version: 3.1.4
  • FIO Version: fio-3.37

Description:

I was trying to find an accurate way to benchmark my ZFS pool.
With the program's default settings, I get very different results depending on whether I do 1 or 5 runs of the test: about 3 GB/s for a single run, but 5.7 GB/s with 5 or more runs. I watched iotop during the test and saw that for reads, "Total disk read" was 10 GB/s while "Actual disk read" was 0, which means the reads are being served from cache. Writes appear accurate, at about 1.7 GB/s. The O_DIRECT option was left at its default (enabled).
I'm not sure if there's any way around this.

Steps To Reproduce:

Select a zfs mountpoint to benchmark
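
For reference, a fio invocation along these lines (a sketch; the exact job parameters KDiskMark generates may differ, and the file path is a placeholder) reproduces the sequential read test outside the GUI:

```
# Sequential read, 1 MiB blocks, O_DIRECT requested, 5 passes over the file
fio --name=seqread --filename=/tank/bench/fio-test.bin \
    --rw=read --bs=1M --size=4G \
    --ioengine=libaio --iodepth=8 --direct=1 \
    --loops=5 --group_reporting
```

Note that, as far as I know, OpenZFS at the time of this report accepted O_DIRECT but still serviced reads through the ARC; genuine direct I/O only arrived later with the `direct` dataset property in OpenZFS 2.3, so --direct=1 alone does not bypass the cache here.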

sgmihai added the bug and unconfirmed labels on Jun 19, 2024
@nwgat

nwgat commented Aug 22, 2024

I'm running into the same issue; it's like the cache is on.

@HankB

HankB commented Aug 24, 2024

I think the question revolves around what you want to measure. Are you interested in raw disk I/O, or in the performance of the entire storage subsystem (which includes caching)?

Ultimately it is important that the storage subsystem meets the needs of the applications running on the host, and those needs vary widely. Of course raw disk performance factors into this. With ZFS filesystems, I'm not sure it is easy to defeat the various levels of caching.
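
One partial workaround, sketched below with a placeholder dataset name, is to tell the ARC to cache only metadata for the dataset under test. This still doesn't eliminate caching entirely, but it gets closer to raw disk numbers:

```
# Keep only metadata (not file data) in the ARC for this dataset
zfs set primarycache=metadata tank/bench
# Also keep any L2ARC device out of the measurement
zfs set secondarycache=none tank/bench
```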

I also learned along the way that writing data sourced from /dev/zero to a ZFS filesystem with compression enabled produces fabulous results because the incoming data compresses to almost nothing.
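
To illustrate the compression effect (placeholder paths; throughput figures will vary), compare a write of zeros against a write of incompressible data on a dataset with compression enabled:

```
# Zeros compress to almost nothing, so the apparent throughput is inflated
dd if=/dev/zero of=/tank/bench/zeros.bin bs=1M count=4096 conv=fsync

# Random data defeats compression; note /dev/urandom itself can be the
# bottleneck, so pre-generating the source file gives a cleaner test
dd if=/dev/urandom of=/tank/bench/random.bin bs=1M count=4096 conv=fsync
```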
