Why is random write faster with LVM than without?
I'm measuring random write speeds on an eMMC device with fio.
I'm testing multiple block sizes with multiple I/O (file) sizes.
After running multiple iterations of all test cases on the raw eMMC device, I create a logical volume spanning the entire device (e.g. /dev/new_vol_group/new_logical_volume) and run the same tests on that volume.
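For reference, this is roughly how the volume is set up (a sketch; the device node /dev/mmcblk0 is a placeholder, and the names just match the example path above):

pvcreate /dev/mmcblk0                                      # whole eMMC device as one physical volume
vgcreate new_vol_group /dev/mmcblk0                        # volume group on top of it
lvcreate -l 100%FREE -n new_logical_volume new_vol_group   # one LV using all free extents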
I was expecting a slight performance overhead with LVM, but something strange happened instead.
For small I/O sizes, random write speeds are quite similar between the raw device and LVM. As the I/O size increases (especially once it reaches double the amount of RAM), random write speed on the raw device drops considerably, but not on LVM: there is no decrease in LVM random write speed as the file size grows.
So LVM is much faster than the raw device for random writes, especially at large file sizes. This holds only for random writes; I didn't see this behaviour in sequential read/write or random read tests.
This is my fio job file:
ioengine=libaio
direct=1
buffered=0
iodepth=1
numjobs=1
ramp_time=5
startdelay=5
runtime=90
time_based
refill_buffers
randrepeat=1
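For completeness, each run combines this file with per-test parameters. An equivalent standalone invocation for the LVM case might look like this (bs and size are just one combination from the sweep; the raw-device runs point filename at the eMMC device instead):

fio --name=randwrite --ioengine=libaio --direct=1 --iodepth=1 \
    --numjobs=1 --rw=randwrite --bs=4k --size=16G --ramp_time=5 \
    --startdelay=5 --runtime=90 --time_based --refill_buffers \
    --randrepeat=1 --filename=/dev/new_vol_group/new_logical_volume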
The only explanation I could think of was caching, but caching shouldn't be able to affect the results this much when the I/O size is double the amount of RAM on the host.
Note: I'm using fio-3.1 for benchmarking, with Ubuntu 18.04 LTS as the host.
I've observed this behaviour on multiple devices.
Tags: linux, hard-drive, performance, lvm
If you're still caching half the data, its speed could be 2–6 GB/s; averaging that with the disk's real speed would still increase the result a lot.
– Xen2050, yesterday
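To put rough numbers on that point (illustrative figures, not measurements): suppose the eMMC sustains 50 MB/s on random writes and half of a 16 GB test lands in the page cache at 5 GB/s. The cached half takes 8 GB / 5 GB/s = 1.6 s, the on-device half takes 8 GB / 0.05 GB/s = 160 s, so the reported average would be 16 GB / 161.6 s ≈ 99 MB/s, nearly double the device's real speed.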