author    Christoph Hellwig <hch@lst.de>    2009-09-04 22:44:42 +0200
committer Rusty Russell <rusty@rustcorp.com.au>    2009-10-22 16:39:26 +1030
commit    f8b12e513b953aebf30f8ff7d2de9be7e024dbbe (patch)
tree      ec261949b674283b8ba214fd2715f3a7674da11c /include/linux/ivtv.h
parent    2fdc246aaf9a7fa088451ad2a72e9119b5f7f029 (diff)
virtio_blk: revert QUEUE_FLAG_VIRT addition
It seems like the addition of QUEUE_FLAG_VIRT causes major performance
regressions for Fedora users:
https://bugzilla.redhat.com/show_bug.cgi?id=509383
https://bugzilla.redhat.com/show_bug.cgi?id=505695
While I can't reproduce those extreme regressions myself, I think the flag
is wrong.
Rationale:
QUEUE_FLAG_VIRT expands to QUEUE_FLAG_NONROT, which causes the queue
to be unplugged immediately. This is not a good behaviour for at least
qemu and kvm, where we do have significant overhead for every
I/O operation. Even with all the latest speedups (native AIO,
MSI support, zero copy) we can only get native speed for up to 128kb
I/O requests; we already are down to 66% of native performance for 4kb
requests, even on my laptop running the Intel X25-M SSD for which
QUEUE_FLAG_NONROT was designed.
If we ever get virtio-blk overhead low enough that this flag makes
sense, it should only be set based on a feature flag set by the host.
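The patch body itself is not shown on this page (the diffstat is filtered to an unrelated path), but the change amounts to dropping the flag from the virtio-blk probe path. A minimal sketch of such a revert, assuming the flag was set with queue_flag_set_unlocked() in virtblk_probe() as was common for block drivers of this era (the exact hunk and context lines are an assumption, not taken from this page):

```diff
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ ... @@ virtblk_probe
-	/* QUEUE_FLAG_VIRT expands to QUEUE_FLAG_NONROT, so this marked the
-	 * queue non-rotational and made the block layer unplug it
-	 * immediately -- the behaviour this commit reverts. */
-	queue_flag_set_unlocked(QUEUE_FLAG_VIRT, vblk->disk->queue);
```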
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Diffstat (limited to 'include/linux/ivtv.h')
0 files changed, 0 insertions, 0 deletions