From: Christoph Hellwig
Date: Fri, 4 Sep 2009 20:44:42 +0000 (+0200)
Subject: virtio_blk: revert QUEUE_FLAG_VIRT addition
X-Git-Tag: v2.6.32-rc6~113^2~4
X-Git-Url: https://openfabrics.org/gitweb/?a=commitdiff_plain;h=f8b12e513b953aebf30f8ff7d2de9be7e024dbbe;p=~shefty%2Frdma-dev.git

virtio_blk: revert QUEUE_FLAG_VIRT addition

It seems like the addition of QUEUE_FLAG_VIRT causes major performance
regressions for Fedora users:

	https://bugzilla.redhat.com/show_bug.cgi?id=509383
	https://bugzilla.redhat.com/show_bug.cgi?id=505695

While I can't reproduce those extreme regressions myself, I think the
flag is wrong.

Rationale:

  QUEUE_FLAG_VIRT expands to QUEUE_FLAG_NONROT, which causes the queue
  to be unplugged immediately.  This is not good behaviour for at least
  qemu and kvm, where we have significant overhead for every I/O
  operation.  Even with all the latest speedups (native AIO, MSI
  support, zero copy) we only get native speed for I/O requests of up
  to 128kb; we are already down to 66% of native performance for 4kb
  requests, even on my laptop running the Intel X25-M SSD for which
  QUEUE_FLAG_NONROT was designed.

  If we ever get virtio-blk overhead low enough that this flag makes
  sense, it should only be set based on a feature flag set by the host.

Signed-off-by: Christoph Hellwig
Signed-off-by: Rusty Russell
---

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 43f19389647..348befaaec7 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -332,7 +332,6 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 	}
 
 	vblk->disk->queue->queuedata = vblk;
-	queue_flag_set_unlocked(QUEUE_FLAG_VIRT, vblk->disk->queue);
 
 	if (index < 26) {
 		sprintf(vblk->disk->disk_name, "vd%c", 'a' + index % 26);
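
Note (illustrative only, not part of the patch): the last paragraph of the
rationale suggests that, if virtio-blk overhead ever becomes low enough,
the flag should be set only when the host advertises it.  A minimal sketch
of what that could look like in virtblk_probe(), assuming a hypothetical
VIRTIO_BLK_F_NONROT feature bit negotiated by the host (no such bit exists
in the virtio-blk spec; the queue flag and helpers are the era's real API):

	/*
	 * Hypothetical: only mark the queue non-rotational when the host
	 * explicitly advertises it, instead of doing so unconditionally.
	 */
	if (virtio_has_feature(vdev, VIRTIO_BLK_F_NONROT))
		queue_flag_set_unlocked(QUEUE_FLAG_NONROT, vblk->disk->queue);

The host side would in turn only offer such a bit when the backing storage
is actually non-rotational, keeping the decision with the hypervisor rather
than the guest driver.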