Age | Commit message | Author |
|
In the current code, we re-read cic->key after the dead cic->key check.
So, in theory, it may really be re-read *after* cfq_exit_queue() has set
it to NULL. To avoid the race, we copy it to the stack first and then use
the copy. With this change, gcc should keep cic->key in a register or on
the stack, so it won't be re-read.
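A minimal sketch of the pattern under those assumptions (the helper name is
hypothetical; cic->key is treated as an opaque pointer to the queue's data):

	static void *cic_live_key(struct cfq_io_context *cic)
	{
		/* Read cic->key exactly once into a stack slot, so a concurrent
		 * cfq_exit_queue() storing NULL cannot be observed between the
		 * dead-key check and the use of the key. */
		void *key = cic->key;

		if (!key)
			return NULL;	/* queue has already exited */

		/* from here on, use the local copy, never cic->key again */
		return key;
	}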
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
When a queue dies, we set cic->key = NULL as a dead mark. So, when we
traverse the rbtree, we must check whether the key is still valid; if it
has been invalidated, drop that entry and restart the traversal from the top.
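A sketch of that traversal pattern using the standard rbtree helpers; the
tree root name and drop_dead_cic() are illustrative, not necessarily the
real identifiers:

	struct rb_node *n;
restart:
	for (n = rb_first(&ioc->cic_root); n; n = rb_next(n)) {
		struct cfq_io_context *cic;

		cic = rb_entry(n, struct cfq_io_context, rb_node);
		if (!cic->key) {
			/* queue died under us: unlink the stale node ... */
			drop_dead_cic(ioc, cic);
			/* ... and restart, since rb_next(n) is no longer safe */
			goto restart;
		}
		/* operate on the live cic here */
	}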
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
On the rmmod path, cfq/as waits to make sure all io-contexts have been
freed; the freeing side signals with complete() and the rmmod side waits
with wait_for_completion(). I think barrier() is not enough here. To avoid
the following case, this patch replaces barrier() with smp_wmb().
	cpu0                      visibility                    cpu1
	                          [ioc_gone=NULL, ioc_count=1]
	ioc_gone = &all_gone      NULL, ioc_count=1
	atomic_read(&ioc_count)   NULL, ioc_count=1
	wait_for_completion()     NULL, ioc_count=0              atomic_sub_and_test()
	                          NULL, ioc_count=0              if ( && ioc_gone)
	                                                         [ioc_gone==NULL,
	                                                          so doesn't call complete()]
	                          &all_gone, ioc_count=0
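A sketch of the two paths with the names from the diagram; the two wrapper
functions are hypothetical, only the ordering is the point:

	static DECLARE_COMPLETION(all_gone);
	static struct completion *ioc_gone;
	static atomic_t ioc_count = ATOMIC_INIT(0);

	/* rmmod path (cpu0) */
	static void wait_for_io_contexts(void)
	{
		ioc_gone = &all_gone;
		/* make the ioc_gone store visible to other CPUs before we
		 * look at ioc_count and go to sleep */
		smp_wmb();
		if (atomic_read(&ioc_count))
			wait_for_completion(ioc_gone);
	}

	/* io-context release path (cpu1) */
	static void put_io_context_ref(void)
	{
		if (atomic_sub_and_test(1, &ioc_count) && ioc_gone)
			complete(ioc_gone);
	}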
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
Detect whether a given process is seeky and, if so, disable (mostly) the
idle window for it. We still allow just a little idle time, enough for
that process to submit a new request. That is needed to maintain fairness
across priority groups.
In some cases we could set up several async queues. This is not optimal
from a performance POV, since we want all async io in one queue to get
good sorting on it. It also impacted sync queues, as async io got too much
slice time.
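For illustration only, the kind of seekiness tracking this implies; the
structure, field names, and threshold below are made up for the sketch and
are not the actual cfq code:

	#define SEEKY_THRESHOLD	(8 * 1024)	/* sectors; illustrative value */

	struct seek_stats {
		sector_t last_end;	/* end sector of the previous request */
		u64 seek_mean;		/* decayed mean of recent seek distances */
	};

	static void note_request(struct seek_stats *st, sector_t pos)
	{
		u64 dist = pos > st->last_end ? pos - st->last_end
					      : st->last_end - pos;

		/* simple exponential decay of the seek distance */
		st->seek_mean = (7 * st->seek_mean + dist) / 8;
		st->last_end = pos;
	}

	static int process_is_seeky(struct seek_stats *st)
	{
		/* a seeky process gets (mostly) no idle window */
		return st->seek_mean > SEEKY_THRESHOLD;
	}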
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
This is a small optimization to cfq_choose_req() in the CFQ I/O scheduler
(the function shows up fairly often in oprofile logs): by using a bit mask
variable, we can use a simple switch() to check the various cases instead
of having to query two variables for each check.
Benefit: 251 vs. 285 bytes footprint of cfq_choose_req().
Also, the common case 0 (no request wrapping) is now checked first in the code.
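The shape of that change, sketched with illustrative names (s1/s2 are the
two requests' start sectors, last is the current head position, d1/d2 the
forward distances; the real macro names may differ):

	#define RQ1_WRAP	0x01	/* request 1 lies behind the head */
	#define RQ2_WRAP	0x02	/* request 2 lies behind the head */

	unsigned int wrap = 0;

	if (s1 < last)
		wrap |= RQ1_WRAP;
	if (s2 < last)
		wrap |= RQ2_WRAP;

	switch (wrap) {
	case 0:			/* common case: no wrapping, pick the closer one */
		return d1 < d2 ? rq1 : rq2;
	case RQ1_WRAP:		/* only rq1 wraps: prefer rq2 */
		return rq2;
	case RQ2_WRAP:		/* only rq2 wraps: prefer rq1 */
		return rq1;
	default:		/* both wrap: pick the one closest behind the head */
		return s1 > s2 ? rq1 : rq2;
	}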
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
On setups with many disks, we spend a considerable amount of time looking
up the process-disk mapping on each queuing of io. Testing with a
NULL-based block driver, this lookup costs a 40-50% reduction in
throughput with 1000 disks.
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
Modify well over a dozen mempool users to call mempool_create_slab_pool()
rather than calling mempool_create() with extra arguments, saving about 30
lines of code and increasing readability.
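For illustration, the shape of one such conversion (MIN_POOL and my_cachep
are placeholders):

	/* before: generic constructor plus the slab alloc/free helpers */
	pool = mempool_create(MIN_POOL, mempool_alloc_slab,
			      mempool_free_slab, my_cachep);

	/* after: dedicated helper for the common slab-backed case */
	pool = mempool_create_slab_pool(MIN_POOL, my_cachep);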
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
|
|
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
We don't need to pin ->key down; ->cfqq->cfqd will do that for us.
Incidentally, that stops the leak we had - that reference was never
dropped.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
If somebody does a hash lookup for a cfq_queue while the ioprio of an async
queue is elevated, they shouldn't end up stuck with the lowered ioprio when
we go back. The fix is to use ->org_ioprio{,class} in hash lookups.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
During testing of SLES10, we encountered a hang in the CFQ io scheduler.
It turns out the deferred slice expiry logic is buggy: we could be left
with an idle queue that would never wake up. So kill that logic for now
and always expire immediately. Also fix a potential timer race condition.
The patch looks bigger than it is, because it moves a function.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The patch below marks various read-only variables in block/* as const, so
that gcc can optimize their use; e.g. gcc will now replace uses with the
value directly and can even remove the memory backing these variables.
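A trivial example of the kind of change (whether this particular tunable
was among the converted ones is not asserted here):

	/* before */
	static int cfq_quantum = 4;

	/* after: gcc can fold the value into its users and drop the storage */
	static const int cfq_quantum = 4;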
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
Some leftover comments still refer to drivers/block, which is now block/.
They don't add any information we don't already have, so kill them.
Signed-off-by: Coywolf Qi Hunt <qiyong@fc-cn.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
When a cfq slice expires, the remainder of the slice is calculated and
stored in cfqq->slice_left. The current code calculates the opposite of
the remainder: how many jiffies the cfqq has run past the slice end. This
patch fixes the bug.
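A sketch of the corrected calculation as described (simplified; the real
function has more context around it):

	/* buggy: stored how far past the slice end the queue ran */
	cfqq->slice_left = jiffies - cfqq->slice_end;

	/* fixed: store what is actually left of the slice, if anything */
	if (time_before(jiffies, cfqq->slice_end))
		cfqq->slice_left = cfqq->slice_end - jiffies;
	else
		cfqq->slice_left = 0;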
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
cfq forced dispatching might not return all requests on the queue.
This bug can hang elevator switching and corrupt request ordering
during a flush sequence.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
|
|
drivers/block/ is right now a mix of core and driver parts. Let's move
the core parts to a new top-level directory. Al will move the fs/-related
block parts to block/ next.
Signed-off-by: Jens Axboe <axboe@suse.de>
|