|
The garbage collection thread is strictly an optimisation. Everything it
does would also be done just-in-time in the context of something in
userspace trying to access the file system.
Sometimes, however, it's a pessimisation. Especially during early boot
when it's checksumming nodes and scanning inodes which are shortly going
to be pulled in by read_inode anyway. We end up building the rbtree of
node coverage twice for the same inode.
By switching to yield() instead of cond_resched() in the main loop, we
observe boot times on the OLPC system going down from about 100 seconds to
60.
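For illustration, a minimal sketch of the change described, assuming the GC
thread's main loop (names abbreviated; not the exact JFFS2 code):

	/* In the garbage collection thread's main loop, give up the CPU
	 * outright instead of merely offering to reschedule, so that
	 * read_inode() in the boot path can win the race for the inode. */
	while (!kthread_should_stop()) {
		/* ... garbage-collect one pass ... */
		yield();		/* was: cond_resched(); */
	}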
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
|
For the case when nand_write_page fails with -EIO for the first page in an
eraseblock, jffs2_wbuf_recover ends up producing a BUG in jffs2_block_refile
as jeb->first_node is not yet set up (it's set up later in jffs2_wbuf_recover).
This BUG is not really a bug; it's just jffs2_wbuf_recover calling
jffs2_block_refile with the wrong second parameter.
This patch takes care of this situation.
Signed-off-by: Vitaly Wool <vwool@ru.mvista.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
|
"!ip->i_inode.i_mapping->nrpages" failed
The following removes an incorrect assertion from the GFS2 glops code. This
fixes Red Hat bz 229873. Thanks to Abhijith Das for testing the patch
and confirming the fix.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>
|
|
fs/gfs2/glock.c:2198: error: 'THIS_MODULE' undeclared here (not in a function)
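The usual fix for this class of build error is to include the header that
declares THIS_MODULE:

	#include <linux/module.h>	/* declares THIS_MODULE */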
Cc: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
The ->go_drop_bh function is never used, so this removes it and the single
caller.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
Remove an unused variable.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
The following patch fixes Red Hat bz 229831. Without this patch it's
possible for the wrong inode to be returned in certain cases. It is a
pretty unusual event, so it has taken some time to track down. Thanks
are due to Josef Whiter, who did a lot of the testing required to track
this down and fix it.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
The below patch fixes a problem where we were not flushing rgrps
correctly. It only occurred in the specific case that a callback was
received for an rgrp which was dirty and when a journal log flush had
not already resulted in the rgrp being flushed anyway. This fixes Red
Hat bz 230143.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
OK, the following are the minimum changes needed to get NFSD going before we
settle this issue. We would appreciate having this in the tree so other
NFS-related work can get done in parallel.
Signed-off-by: S. Wendy Cheng <wcheng@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
Every file should include the headers containing the prototypes for
its global functions.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
This fixes a problem I encountered while running bonnie++. When you have one
thread that opens a file and starts to write to it, and then another thread that
tries to open and write to the same file, the second thread will loop forever
trying to grab the inode lock for that inode. Basically we come in through
generic_buffered_file_write, which calls gfs2_prepare_write, which then attempts
to grab the glock. Because we don't own the lock, gfs2_prepare_write gets
GLR_TRYFAILED, which returns AOP_TRUNCATED_PAGE to generic_buffered_file_write.
At this point generic_buffered_file_write loops around again and immediately
retries the prepare_write. This means that the second process never gets off
the processor to allow the process that holds the lock to finish its
work and let go of the lock. This patch makes gfs2_glock_nq schedule() if it
gets back a GLR_TRYFAILED, which resolves this problem.
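A hedged sketch of the idea (simplified; not the exact GFS2 diff): once the
try-lock comes back as failed, give up the CPU before handing the failure to
the caller, which would otherwise retry immediately:

	if (error == GLR_TRYFAILED)
		schedule();	/* let the glock holder make some progress */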
Signed-off-by: Josef Whiter <jwhiter@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
File handle checking error found at the 2007 NFS Connectathon. The fh_type
and fh_len are not necessarily identical. Without this patch, some of the
client machines could fail to mount with a stale filehandle.
Signed-off-by: S. Wendy Cheng <wcheng@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
Patch for the 2.6.20 stable tree that adds a missing newline to one of
the printk messages in fs/gfs2/ops_fstype.c.
Signed-off-by: Richard Fearn <richardfearn@gmail.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
This patch fixes a locking mistake in the quota code: we do a mutex_lock instead
of a mutex_unlock.
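The shape of the fix, as a hedged sketch (symbol names illustrative):

	mutex_lock(&sdp->sd_quota_mutex);
	/* ... manipulate the quota data ... */
	mutex_unlock(&sdp->sd_quota_mutex);	/* was mistakenly a second mutex_lock() */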
Signed-off-by: Josef Whiter <jwhiter@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
Suspend deadlocks when trying to unregister /sys/block/sr0.
This comes from Oliver's commit 94bebf4d1b8e7719f0f3944c037a21cfd99a4af7
"Driver core: fix race in sysfs between sysfs_remove_file() and
read()/write()".
sysfs_write_file downs buffer->sem while calling flush_write_buffer, and
flushing that particular write buffer entails downing buffer->sem in
orphan_all_buffers, resulting in the obvious self-deadlock.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
[CIFS] cifs_prepare_write was incorrectly rereading page in some cases
[CIFS] Fix set file size to zero when doing chmod to Samba 3.0.26pre
[CIFS] Remove some unused functions/declarations
[CIFS] New file for previous commit
[CIFS] cifs export operations
[CIFS] small piece missing from previous patch
[CIFS] Fix locking problem around some cifs uses of i_size write
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc:
sdhci: release irq during suspend
sdhci: make isr tolerant of read errors
mmc: require explicit support for high-speed
ncpfs: make sure server connection survives a kill
|
|
This fixes a regression caused by 22c8ca78f20724676b6006232bf06cc3e9299539.
nobh_prepare_write() no longer marks the page uptodate, so
nobh_truncate_page() needs to do it.
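Roughly, the truncate path now does something like the following after
zeroing the tail of the page (a hedged sketch, not the verbatim diff):

	kaddr = kmap_atomic(page, KM_USER0);
	memset(kaddr + offset, 0, PAGE_CACHE_SIZE - offset);
	flush_dcache_page(page);
	kunmap_atomic(kaddr, KM_USER0);
	SetPageUptodate(page);	/* nobh_prepare_write() no longer marks it */
	set_page_dirty(page);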
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Any attempt to open/use a bluetooth rfcomm device locks up
scheduling completely on my machine.
Interrupts (ping, alt-sysrq) seem to be alive, but nothing else.
This was working fine in 2.6.20, but is broken in 2.6.21-rc2-git*.
Reverting this change (below) fixes it:
| commit c1a3313698895d8ad4760f98642007bf236af2e8
| tree   337a876f727061362b6a169f8759849c105b8f7a
| parent f5ffd4620aba9e55656483ae1ef5c79ba81f5403
| author Marcel Holtmann <marcel@holtmann.org>
|        Sat, 17 Feb 2007 22:58:57 +0000 (23:58 +0100)
| committer David S. Miller <davem@sunset.davemloft.net>
|        Mon, 26 Feb 2007 19:42:41 +0000 (11:42 -0800)
|
| [Bluetooth] Make use of device_move() for RFCOMM TTY devices
|
| In the case of bound RFCOMM TTY devices the parent is not available
| before its usage. So when opening a RFCOMM TTY device, move it to
| the corresponding ACL device as a child. When closing the device,
| move it back to the virtual device tree.
| Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
The simplest fix for this bug is to prevent sysfs_move_dir()
from self-deadlocking when (old_parent == new_parent).
This patch prevents total system lockup when using rfcomm devices.
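The guard amounts to something like this near the top of sysfs_move_dir()
(a hedged sketch; the exact placement in the real patch may differ):

	/* Nothing to do if the device is "moved" under the parent it already
	 * has; locking the same directory mutex twice would deadlock. */
	if (old_parent == new_parent)
		return 0;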
Signed-off-by: Mark Lord <mlord@pobox.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Use internal buffers instead of the ones supplied by the caller
so that a caller can be interrupted without having to abort the
entire ncp connection.
Signed-off-by: Pierre Ossman <ossman@cendio.se>
Acked-by: Petr Vandrovec <petr@vandrovec.name>
|
|
Noticed by Shaggy.
Signed-off-by: Shaggy <shaggy@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
- In fact we don't have to fail if AOP_TRUNCATED_PAGE was returned from
prepare_write or commit_write. It is better to retry the attempt where
possible (see the sketch below).
- Rearrange ecryptfs_get_lower_page()'s error handling logic to make it cleaner.
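A sketch of the retry idea (names illustrative): AOP_TRUNCATED_PAGE means the
lower address space op dropped the page lock because the page was truncated,
so the caller should look the page up again and retry rather than fail:

retry:
	lower_page = grab_cache_page(lower_mapping, index);
	if (!lower_page)
		return -ENOMEM;
	rc = lower_a_ops->prepare_write(lower_file, lower_page, 0, to);
	if (rc == AOP_TRUNCATED_PAGE) {
		/* page went away under us; get it again and retry */
		page_cache_release(lower_page);
		goto retry;
	}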
Signed-off-by: Dmitriy Monakhov <dmonakhov@openvz.org>
Acked-by: Michael Halcrow <mhalcrow@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
- Currently, after path_lookup() succeeds we have no guarantee that the result
is a directory. This must be checked explicitly (see the sketch below).
- path_lookup() cannot return a negative dentry, so the inode check is useless.
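A hedged sketch of the directory check (illustrative, using the nameidata
fields of that era):

	rc = path_lookup(dev_name, LOOKUP_FOLLOW, &nd);
	if (rc)
		goto out;
	if (!S_ISDIR(nd.dentry->d_inode->i_mode)) {
		rc = -ENOTDIR;	/* lookup succeeded, but this is not a directory */
		path_release(&nd);
		goto out;
	}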
Signed-off-by: Dmitriy Monakhov <dmonakhov@openvz.org>
Acked-by: Michael Halcrow <mhalcrow@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
shmem's super_operations were missed from the recent const-ification,
as were simple_fill_super()'s, which can be shared with get_sb_pseudo()'s.
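I.e. the declarations simply gain a const qualifier, in the style of (trimmed,
illustrative):

	static const struct super_operations shmem_ops = {
		.statfs		= shmem_statfs,
		.drop_inode	= generic_delete_inode,
		/* ... */
	};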
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Josef 'Jeff' Sipek <jsipek@cs.sunysb.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
- The error code of ecryptfs_write_inode_size_to_metadata() was ignored.
- i_op->setxattr() must be supported by the lower fs because it is used below
(see the sketch after this list).
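A sketch of the second point (error code and names illustrative):

	if (!lower_inode->i_op->setxattr) {
		rc = -ENOSYS;	/* lower fs cannot store the eCryptfs metadata xattr */
		goto out;
	}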
Signed-off-by: Monakhov Dmitriy <dmonakhov@openvz.org>
Acked-by: Michael Halcrow <mhalcrow@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
There are race issues around the ext[34] xattr block release code.
ext[34]_xattr_release_block() checks the reference count of the xattr block
(h_refcount) and frees that xattr block if it is the last one referencing it.
Unlike ext2, the check of this counter is unprotected by any lock.
ext[34]_xattr_release_block() will free the mb_cache entry before freeing
that xattr block. There is a small window between the check for
h_refcount == 1 and the call to mb_cache_entry_free(). During this small
window another inode might find this xattr block in the mbcache and reuse
it, racing with the refcount updates. The xattr block would later be freed by
the first inode without noticing that the other inode is still using it. If
that block is later reallocated as a data block for another file, more
serious problems can follow.
We need to put a lock around the places that check the refcount to avoid
this race. Another place that needs this kind of protection is
ext3_xattr_block_set(), where the xattr block content is modified in place
if the refcount is 1 (meaning this is the only inode referencing it).
This also fixes another issue: the xattr block may never get freed at all
if no lock protects the refcount check at release time. It is possible
for the last two inodes to release the shared xattr block at the same
time; each thinks it is not the last one, so both only decrement
h_refcount without freeing the xattr block at all.
We need to call lock_buffer() after ext3_journal_get_write_access() to
avoid deadlock (because the latter calls lock_buffer()/unlock_buffer()
as well).
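A hedged sketch of the locking pattern described (simplified from the ext3
case; BHDR() maps the buffer to its xattr header):

	error = ext3_journal_get_write_access(handle, bh);
	if (error)
		goto out;
	lock_buffer(bh);	/* only after get_write_access, per the note above */
	if (BHDR(bh)->h_refcount == cpu_to_le32(1)) {
		/* last reference: remove the mbcache entry and free the
		 * block while still holding the buffer lock */
	} else {
		BHDR(bh)->h_refcount = cpu_to_le32(
			le32_to_cpu(BHDR(bh)->h_refcount) - 1);
	}
	unlock_buffer(bh);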
Signed-off-by: Mingming Cao <cmm@us.ibm.com>
Cc: Andreas Gruenbacher <agruen@suse.de>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Dmitriy Monakhov wrote:
> if path_lookup() return non zero code we don't have to worry about
> 'nd' parameter, but ecryptfs_read_super does path_release(&nd) after
> path_lookup has failed, and dentry counter becomes negative
Do not do a path_release after a path_lookup error.
Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com>
Cc: Dmitriy Monakhov <dmonakhov@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Remove unnecessary flush_dcache_page() call. Thanks to Dmitriy
Monakhov for pointing this out.
Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com>
Cc: Dmitriy Monakhov <dmonakhov@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
O_LARGEFILE should be set here when opening the lower file.
Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com>
Cc: Dmitriy Monakhov <dmonakhov@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
eCryptfs lower file handling code has several issues:
- The return value from prepare_write()/commit_write() wasn't checked for
equality with AOP_TRUNCATED_PAGE.
- In some places the page wasn't unmapped and unlocked after an error.
Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
In fixing a bug, Samba 3.0.26pre allowed some clients (including the Linux cifs
client) to change the file size to zero in SET_FILE_UNIX_BASIC (which the Linux
cifs client uses for chmod).
The server has been "fixed" now, but this also fixes the client to not send a
file size of zero on chmod.
Fixes Samba bugzilla bug #4418.
Fixed with help from Jeremy Allison.
Signed-off-by: Jeremy Allison <jra@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
For nfsd to work over cifs mounts (which presumably makes sense when trying
to reexport mounts of Windows, network appliance or Samba servers to NFS
clients via the NFS server).
This is the first stage of that enablement, marked experimental and turned
off by default.
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
There were two i_size_writes in the new truncate
function - we missed one in the last patch.
Noticed by Shaggy when he reviewed.
Thank you Shaggy ...
CC: Shaggy <shaggy@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/shaggy/jfs-2.6
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shaggy/jfs-2.6:
JFS: Get rid of "may be used uninitialized" warnings
|
|
* master.kernel.org:/pub/scm/linux/kernel/git/gregkh/driver-2.6:
Revert "Driver core: let request_module() send a /sys/modules/kmod/-uevent"
Driver core: fix error by cleanup up symlinks properly
make kernel/kmod.c:kmod_mk static
power management: fix struct layout and docs
power management: no valid states w/o pm_ops
Driver core: more fallout from class_device changes for pcmcia
sysfs: move struct sysfs_dirent to private header
driver core: refcounting fix
Driver core: remove class_device_rename
|
|
Could cause hangs on SMP systems in i_size_read on a cifs inode
whose size had previously been updated simultaneously from
different processes.
Thanks to Brian Wang for some great testing/debugging on this
hard problem.
Fixes kernel bugzilla #7903
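For background, on 32-bit SMP kernels i_size_read()/i_size_write() are built
on a seqcount, so i_size_write() callers must be serialised against each other
or a concurrent i_size_read() can spin forever. The generic pattern (a sketch,
not the exact CIFS change) is:

	spin_lock(&inode->i_lock);	/* any lock shared by all writers of i_size */
	i_size_write(inode, new_size);
	spin_unlock(&inode->i_lock);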
CC: Shirish Pargoankar <shirishp@us.ibm.com>
CC: Shaggy <shaggy@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
|
|
This patch (as860) adds two new sysfs routines:
sysfs_add_file_to_group() and sysfs_remove_file_from_group().
A later patch adds code that uses the new routines.
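The two helpers let a single attribute file be added to or removed from an
existing attribute group after the group has been created; their prototypes,
as I understand the patch, are:

	int sysfs_add_file_to_group(struct kobject *kobj,
				    const struct attribute *attr,
				    const char *group);
	void sysfs_remove_file_from_group(struct kobject *kobj,
					  const struct attribute *attr,
					  const char *group);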
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Maneesh Soni <maneesh@in.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
struct sysfs_dirent is private to the fs/sysfs/ subtree. It is
not even referenced as an opaque structure outside of that subtree.
The following patch moves the declaration from include/linux/sysfs.h to
fs/sysfs/sysfs.h, making it clearer that nothing else in the kernel
dereferences it.
I have been running this patch for years. Please integrate and forward
upstream if there are no objections.
From: "Adam J. Richter" <adam@yggdrasil.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
* master.kernel.org:/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
[CIFS] One line missing from previous commit
[CIFS] mtime bounces from local to remote when cifs nocmtime i_flags overwritten
[CIFS] fix &&/& typo in cifs_setattr()
|
|
>=============================================
>[ INFO: possible recursive locking detected ]
>2.6.19-1.2909.fc7 #1
>---------------------------------------------
>anaconda/587 is trying to acquire lock:
> (&bdev->bd_mutex){--..}, at: [<c05fb380>] mutex_lock+0x21/0x24
>
>but task is already holding lock:
> (&bdev->bd_mutex){--..}, at: [<c05fb380>] mutex_lock+0x21/0x24
>
>other info that might help us debug this:
>1 lock held by anaconda/587:
> #0: (&bdev->bd_mutex){--..}, at: [<c05fb380>] mutex_lock+0x21/0x24
>
>stack backtrace:
> [<c0405812>] show_trace_log_lvl+0x1a/0x2f
> [<c0405db2>] show_trace+0x12/0x14
> [<c0405e36>] dump_stack+0x16/0x18
> [<c043bd84>] __lock_acquire+0x116/0xa09
> [<c043c960>] lock_acquire+0x56/0x6f
> [<c05fb1fa>] __mutex_lock_slowpath+0xe5/0x24a
> [<c05fb380>] mutex_lock+0x21/0x24
> [<c04d82fb>] blkdev_ioctl+0x600/0x76d
> [<c04946b1>] block_ioctl+0x1b/0x1f
> [<c047ed5a>] do_ioctl+0x22/0x68
> [<c047eff2>] vfs_ioctl+0x252/0x265
> [<c047f04e>] sys_ioctl+0x49/0x63
> [<c0404070>] syscall_call+0x7/0xb
Annotate BLKPG_DEL_PARTITION's bd_mutex locking and add a little comment
clarifying the bd_mutex locking, because I confused myself and initially
thought the lock order was wrong too.
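A hedged sketch of the annotation: the inner of the two bd_mutex acquisitions
(partition nested inside whole disk) is taken with an explicit lockdep
subclass so lockdep can tell the two apart; inner_bdev is an illustrative name:

	mutex_lock_nested(&inner_bdev->bd_mutex, 1);	/* nesting, not recursion */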
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Pointers to user data should be marked with a __user hint. This one is
missing.
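For illustration, the annotation looks like this (made-up prototype, purely
illustrative):

	static long example_ioctl(struct file *file, unsigned int cmd,
				  void __user *argp);	/* __user marks a userspace pointer */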
Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
affs wants to truncate the inode when the last user goes away; currently it
does that through a potentially racy i_count check in ->put_inode. But we
already have a method that's called just after we drop the last
reference, ->drop_inode. This patch implements affs_drop_inode to take
advantage of this.
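A sketch of the pattern (close to, but not claimed to be, the exact patch):

	static void affs_drop_inode(struct inode *inode)
	{
		mutex_lock(&inode->i_mutex);
		/* the last reference is gone, so truncating is race-free now */
		if (inode->i_size != AFFS_I(inode)->mmu_private)
			affs_truncate(inode);
		mutex_unlock(&inode->i_mutex);
		generic_drop_inode(inode);
	}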
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This problem was identified and fixed some time ago by Jeff Moyer but it fell
through the cracks somehow.
It is possible that a user space application could remove and re-create a
directory during a request. To avoid returning a failure from lookup
incorrectly when our current dentry is unhashed, we need to check if another
positive, hashed dentry matching this one exists and, if so, return it instead
of failing.
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Jeff Moyer has identified a race between mount and expire.
What happens is that during an expire the situation can arise that a directory
is removed and another lookup is done before the expire issues a completion
status to the kernel module. In this case, since the lookup gets a new
dentry, it doesn't know that there is an expire in progress and when it posts
its mount request, matches the existing expire request and waits for its
completion. ENOENT is then returned to user space from lookup (as the dentry
passed in is now unhashed) without having performed the mount request.
The solution used here is to keep track of dentries in this unhashed state and
reuse them, if possible, in order to preserve the flags. Additionally, this
infrastructure will provide the framework for the reintroduction of caching of
mount failures removed earlier in development.
Signed-off-by: Ian Kent <raven@themaw.net>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The current header file definitions for autofs version 5 have caused a couple
of problems for application builds downstream.
This fixes the problem by separating the definitions.
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
nobh_prepare_write leaks data similarly to how simple_prepare_write did. Fix
by not marking the page uptodate until nobh_commit_write time. Again, this
could break weird use-cases, but none appear to exist in the tree.
We can safely remove the set_page_dirty, because as the comment says,
nobh_commit_write does set_page_dirty. If a filesystem wants to allocate
backing store for a page dirtied via mmap, page_mkwrite is the suggested
approach.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
simple_prepare_write leaks uninitialised kernel data. This happens because
it leaves an uninitialised "hole" over the part of the page that the
write is expected to go to. This is fine, but it then marks the page
uptodate, which means a concurrent read can come in and copy the
uninitialised memory into userspace before it is written to.
Fix it by simply marking it uptodate in simple_commit_write instead, after
the hole has been filled in. This could theoretically break an fs that
uses simple_prepare_write and not simple_commit_write, and that relies on
the incorrect simple_prepare_write behaviour. Luckily, none of those
exists in the tree.
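After the change, the commit side looks roughly like this (a sketch
reconstructed from the description above, not a verbatim copy):

	int simple_commit_write(struct file *file, struct page *page,
				unsigned from, unsigned to)
	{
		struct inode *inode = page->mapping->host;
		loff_t pos = ((loff_t)page->index << PAGE_CACHE_SHIFT) + to;

		/* The page becomes uptodate only once the caller has filled
		 * in the hole that prepare_write left. */
		if (!PageUptodate(page))
			SetPageUptodate(page);
		if (pos > inode->i_size)
			i_size_write(inode, pos);
		set_page_dirty(page);
		return 0;
	}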
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: "Aneesh Kumar K.V" <aneesh.kumar@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|