|
git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6:
sysfs: Don't leak secdata when a sysfs_dirent is freed.
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
x86, fs: Fix x86 procfs stack information for threads on 64-bit
x86: Add reboot quirk for 3 series Mac mini
x86: Fix printk message typo in mtrr cleanup code
dma-debug: Fix compile warning with PAE enabled
x86/amd-iommu: Un__init function required on shutdown
x86/amd-iommu: Workaround for erratum 63
|
|
While refreshing my sysfs patches I noticed a leak in the secdata
implementation. We don't free the secdata when we free the
sysfs dirent.
This is a bug in 2.6.32-rc5 that we really should close.
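A minimal sketch of the idea, assuming the security data hangs off the
dirent's attribute structure (field names here are illustrative): release
the LSM context in the same path that frees the dirent.
	/* in release_sysfs_dirent(), next to the existing kfree() calls */
	if (sd->s_iattr && sd->s_iattr->ia_secdata)
		security_release_secctx(sd->s_iattr->ia_secdata,
					sd->s_iattr->ia_secdata_len);
	kfree(sd->s_iattr);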
Signed-off-by: Eric W. Biederman <ebiederm@aristanetworks.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch fixes two issues in the procfs stack information on
x86-64 Linux.
The 32-bit loader compat_do_execve did not store the stack
start (this was figured out by Alexey Dobriyan).
The stack information on an x86_64 kernel always shows 0 kbyte
stack usage, because the KSTK_ESP macro was not properly implemented
and always returned -1.
The new implementation now returns the right value.
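A sketch of the direction of the change (the exact macro body may differ
from the patch): derive the user stack pointer from the task's saved
register frame instead of hard-coding -1.
	/* old 64-bit definition: stack usage always computed as 0 */
	#define KSTK_ESP(tsk)	(-1)
	/* sketch of a real implementation: read the saved user %rsp */
	#define KSTK_ESP(tsk)	(task_pt_regs(tsk)->sp)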
Signed-off-by: Stefani Seibold <stefani@seibold.net>
Cc: Americo Wang <xiyou.wangcong@gmail.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <1257240160.4889.24.camel@wall-e>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
cfq-iosched: limit coop preemption
cfq-iosched: fix bad return value cfq_should_preempt()
backing-dev: bdi sb prune should be in the unregister path, not destroy
Fix bio_alloc() and bio_kmalloc() documentation
bio_put(): add bio_clone() to the list of functions in the comment
|
|
Adds missing initialization of newly allocated b-tree node buffers.
This prevents garbage data from being mixed into b-tree node blocks.
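A sketch of the kind of initialization meant here (placement in the
allocation path is approximate): zero the freshly allocated node buffer
before it is linked into the b-tree.
	/* newly allocated b-tree node buffer: clear any stale data
	 * left in the page before the node goes into the tree */
	memset(bh->b_data, 0, bh->b_size);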
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
|
|
When nilfs flushes out dirty data to reduce memory pressure, creation
of checkpoints is wrongly postponed. This bug causes irregular
checkpoint creation especially in small footprint systems.
To correct this issue, the timer for checkpoint creation has to be kept
running if a log writer does not create a checkpoint.
This patch makes that correction.
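Roughly, the idea (variable and field names here are illustrative, not
the ones used in the patch): when a flush completes without creating a
checkpoint, re-arm the timer instead of letting it stop.
	/* sketch: keep the checkpoint timer ticking if this log write
	 * was a plain flush that did not create a checkpoint */
	if (!created_checkpoint)		/* hypothetical flag */
		mod_timer(timer, jiffies + interval);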
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
|
|
Bruno Prémont and Bill Dunphy notified me that NILFS will reliably
hang on ARM-based targets.
I found this was caused by an underflow of the dirty pages counter: a
b-tree cache routine was marking a page dirty without adjusting the page
accounting information.
This fixes the dirty page accounting leak and resolves the hang on
ARM-based targets.
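A sketch of the nature of the fix (the exact call site is in the nilfs
b-tree node cache): dirty the page through a helper that also updates the
dirty-page accounting, rather than only setting the dirty bit.
	/* __set_page_dirty_nobuffers() sets PG_dirty *and* adjusts the
	 * zone/BDI dirty counters, unlike a bare SetPageDirty() */
	__set_page_dirty_nobuffers(page);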
Reported-by: Bruno Prémont <bonbons@linux-vserver.org>
Reported-by: Dunphy, Bill <WDunphy@tandbergdata.com>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Tested-by: Bruno Prémont <bonbons@linux-vserver.org>
Cc: stable <stable@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs:
9p: fix readdir corner cases
9p: fix readlink
9p: fix a small bug in readdir for long directories
|
|
This reverts commit d0646f7b636d067d715fab52a2ba9c6f0f46b0d7, as
requested by Eric Sandeen.
It can basically cause an ext4 filesystem to miss recovery (and thus get
mounted with errors) if the journal checksum does not match.
Quoth Eric:
"My hand-wavy hunch about what is happening is that we're finding a
bad checksum on the last partially-written transaction, which is
not surprising, but if we have a wrapped log and we're doing the
initial scan for head/tail, and we abort scanning on that bad
checksum, then we are essentially running an unrecovered filesystem.
But that's hand-wavy and I need to go look at the code.
We lived without journal checksums on by default until now, and at
this point they're doing more harm than good, so we should revert
the default-changing commit until we can fix it and do some good
power-fail testing with the fixes in place."
See
http://bugzilla.kernel.org/show_bug.cgi?id=14354
for all the gory details.
Requested-by: Eric Sandeen <sandeen@redhat.com>
Cc: Theodore Tso <tytso@mit.edu>
Cc: Alexey Fisher <bug-track@fisher-privat.net>
Cc: Maxim Levitsky <maximlevitsky@gmail.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Mathias Burén <mathias.buren@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The patch below also addresses a couple of other corner cases in readdir
seen with a large (e.g. 64k) msize. I'm not sure what people think of
my co-opting of fid->aux here. I'd be happy to rework if there's a better
way.
When the size of the user-supplied buffer passed to readdir is smaller
than the data returned in one go by the 9P read request, v9fs_dir_readdir()
currently discards the extra data so that, on the next call, a 9P read
request will be issued with offset < previous offset + bytes returned,
which violates the constraint described in paragraph 3 of the read(5)
description.
This patch preserves the leftover data in fid->aux for use in the next call.
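Roughly, the shape of the idea (struct and field names here are
illustrative): keep the unconsumed tail of the last 9P read reply on the
fid and drain it on the next call before issuing another read.
	struct p9_rdir {			/* hung off fid->aux */
		int head;			/* first unconsumed byte */
		int tail;			/* end of valid data */
		uint8_t buf[];			/* leftover reply data */
	};
	/* per readdir call: emit entries from buf[head..tail) first; only
	 * when the buffer is empty issue the next 9P read, at an offset
	 * equal to the previous offset plus the bytes actually consumed. */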
Signed-off-by: Jim Garlick <garlick@llnl.gov>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
|
|
I do not know if you've looked at the patch, but unfortunately it is
incorrect. A suggested better version is in this email (the old
version didn't work when the user-provided buffer was not long
enough: it incorrectly appended a null byte over the position of the
last character, and thus broke the contract of the readlink method).
However, I'm still not sure this is 100% the correct thing to do. I
think readlink is supposed to return the buffer without a trailing null
byte in all cases, but we do return a trailing null byte (even in the
old version); on the other hand, it is likely unspecified what is in
the remaining part of the buffer, so a null character may be fine there.
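A sketch of the corrected copy, with buflen being the size of the
caller's buffer (variable names are illustrative): copy at most buflen
bytes and do not write a terminating NUL over the last character.
	len = strlen(link_target);
	if (len > buflen)
		len = buflen;
	memcpy(buffer, link_target, len);
	retval = len;	/* readlink() reports the number of bytes placed */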
Signed-off-by: Martin Stava <martin.stava@gmail.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
|
|
Here is a proposed patch for a bug in readdir. Listing directories with
many files fails without this patch.
Signed-off-by: Martin Stava <martin.stava@gmail.com>
Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
|
|
Commit 451a9ebf accidentally broke bio_alloc() and bio_kmalloc() comments by
(almost) swapping them.
This patch fixes that, by placing the comments in the right place.
Signed-off-by: Alberto Bertogli <albertito@blitiri.com.ar>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
In bio_put()'s comment, add bio_clone() to the list of functions that can
give you a bio reference.
Signed-off-by: Alberto Bertogli <albertito@blitiri.com.ar>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
* 'for-linus' of git://git.kernel.org/pub/scm/fs/xfs/xfs:
xfs: fix xfs_quota remove error
xfs: free temporary cursor in xfs_dialloc
|
|
xfs_quota returns ENOSYS when the remove command is executed.
This is reproducible with the following steps:
# mount -t xfs -o uquota /dev/sda7 /mnt/mp1
# xfs_quota -x -c off -c remove
XFS_QUOTARM: Function not implemented.
The remove command is allowed during quotaoff, but xfs_fs_set_xstate()
checks whether quota is running, and it leads to ENOSYS.
To solve this problem, add a check for X_QUOTARM.
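A sketch of that check at the top of xfs_fs_set_xstate(), ahead of the
quota-is-running test (the uflags-to-internal-flags translation is
glossed over here, and the exact return convention may differ):
	if (op == Q_XQUOTARM) {
		/* removing the quota files is only valid while quota
		 * is off; truncate them instead of returning ENOSYS */
		if (XFS_IS_QUOTA_ON(mp))
			return -EINVAL;
		return -xfs_qm_scall_trunc_qfiles(mp, flags);
	}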
Signed-off-by: Ryota Yamauchi <r-yamauchi@vf.jp.nec.com>
Signed-off-by: Utako Kusaka <u-kusaka@wm.jp.nec.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Commit bd169565993b39b9b4b102cdac8b13e0a259ce2f seems
to have a slight regression where this code path:
if (!--searchdistance) {
/*
* Not in range - save last search
* location and allocate a new inode
*/
...
goto newino;
}
doesn't free the temporary cursor (tcur) that got dup'd in
this function.
This leaks an item in the xfs_btree_cur zone, and it's caught
on module unload:
===========================================================
BUG xfs_btree_cur: Objects remaining on kmem_cache_close()
-----------------------------------------------------------
It seems like maybe a single free at the end of the function might
be cleaner, but for now put a del_cursor right in this code block
similar to the handling in the rest of the function.
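Concretely, that amounts to something like this inside the quoted block
(the elided lines are unchanged):
	if (!--searchdistance) {
		/*
		 * Not in range - save last search
		 * location and allocate a new inode
		 */
		...
		xfs_btree_del_cursor(tcur, XFS_BTREE_NOERROR);
		goto newino;
	}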
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
backing-dev: ensure that a removed bdi no longer has super_block referencing it
block: use after free bug in __blkdev_get
block: silently error unsupported empty barriers too
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6
* 'sh/for-2.6.32' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
sh: Fix hugetlbfs dependencies for SH-3 && MMU configurations.
sh: Document uImage.bin target in archhelp.
sh: add uImage.bin target
sh: rsk7203 CONFIG_MTD=n fix
sh: Check for return_to_handler when unwinding the stack
sh: Build fix: define more __movmem* symbols
sh: __irq_entry annotate do_IRQ().
Fix up sh/powerpc conflicts in fs/Kconfig
|
|
* 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6:
NFSv4: The link() operation should return any delegation on the file
NFSv4: Fix two unbalanced put_rpccred() issues.
NFSv4: Fix a bug when the server returns NFS4ERR_RESOURCE
nfs: Panic when commit fails
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
[CIFS] Fixing to avoid invalid kfree() in cifs_get_tcp_session()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/ppc64: Use preempt_schedule_irq instead of preempt_schedule
powerpc: Minor cleanup to lib/Kconfig.debug
powerpc: Minor cleanup to sound/ppc/Kconfig
powerpc: Minor cleanup to init/Kconfig
powerpc: Limit memory hotplug support to PPC64 Book-3S machines
powerpc: Limit hugetlbfs support to PPC64 Book-3S machines
powerpc: Fix compile errors found by new ppc64e_defconfig
powerpc: Add a Book-3E 64-bit defconfig
powerpc/booke: Fix xmon single step on PowerPC Book-E
powerpc: Align vDSO base address
powerpc: Fix segment mapping in vdso32
powerpc/iseries: Remove compiler version dependent hack
powerpc/perf_events: Fix priority of MSR HV vs PR bits
powerpc/5200: Update defconfigs
drivers/serial/mpc52xx_uart.c: Use UPIO_MEM rather than SERIAL_IO_MEM
powerpc/boot/dts: drop obsolete 'fsl5200-clocking'
of: Remove nested function
mpc5200: support for the MAN mpc5200 based board mucmc52
mpc5200: support for the MAN mpc5200 based board uc101
|
|
* 'for-linus' of git://oss.sgi.com/xfs/xfs:
xfs: fix double IRELE in xfs_dqrele_inode
|
|
A particular fsfuzzer run caused an hfs file system to crash on mount.
This is due to a corrupted MDB extent record causing a miscalculation of
HFS_I(inode)->first_blocks for the extent tree. If the extent records are
zeroed out, it won't trigger the first_blocks special case. Instead it
falls through to the extent code which we're still in the middle of
initializing.
This patch catches the 0 size extent records, reports the corruption, and
fails the mount.
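A sketch of such a check at btree-open time (the error message and the
bail-out label are illustrative):
	/* a zeroed/corrupt MDB extent record yields first_blocks == 0 */
	if (!HFS_I(inode)->first_blocks) {
		printk(KERN_ERR "hfs: invalid btree extent records\n");
		goto fail;		/* fail the mount */
	}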
Reported-by: Ramon de Carvalho Valle <rcvalle@linux.vnet.ibm.com>
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
As found in <http://bugs.debian.org/550010>, hfsplus is using type u32
rather than sector_t for some sector number calculations.
In particular, hfsplus_get_block() does:
u32 ablock, dblock, mask;
...
map_bh(bh_result, sb, (dblock << HFSPLUS_SB(sb).fs_shift) + HFSPLUS_SB(sb).blockoffset + (iblock & mask));
I am not confident that I can find and fix all cases where a sector number
may be truncated. For now, avoid data loss by refusing to mount HFS+
volumes with more than 2^32 sectors (2TB).
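A sketch of the guard (variable names approximate; sizes in 512-byte
sectors): refuse to mount anything whose last sector would not fit in
32 bits.
	/* part_start/part_size describe the HFS+ volume in 512-byte
	 * sectors; beyond 2^32 sectors, u32 arithmetic would wrap */
	if ((u64)part_start + part_size > 0x100000000ULL) {
		printk(KERN_ERR "hfs: volumes larger than 2TB are not supported yet\n");
		return -EINVAL;
	}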
[akpm@linux-foundation.org: fix 32 and 64-bit issues]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Eric Sesterhenn <snakebyte@gmx.de>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Given such a long name, the kB count in /proc/meminfo's HardwareCorrupted
line is being shown too far right (it does align with x86_64's VmallocChunk
above, but I hope nobody will ever have that much corrupted!). Align it.
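In seq_printf() terms the fix is just a narrower field width for that
line; a sketch (the width and the variable feeding it are illustrative,
the real value comes from the memory-failure accounting):
	seq_printf(m, "HardwareCorrupted: %5lu kB\n", hwcorrupted_kb);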
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
The hugetlb dependencies presently depend on SUPERH && MMU while the
hugetlb page size definitions depend on CPU_SH4 or CPU_SH5. This
unfortunately allows SH-3 + MMU configurations to enable hugetlbfs
without a corresponding HPAGE_SHIFT definition, resulting in the build
blowing up.
As SH-3 doesn't support variable page sizes, we tighten up the
dependencies a bit to prevent hugetlbfs from being enabled. These days
we also have a shiny new SYS_SUPPORTS_HUGETLBFS, so switch to using
that rather than adding to the list of corner cases in fs/Kconfig.
Reported-by: Kristoffer Ericson <kristoffer.ericson@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
commit 0762b8bde9729f10f8e6249809660ff2ec3ad735
(from 14 months ago) introduced a use-after-free bug which has just
recently started manifesting in my md testing.
I tried git bisect to find out what caused the bug to start
manifesting, and it could have been the recent change to
blk_unregister_queue (48c0d4d4c04) but the results were inconclusive.
This patch certainly fixes my symptoms and looks correct as the two
calls are now in the same order as elsewhere in that function.
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
Otherwise, we have to wait for the server to recall it.
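A sketch of where this lands (the delegation helper name is approximate
for this kernel version): hand the delegation back before issuing the
LINK operation.
	/* in the NFSv4 link path, before the LINK RPC goes out */
	nfs_inode_return_delegation(inode);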
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Commits 29fba38b (nfs41: lease renewal) and fc01cea9 (nfs41: sequence
operation) introduce a couple of put_rpccred() calls on credentials for
which there is no corresponding get_rpccred().
See http://bugzilla.kernel.org/show_bug.cgi?id=14249
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
RFC 3530 states that when we receive the error NFS4ERR_RESOURCE, we are not
supposed to bump the sequence number on OPEN, LOCK, LOCKU, CLOSE, etc
operations. The problem is that we map that error into EREMOTEIO in the XDR
layer, and so the NFSv4 middle-layer routines like seqid_mutating_err(),
and nfs_increment_seqid() don't recognise it.
The fix is to defer the mapping until after the middle layers have
processed the error.
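Schematically (call sites and names approximate), the ordering after the
fix becomes:
	/* let the seqid bookkeeping see the raw NFSv4 status first ... */
	nfs_increment_open_seqid(status, sp->so_seqid);
	/* ... and only afterwards map it to a Linux errno */
	status = nfs4_map_errors(status);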
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Actually pass the NFS_FILE_SYNC option to the server to avoid a
panic in nfs_direct_write_complete() when a commit fails.
At the end of an nfs write, if the nfs commit fails, all the writes
will be rescheduled. They are supposed to be rescheduled as NFS_FILE_SYNC
writes, but the rpc_task structure is not completely initialized and so
the option is not passed. When the rescheduled writes complete, the
return indicates that they are NFS_UNSTABLE and we try to do another
commit. This leads to a panic because the commit data structure pointer
was set to NULL in the initial (failed) commit attempt.
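The essence of the fix, sketched (data stands for the write descriptor
being resent; the exact structure carrying the flag may differ): mark the
rescheduled writes as stable before resending them.
	/* rescheduled after a failed commit: ask for FILE_SYNC semantics
	 * so no further commit (and no reuse of the freed commit data)
	 * is needed */
	data->args.stable = NFS_FILE_SYNC;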
Signed-off-by: Terry Loftin <terry.loftin@hp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
* 'for-linus' of git://git.infradead.org/users/eparis/notify:
dnotify: ignore FS_EVENT_ON_CHILD
inotify: fix coalesce duplicate events into a single event in special case
inotify: deprecate the inotify kernel interface
fsnotify: do not set group for a mark before it is on the i_list
|
|
Fix a (small) memory leak in one of the error paths of the NFS mount
options parsing code.
Regression introduced in 2.6.30 by commit a67d18f (NFS: load the
rpc/rdma transport module automatically).
Reported-by: Yinghai Lu <yinghai@kernel.org>
Reported-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch fixes a null pointer exception in pipe_rdwr_open() which
generates the stack trace:
> Unable to handle kernel NULL pointer dereference at 0000000000000028 RIP:
> [<ffffffff802899a5>] pipe_rdwr_open+0x35/0x70
> [<ffffffff8028125c>] __dentry_open+0x13c/0x230
> [<ffffffff8028143d>] do_filp_open+0x2d/0x40
> [<ffffffff802814aa>] do_sys_open+0x5a/0x100
> [<ffffffff8021faf3>] sysenter_do_call+0x1b/0x67
The failure mode is triggered by an attempt to open an anonymous
pipe via /proc/pid/fd/* as exemplified by this script:
=============================================================
while : ; do
{ echo y ; sleep 1 ; } | { while read ; do echo z$REPLY; done ; } &
PID=$!
OUT=$(ps -efl | grep 'sleep 1' | grep -v grep |
{ read PID REST ; echo $PID; } )
OUT="${OUT%% *}"
DELAY=$((RANDOM * 1000 / 32768))
usleep $((DELAY * 1000 + RANDOM % 1000 ))
echo n > /proc/$OUT/fd/1 # Trigger defect
done
=============================================================
Note that the failure window is quite small and I could only
reliably reproduce the defect by inserting a small delay
in pipe_rdwr_open(). For example:
static int
pipe_rdwr_open(struct inode *inode, struct file *filp)
{
msleep(100);
mutex_lock(&inode->i_mutex);
Although the defect was observed in pipe_rdwr_open(), I think it
makes sense to replicate the change through all the pipe_*_open()
functions.
The core of the change is to verify that inode->i_pipe has not
been released before attempting to manipulate it. If inode->i_pipe
is no longer present, return ENOENT to indicate so.
The comment about potentially using atomic_t for i_pipe->readers
and i_pipe->writers has also been removed because it is no longer
relevant in this context. The inode->i_mutex lock must be used so
that inode->i_pipe can be dealt with correctly.
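A minimal sketch of the guarded open path (the same pattern applies to
pipe_read_open() and pipe_write_open()):
	static int
	pipe_rdwr_open(struct inode *inode, struct file *filp)
	{
		int ret = -ENOENT;
		mutex_lock(&inode->i_mutex);
		/* i_pipe may have been released while we waited */
		if (inode->i_pipe) {
			ret = 0;
			inode->i_pipe->readers++;
			inode->i_pipe->writers++;
		}
		mutex_unlock(&inode->i_mutex);
		return ret;
	}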
Signed-off-by: Earl Chew <earl_chew@agilent.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Mask off FS_EVENT_ON_CHILD in dnotify_handle_event(). Otherwise, when there
is more than one watch on a directory and dnotify_should_send_event()
succeeds, events with FS_EVENT_ON_CHILD set will trigger all watches and cause
spurious events.
This case was overlooked in commit e42e2773. The test program below
demonstrates the spurious events; a sketch of the masking fix follows it.
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
static void create_event(int s, siginfo_t* si, void* p)
{
printf("create\n");
}
static void delete_event(int s, siginfo_t* si, void* p)
{
printf("delete\n");
}
int main (void) {
struct sigaction action;
char *tmpdir, *file;
int fd1, fd2;
sigemptyset (&action.sa_mask);
action.sa_flags = SA_SIGINFO;
action.sa_sigaction = create_event;
sigaction (SIGRTMIN + 0, &action, NULL);
action.sa_sigaction = delete_event;
sigaction (SIGRTMIN + 1, &action, NULL);
# define TMPDIR "/tmp/test.XXXXXX"
tmpdir = malloc(strlen(TMPDIR) + 1);
strcpy(tmpdir, TMPDIR);
mkdtemp(tmpdir);
# define TMPFILE "/file"
file = malloc(strlen(tmpdir) + strlen(TMPFILE) + 1);
sprintf(file, "%s%s", tmpdir, TMPFILE);
fd1 = open (tmpdir, O_RDONLY);
fcntl(fd1, F_SETSIG, SIGRTMIN);
fcntl(fd1, F_NOTIFY, DN_MULTISHOT | DN_CREATE);
fd2 = open (tmpdir, O_RDONLY);
fcntl(fd2, F_SETSIG, SIGRTMIN + 1);
fcntl(fd2, F_NOTIFY, DN_MULTISHOT | DN_DELETE);
if (fork()) {
/* This triggers a create event */
creat(file, 0600);
/* This triggers a create and delete event (!) */
unlink(file);
} else {
sleep(1);
rmdir(tmpdir);
}
return 0;
}
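A sketch of the masking described at the top of this message, inside
dnotify_handle_event()'s walk over the directory's dnotify structures
(surrounding code omitted):
	__u32 test_mask = event->mask & ~FS_EVENT_ON_CHILD;
	/* skip watches whose mask does not cover the actual event */
	if ((dn->dn_mask & test_mask) == 0)
		continue;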
Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
If we do rename a dir entry, like this:
rename("/tmp/ino7UrgoJ.rename1", "/tmp/ino7UrgoJ.rename2")
rename("/tmp/ino7UrgoJ.rename2", "/tmp/ino7UrgoJ")
The duplicate events should be coalesced into a single event, but these two
events are not coalesced, due to a bad check in event_compare(): it cannot
match the two NULL inodes as the same event.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
fsnotify_add_mark is supposed to add a mark to the g_list and i_list and to
set the group and inode for the mark. fsnotify_destroy_mark_by_entry uses
the fact that ->group != NULL to know if this group should be destroyed or
if it's already been done.
But fsnotify_add_mark sets the group and inode before it actually adds the
mark to the i_list and g_list. This can result in a race in inotify; it
requires 3 threads:
Task 1: sys_inotify_add_watch("file")
        inotify_update_watch()
        inotify_new_watch()
        inotify_add_to_idr()
           ^--- returns wd = [a]
Task 2: sys_inotify_add_watch("file")
        inotify_update_watch()
        inotify_new_watch()
        inotify_add_to_idr()
        fsnotify_add_mark()
           ^--- returns wd = [b]
        returns to userspace
Task 3: sys_inotify_rm_watch([a])
        inotify_idr_find([a])
           ^--- gives us the pointer from task 1
Task 1: fsnotify_add_mark()
           ^--- this is going to set the mark->group and mark->inode fields,
                but will return -EEXIST because of the race with [b]
Task 3: fsnotify_destroy_mark()
           ^--- since ->group != NULL we call back
                into inotify_freeing_mark() which calls
                inotify_remove_from_idr([a])
Task 1: since fsnotify_add_mark() failed we call:
        inotify_remove_from_idr([a]) <------ WHOOPS, it's not in the idr; this
                                             could have been any entry added later!
The fix is to make sure we don't set mark->group until we are sure the mark is
on the inode and fsnotify_add_mark will return success.
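In outline (the list-insertion helper here is purely illustrative), the
safe ordering becomes:
	int fsnotify_add_mark(struct fsnotify_mark_entry *entry,
			      struct fsnotify_group *group, struct inode *inode)
	{
		int ret;
		/* illustrative helper: put the mark on the i_list/g_list,
		 * failing (e.g. -EEXIST) without touching the mark */
		ret = add_mark_to_lists(entry, group, inode);
		if (ret)
			return ret;	/* ->group and ->inode still NULL */
		/* publish only once the mark is really on both lists */
		entry->group = group;
		entry->inode = inode;
		return 0;
	}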
Signed-off-by: Eric Paris <eparis@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/teigland/dlm
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/dlm:
dlm: fix socket fd translation
dlm: fix lowcomms_connect_node for sctp
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
Btrfs: always pin metadata in discard mode
Btrfs: enable discard support
Btrfs: add -o discard option
Btrfs: properly wait log writers during log sync
Btrfs: fix possible ENOSPC problems with truncate
Btrfs: fix btrfs acl #ifdef checks
Btrfs: streamline tree-log btree block writeout
Btrfs: avoid tree log commit when there are no changes
Btrfs: only write one super copy during fsync
|
|
sysfs_notify_dirent is a simple atomic operation that can be used to
alert user-space that new data can be read from a sysfs attribute.
Unfortunately it cannot currently be called from non-process context,
because it takes sysfs_open_dirent_lock with a plain spin_lock, and that
lock is elsewhere taken with interrupts enabled.
So change all lockers of sysfs_open_dirent_lock to disable interrupts,
thus making sysfs_notify_dirent safe to be called from non-process
context (as drivers/md does in md_safemode_timeout).
sysfs_get_open_dirent is (documented as being) only called from
process context, so it uses spin_lock_irq. Other places
use spin_lock_irqsave.
The usage for sysfs_notify_dirent in md_safemode_timeout was
introduced in 2.6.28, so this patch is suitable for that and more
recent kernels.
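A sketch of the resulting notify path (the open-dirent field layout is
abbreviated):
	void sysfs_notify_dirent(struct sysfs_dirent *sd)
	{
		struct sysfs_open_dirent *od;
		unsigned long flags;
		/* irqsave so this is safe from timer/interrupt context */
		spin_lock_irqsave(&sysfs_open_dirent_lock, flags);
		od = sd->s_attr.open;
		if (od) {
			atomic_inc(&od->event);
			wake_up_interruptible(&od->poll);
		}
		spin_unlock_irqrestore(&sysfs_open_dirent_lock, flags);
	}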
Reported-by: Joel Andres Granados <jgranado@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
As device_move() and kobject_move() both handle a NULL destination,
sysfs_move_dir() should do this as well (again) and fall back to
sysfs_root in that case.
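A sketch of the fallback at the top of sysfs_move_dir() (argument names
approximate):
	/* a NULL destination means "move to the sysfs root" */
	new_parent_sd = (new_parent_kobj && new_parent_kobj->sd) ?
			new_parent_kobj->sd : &sysfs_root;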
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Phil Carmody <ext-phil.2.carmody@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
We have an optimization in btrfs to allow blocks to be
immediately freed if they were allocated in this transaction and never
written. Otherwise they are pinned and freed when the transaction
commits.
This isn't optimal for discard mode because immediately freeing
them means immediately discarding them. It is better to give the
block to the pinning code and let the (slow) discard happen later.
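Very roughly, and with the surrounding logic only sketched (the actual
change sits in the extent-freeing path), the decision becomes:
	/* blocks allocated in this transaction and never written could be
	 * freed (and thus discarded) right away; with -o discard we prefer
	 * to pin them so the slow discard happens at commit time instead */
	if (btrfs_test_opt(root, DISCARD))
		pin = 1;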
Signed-off-by: Chris Mason <chris.mason@oracle.com>
|
|
The discard support code in btrfs is currently guarded by ifdefs for
BIO_RW_DISCARD, which is never defined, as it's the name of an enum
member. Just remove the useless ifdefs to actually enable the code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
|
|
Enabling discard by default is not a good idea given the trim speed
of the SSD prototypes we've seen and the characteristics of many high-end
arrays. Turn discards off by default and require the -o discard option
to enable them.
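A sketch of the option plumbing (token and flag names follow the usual
btrfs Opt_*/btrfs_set_opt() pattern; treat them as assumptions):
	/* in the mount-option parser */
	case Opt_discard:
		btrfs_set_opt(info->mount_opt, DISCARD);
		break;
	/* call sites that issue discards then check
	 * btrfs_test_opt(root, DISCARD) before doing so */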
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
|
|
A recent fsync optimization made btrfs_sync_log skip calling
wait_for_writer in the single log writer case. This is incorrect
since the writer count can also be increased by btrfs_pin_log.
Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
|
|
There's a problem where we don't do any space reservation for truncates, which
can cause an oops: we are allowed to go off into the weeds a bit because we
don't account for the delalloc bytes that are created as a result of the
truncate.
Signed-off-by: Josef Bacik <jbacik@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
|
|
|