2007-07-13iop13xx: surface the iop13xx adma units to the iop-adma driverDan Williams
Adds the platform device definitions and the architecture specific support routines (i.e. register initialization and descriptor formats) for the iop-adma driver.

Changelog:
* added 'descriptor pool size' to the platform data
* add base support for buffer sizes larger than 16MB (hw max)
* build error fix from Kirill A. Shutemov
* rebase for async_tx changes
* add interrupt support
* do not call platform register macros in driver code
* remove unnecessary ARM assembly statement
* checkpatch.pl fixes
* gpl v2 only correction

Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-07-13dmaengine: driver for the iop32x, iop33x, and iop13xx raid enginesDan Williams
The Intel(R) IOP series of i/o processors integrate an Xscale core with raid acceleration engines. The capabilities per platform are:

iop219:
 (2) copy engines
iop321:
 (2) copy engines
 (1) xor and block fill engine
iop33x:
 (2) copy and crc32c engines
 (1) xor, xor zero sum, pq, pq zero sum, and block fill engine
iop34x (iop13xx):
 (2) copy, crc32c, xor, xor zero sum, and block fill engines
 (1) copy, crc32c, xor, xor zero sum, pq, pq zero sum, and block fill engine

The driver supports the features of the async_tx api:
* asynchronous notification of operation completion
* implicit (interrupt triggered) handling of inter-channel transaction dependencies

The driver adapts to the platform it is running on by two methods:
1/ #include <asm/arch/adma.h> which defines the hardware specific iop_chan_* and iop_desc_* routines as a series of static inline functions
2/ The private platform data attached to the platform_device defines the capabilities of the channels

20070626: Callbacks are run in a tasklet. Given the recent discussion on LKML about killing tasklets in favor of workqueues I did a quick conversion of the driver. Raid5 resync performance dropped from 50MB/s to 30MB/s, so the tasklet implementation remains until a generic softirq interface is available.

Changelog:
* fixed a slot allocation bug in do_iop13xx_adma_xor that caused too few slots to be requested, eventually leading to data corruption
* enabled the slot allocation routine to attempt to free slots before returning -ENOMEM
* switched the cleanup routine to solely use the software chain and the status register to determine if a descriptor is complete. This is necessary to support other IOP engines that do not have status writeback capability
* make the driver iop generic
* modified the allocation routines to understand allocating a group of slots for a single operation
* added a null xor initialization operation for the xor only channel on iop3xx
* support xor operations on buffers larger than the hardware maximum
* split the do_* routines into separate prep, src/dest set, submit stages
* added async_tx support (dependent operations initiation at cleanup time)
* simplified group handling
* added interrupt support (callbacks via tasklets)
* brought the pending depth inline with ioat (i.e. 4 descriptors)
* drop dma mapping methods, suggested by Chris Leech
* don't use inline in C files, Adrian Bunk
* remove static tasklet declarations
* make iop_adma_alloc_slots easier to read and remove chances for a corrupted descriptor chain
* fix locking bug in iop_adma_alloc_chan_resources, Benjamin Herrenschmidt
* convert capabilities over to dma_cap_mask_t
* fixup sparse warnings
* add descriptor flush before iop_chan_enable
* checkpatch.pl fixes
* gpl v2 only correction
* move set_src, set_dest, submit to async_tx methods
* move group_list and phys to async_tx

Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
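As a rough illustration of method 2/ above, the per-channel platform data attached to each platform_device could look something like the sketch below. This is a hedged sketch assembled from the description in this log, not necessarily the driver's exact definition; the field names are assumptions.

    /* sketch: platform data describing one ADMA channel to the iop-adma driver */
    struct iop_adma_platform_data {
        int            hw_id;      /* which ADMA unit on this IOP */
        dma_cap_mask_t cap_mask;   /* copy/xor/zero-sum/fill capabilities */
        size_t         pool_size;  /* 'descriptor pool size' from the changelog above */
    };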
2007-07-13md: handle_stripe5 - add request/completion logic for async read opsDan Williams
When a read bio is attached to the stripe and the corresponding block is marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy the data from the stripe cache to the bio buffer. handle_stripe flags the blocks to be operated on with the R5_Wantfill flag. If new read requests arrive while raid5_run_ops is running they will not be handled until handle_stripe is scheduled to run again.

Changelog:
* cleanup to_read and to_fill accounting
* do not fail reads that have reached the cache

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
2007-07-13md: handle_stripe5 - add request/completion logic for async compute opsDan Williams
handle_stripe will compute a block when a backing disk has failed, or when it determines it can save a disk read by computing the block from all the other up-to-date blocks.

Previously a block would be computed under the lock and subsequent logic in handle_stripe could use the newly up-to-date block. With the raid5_run_ops implementation the compute operation is carried out at a later time outside the lock. To preserve the old functionality we take advantage of the dependency chain feature of async_tx to flag the block as R5_Wantcompute and then let other parts of handle_stripe operate on the block as if it were up-to-date. raid5_run_ops guarantees that the block will be ready before it is used in another operation.

However, this only works in cases where the compute and the dependent operation are scheduled at the same time. If a previous call to handle_stripe sets the R5_Wantcompute flag there is no facility to pass the async_tx dependency chain across successive calls to raid5_run_ops. The req_compute variable protects against this case.

Changelog:
* remove the req_compute BUG_ON

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
2007-07-13md: raid5_run_ops - run stripe operations outside sh->lockDan Williams
When the raid acceleration work was proposed, Neil laid out the following attack plan:
1/ move the xor and copy operations outside spin_lock(&sh->lock)
2/ find/implement an asynchronous offload api

The raid5_run_ops routine uses the asynchronous offload api (async_tx) and the stripe_operations member of a stripe_head to carry out xor+copy operations asynchronously, outside the lock.

To perform operations outside the lock a new set of state flags is needed to track new requests, in-flight requests, and completed requests. In this new model handle_stripe is tasked with scanning the stripe_head for work, updating the stripe_operations structure, and finally dropping the lock and calling raid5_run_ops for processing. The following flags outline the requests that handle_stripe can make of raid5_run_ops:

STRIPE_OP_BIOFILL - copy data into request buffers to satisfy a read request
STRIPE_OP_COMPUTE_BLK - generate a missing block in the cache from the other blocks
STRIPE_OP_PREXOR - subtract existing data as part of the read-modify-write process
STRIPE_OP_BIODRAIN - copy data out of request buffers to satisfy a write request
STRIPE_OP_POSTXOR - recalculate parity for new data that has entered the cache
STRIPE_OP_CHECK - verify that the parity is correct
STRIPE_OP_IO - submit i/o to the member disks (note this was already performed outside the stripe lock, but it made sense to add it as an operation type)

The flow is:
1/ handle_stripe sets STRIPE_OP_* in sh->ops.pending
2/ raid5_run_ops reads sh->ops.pending, sets sh->ops.ack, and submits the operation to the async_tx api
3/ async_tx triggers the completion callback routine to set sh->ops.complete and release the stripe
4/ handle_stripe runs again to finish the operation and optionally submit new operations that were previously blocked

Note this patch just defines raid5_run_ops; subsequent commits (one per major operation type) modify handle_stripe to take advantage of this routine.

Changelog:
* removed ops_complete_biodrain in favor of ops_complete_postxor and ops_complete_write
* removed the raid5_run_ops workqueue
* call bi_end_io for reads in ops_complete_biofill, saves a call to handle_stripe
* explicitly handle the 2-disk raid5 case (xor becomes memcpy), Neil Brown
* fix race between async engines and bi_end_io call for reads, Neil Brown
* remove unnecessary spin_lock from ops_complete_biofill
* remove test_and_set/test_and_clear BUG_ONs, Neil Brown
* remove explicit interrupt handling for channel switching, this feature was absorbed (i.e. it is now implicit) by the async_tx api
* use return_io in ops_complete_biofill

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
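A condensed sketch of the dispatch side of this flow is shown below. It is illustrative only; the ops_run_* helper names are assumptions derived from the flag list above, and the real routine also counts outstanding work and chains dependent operations through async_tx.

    /* sketch: hand each requested operation to its async_tx-backed helper */
    static void raid5_run_ops(struct stripe_head *sh, unsigned long pending)
    {
        if (test_bit(STRIPE_OP_BIOFILL, &pending))
            ops_run_biofill(sh);     /* cache -> read bios */
        if (test_bit(STRIPE_OP_COMPUTE_BLK, &pending))
            ops_run_compute(sh);     /* rebuild a missing block */
        if (test_bit(STRIPE_OP_PREXOR, &pending))
            ops_run_prexor(sh);      /* subtract old data (rmw) */
        if (test_bit(STRIPE_OP_BIODRAIN, &pending))
            ops_run_biodrain(sh);    /* write bios -> cache */
        if (test_bit(STRIPE_OP_POSTXOR, &pending))
            ops_run_postxor(sh);     /* recompute parity */
        if (test_bit(STRIPE_OP_CHECK, &pending))
            ops_run_check(sh);       /* verify parity */
        if (test_bit(STRIPE_OP_IO, &pending))
            ops_run_io(sh);          /* issue member-disk i/o */
    }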
2007-07-13raid5: refactor handle_stripe5 and handle_stripe6 (v3)Dan Williams
handle_stripe5 and handle_stripe6 have very deep logic paths handling the various states of a stripe_head. By introducing the 'stripe_head_state' and 'r6_state' objects, large portions of the logic can be moved to sub-routines. 'struct stripe_head_state' consumes all of the automatic variables that previously stood alone in handle_stripe5,6. 'struct r6_state' contains the handle_stripe6 specific variables like p_failed and q_failed.

One of the nice side effects of the 'stripe_head_state' change is that it allows for further reductions in code duplication between raid5 and raid6. The following new routines are shared between raid5 and raid6:
    handle_completed_write_requests
    handle_requests_to_failed_array
    handle_stripe_expansion

Changes:
* v2: fixed 'conf->raid_disk-1' for the raid6 'handle_stripe_expansion' path
* v3: removed the unused 'dirty' field from struct stripe_head_state
* v3: coalesced open coded bi_end_io routines into return_io()

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
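For orientation, the two state objects might be shaped roughly as follows. This is a sketch assembled from the description above; any field not named in this log is an assumption.

    /* sketch: per-pass scratch state formerly held in automatic variables */
    struct stripe_head_state {
        int syncing, expanding, expanded;
        int locked, uptodate, to_read, to_write, to_fill;
        int compute, req_compute, non_overwrite;
        int failed, failed_num;
    };

    /* sketch: raid6-only additions, e.g. the state of the second parity disk */
    struct r6_state {
        int p_failed, q_failed;
    };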
2007-07-13async_tx: add the async_tx apiDan Williams
The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is written to the api can optimize for asynchronous operation and the api will fit the chain of operations to the available offload resources.

    I imagine that any piece of ADMA hardware would register with the
    'async_*' subsystem, and a call to async_X would be routed as
    appropriate, or be run in-line. - Neil Brown

async_tx exploits the capabilities of struct dma_async_tx_descriptor to provide an api of the following general format:

    struct dma_async_tx_descriptor *
    async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
                      dma_async_tx_callback cb_fn, void *cb_param)
    {
        struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
        struct dma_device *device = chan ? chan->device : NULL;
        int int_en = cb_fn ? 1 : 0;
        struct dma_async_tx_descriptor *tx = device ?
            device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

        if (tx) { /* run <operation> asynchronously */
            ...
            tx->tx_set_dest(addr, tx, index);
            ...
            tx->tx_set_src(addr, tx, index);
            ...
            async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
        } else { /* run <operation> synchronously */
            ...
            <operation>
            ...
            async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
        }

        return tx;
    }

async_tx_find_channel() returns a capable channel from its pool. The channel pool is organized as a per-cpu array of channel pointers. The async_tx_rebalance() routine is tasked with managing these arrays. In the uniprocessor case async_tx_rebalance() tries to spread responsibility evenly over channels of similar capabilities. For example, if there are two copy+xor channels, one will handle copy operations and the other will handle xor. In the SMP case async_tx_rebalance() attempts to spread the operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor channel0 while cpu1 gets copy channel1 and xor channel1. When a dependency is specified async_tx_find_channel defaults to keeping the operation on the same channel. An xor->copy->xor chain will stay on one channel if it supports both operation types, otherwise the transaction will transition between a copy and an xor resource.

Currently the raid5 implementation in the MD raid456 driver has been converted to the async_tx api. A driver for the offload engines on the Intel Xscale series of I/O processors, iop-adma, is provided in a later commit. With the iop-adma driver and async_tx, raid456 is able to offload copy, xor, and xor-zero-sum operations to hardware engines.

On iop342 tiobench showed higher throughput for sequential writes (20 - 30% improvement) and sequential reads to a degraded array (40 - 55% improvement). For the other cases performance was roughly equal, +/- a few percentage points. On an x86-smp platform the performance of the async_tx implementation (in synchronous mode) was also +/- a few percentage points of the original implementation. According to 'top' on iop342 CPU utilization drops from ~50% to ~15% during a 'resync' while the speed according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.

The tiobench command line used for testing was: tiobench --size 2048 --block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available

Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will fire at operation completion time and the callback will occur in a tasklet. If the channel does not support interrupts then a live polling wait will be performed
* the api is written as a dmaengine client that requests all available channels
* In support of dependencies the api implicitly schedules channel-switch interrupts. The interrupt triggers the cleanup tasklet which causes pending operations to be scheduled on the next channel
* Xor engines treat an xor destination address differently than a software xor routine. To the software routine the destination address is an implied source, whereas engines treat it as a write-only destination. This patch modifies the xor_blocks routine to take an explicit destination address to mirror the hardware.

Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
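To make the client side concrete, here is a minimal usage sketch written against the general format shown above: a copy followed by an xor that depends on it. The page pointers, source count, callback, and function name are placeholders, and the async_memcpy/async_xor argument order is assumed from this series' api description rather than quoted from it.

    /* sketch: schedule a copy, then an xor that depends on its completion */
    static void chain_copy_then_xor(struct page *dest, struct page *src,
                                    struct page **srcs, int src_cnt,
                                    dma_async_tx_callback done, void *ctx)
    {
        struct dma_async_tx_descriptor *tx;

        tx = async_memcpy(dest, src, 0, 0, PAGE_SIZE, 0, NULL, NULL, NULL);
        tx = async_xor(dest, srcs, 0, src_cnt, PAGE_SIZE,
                       ASYNC_TX_XOR_DROP_DEST | ASYNC_TX_ACK, /* dest is write-only */
                       tx, done, ctx);
        async_tx_issue_pending_all();   /* kick the engines (or it already ran sync) */
    }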
2007-07-13xor: make 'xor_blocks' a library routine for use with async_txDan Williams
The async_tx api tries to use a dma engine for an operation, but will fall back to an optimized software routine otherwise. Xor support is implemented using the raid5 xor routines. For organizational purposes this routine is moved to a common area.

The following fixes are also made:
* rename xor_block => xor_blocks, suggested by Adrian Bunk
* ensure that xor.o initializes before md.o in the built-in case
* checkpatch.pl fixes
* mark calibrate_xor_blocks __init, Adrian Bunk

Cc: Adrian Bunk <bunk@stusta.de>
Cc: NeilBrown <neilb@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
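The resulting library entry point is, roughly, the prototype below (a sketch implied by the rename and the explicit-destination change described earlier; exact parameter names may differ):

    /* xor 'count' source buffers of 'bytes' each into 'dest' */
    void xor_blocks(unsigned int count, unsigned int bytes,
                    void *dest, void **srcs);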
2007-07-13dmaengine: make clients responsible for managing channelsDan Williams
The current implementation assumes that a channel will only be used by one client at a time. In order to enable channel sharing the dmaengine core is changed to a model where clients subscribe to channel-available-events. Instead of tracking how many channels a client wants and how many it has received the core just broadcasts the available channels and lets the clients optionally take a reference. The core learns about the clients' needs at dma_event_callback time.

In support of multiple operation types, clients can specify a capability mask to only be notified of channels that satisfy a certain set of capabilities.

Changelog:
* removed DMA_TX_ARRAY_INIT, no longer needed
* dma_client_chan_free -> dma_chan_release: switch to global reference counting only at device unregistration time, before it was also happening at client unregistration time
* clients now return dma_state_client to dmaengine (ack, dup, nak)
* checkpatch.pl fixes
* fixup merge with git-ioat

Cc: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
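A hedged sketch of what a subscribing client looks like under this model, using the capability mask and the ack/dup/nak return described above. The callback, client, and init names are illustrative, and the dmaengine symbols (event enum values, dma_cap_set, dma_async_client_register) are assumed from the description rather than quoted from the patch:

    /* sketch: subscribe to memcpy-capable channels and ack the ones we keep */
    static enum dma_state_client
    example_event(struct dma_client *client, struct dma_chan *chan,
                  enum dma_state state)
    {
        if (state == DMA_RESOURCE_AVAILABLE)
            return DMA_ACK;   /* take a reference on this channel */
        return DMA_NAK;       /* otherwise decline (DMA_DUP if already held) */
    }

    static struct dma_client example_client = {
        .event_callback = example_event,
    };

    static void example_register(void)
    {
        dma_cap_set(DMA_MEMCPY, example_client.cap_mask);
        dma_async_client_register(&example_client);
    }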
2007-07-13dmaengine: refactor dmaengine around dma_async_tx_descriptorDan Williams
The current dmaengine interface defines multiple routines per operation, i.e. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page etc. Adding more operation types (xor, crc, etc) to this model would result in an unmanageable number of method permutations.

    Are we really going to add a set of hooks for each DMA engine
    whizbang feature? - Jeff Garzik

The descriptor creation process is refactored using the new common dma_async_tx_descriptor structure. Instead of per driver do_<operation>_<dest>_to_<src> methods, drivers integrate dma_async_tx_descriptor into their private software descriptor and then define a 'prep' routine per operation. The prep routine allocates a descriptor and ensures that the tx_set_src, tx_set_dest, tx_submit routines are valid. Descriptor creation and submission becomes:

    struct dma_device *dev;
    struct dma_chan *chan;
    struct dma_async_tx_descriptor *tx;

    tx = dev->device_prep_dma_<operation>(chan, len, int_flag)
    tx->tx_set_src(dma_addr_t, tx, index /* for multi-source ops */)
    tx->tx_set_dest(dma_addr_t, tx, index)
    tx->tx_submit(tx)

In addition to the refactoring, dma_async_tx_descriptor also lays the groundwork for defining cross-channel-operation dependencies, and a callback facility for asynchronous notification of operation completion.

Changelog:
* drop dma mapping methods, suggested by Chris Leech
* fix ioat_dma_dependency_added, also caught by Andrew Morton
* fix dma_sync_wait, change from Andrew Morton
* uninline large functions, change from Andrew Morton
* add tx->callback = NULL to dmaengine calls to interoperate with async_tx calls
* hookup ioat_tx_submit
* convert channel capabilities to a 'cpumask_t like' bitmap
* removed DMA_TX_ARRAY_INIT, no longer needed
* checkpatch.pl fixes
* make set_src, set_dest, and tx_submit descriptor specific methods
* fixup git-ioat merge
* move group_list and phys to dma_async_tx_descriptor

Cc: Jeff Garzik <jeff@garzik.org>
Cc: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
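Building on the snippet above, the callback facility and the synchronous wait mentioned in the changelog would be used roughly as sketched here. Variable and function names are placeholders, and the dma_sync_wait() signature is an assumption based on the changelog entry:

    /* sketch: attach an optional completion callback, then submit and wait */
    static void example_copy(struct dma_device *dev, struct dma_chan *chan,
                             dma_addr_t dst, dma_addr_t src, size_t len)
    {
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        tx = dev->device_prep_dma_memcpy(chan, len, 0);
        /* error handling (tx == NULL) omitted for brevity */
        tx->tx_set_src(src, tx, 0);
        tx->tx_set_dest(dst, tx, 0);
        tx->callback = NULL;            /* or an asynchronous notification routine */
        cookie = tx->tx_submit(tx);

        dma_sync_wait(chan, cookie);    /* block until the engine finishes */
    }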
2007-07-08sis5513: adding PCI-IDUwe Koziolek
The SiS966 has one additional PCI-ID, 1180. If the chipset is using this PCI-ID, the primary channel is connected to the first PATA-port. The secondary channel is connected to SATA-ports in IDE emulation mode. The legacy IO-ports are used. Adding the PCI-ID to pata_sis is not sufficient, because the legacy driver in drivers/ide is initialized before pata_sis.

Signed-off-by: Uwe Koziolek <uwe.koziolek@gmx.net>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
2007-07-07include/linux/kallsyms.h must #include <linux/errno.h>Adrian Bunk
This patch fixes the following 2.6.22 regression with CONFIG_KALLSYMS=n:

<-- snip -->
...
  CC      arch/m32r/kernel/traps.o
In file included from /home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/arch/m32r/kernel/traps.c:14:
/home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/include/linux/kallsyms.h: In function 'lookup_symbol_name':
/home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/include/linux/kallsyms.h:66: error: 'ERANGE' undeclared (first use in this function)
/home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/include/linux/kallsyms.h:66: error: (Each undeclared identifier is reported only once
/home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/include/linux/kallsyms.h:66: error: for each function it appears in.)
/home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/include/linux/kallsyms.h: In function 'lookup_symbol_attrs':
/home/bunk/linux/kernel-2.6/linux-2.6.22-rc6-mm1/include/linux/kallsyms.h:71: error: 'ERANGE' undeclared (first use in this function)
make[2]: *** [arch/m32r/kernel/traps.o] Error 1
<-- snip -->

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-06Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linusLinus Torvalds
* 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus:
  [MIPS] Fix scheduling latency issue on 24K, 34K and 74K cores
  [MIPS] Add macros to encode processor revisions.
  [MIPS] RM7000: Enable ICACHE_REFILLS_WORKAROUND_WAR.
  [MIPS] SMTC: Fix cut'n'paste bug in Kconfig.debug
  [MIPS] Change libgcc-style functions from lib-y to obj-y
  [MIPS] Fix timer/performance interrupt detection
  [MIPS] AP/SP: Avoid triggering the 34K E125 performance issue
  [MIPS] 64-bit TO_PHYS_MASK macro for RM9000 processors
2007-07-06i386: es7000 build breakage fixVivek Goyal
o Commit 1833d6bc72893265f22addd79cf52e6987496e0f broke the build if compiled with CONFIG_ES7000=y and CONFIG_X86_GENERICARCH=n:

  arch/i386/kernel/built-in.o(.init.text+0x4fa9): In function `acpi_parse_madt':
  : undefined reference to `acpi_madt_oem_check'
  arch/i386/kernel/built-in.o(.init.text+0x7406): In function `smp_read_mpc':
  : undefined reference to `mps_oem_check'
  arch/i386/kernel/built-in.o(.init.text+0x8990): In function `connect_bsp_APIC':
  : undefined reference to `enable_apic_mode'
  make: *** [.tmp_vmlinux1] Error 1

o Fix the build issue. Provided the definitions of missing functions.
o Don't have ES7000 machine. Only compile tested.

Cc: Len Brown <lenb@kernel.org>
Cc: Natalie Protasevich <protasnb@gmail.com>
Cc: Roland Dreier <rolandd@cisco.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-06[MIPS] Fix scheduling latency issue on 24K, 34K and 74K coresRalf Baechle
The idle loop goes to sleep using the WAIT instruction if !need_resched(). This suffers from a race condition: if, just after need_resched() has returned 0, an interrupt sets TIF_NEED_RESCHED, we have already completed the test and go to sleep anyway. This would be trivial to fix by just disabling interrupts during that sequence, as in:

    local_irq_disable();
    if (!need_resched())
        __asm__("wait");
    local_irq_enable();

but the processor architecture leaves it undefined whether a processor calling WAIT with interrupts disabled will ever restart its pipeline, and indeed some processors have made use of the freedom provided by the architecture definition. This has been resolved and the Config7.WII bit indicates that the use of WAIT is safe on 24K, 24KE and 34K cores. It also is safe on 74K starting with revision 2.1.0, so enable the use of WAIT with interrupts disabled for 74K based on a c0_prid of at least that.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-07-06[MIPS] Add macros to encode processor revisions.Ralf Baechle
Older processors used to encode the processor version and revision in two 4-bit bitfields, the 4K seems to simply count up, and even newer MTI cores have switched to using the 8 bits as a 3:3:2 bitfield with the last field as the patch number.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
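The new-style encoding can be captured in a helper along these lines (a sketch of the 3:3:2 split described above; the actual macro name in the tree may differ):

    /* major:minor:patch packed into the 8-bit revision field as 3:3:2 */
    #define PRID_REV_ENCODE_332(major, minor, patch) \
        ((major) << 5 | (minor) << 2 | (patch))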
2007-07-06[MIPS] RM7000: Enable ICACHE_REFILLS_WORKAROUND_WAR.Ralf Baechle
The RM7000 processors and the E9000 cores have a bug (though PMC-Sierra opposes it being called that) where invalid instructions in the same I-cache line as the instructions being fetched may cause spurious exceptions. The workaround for this was only enabled for E9000 cores; enable it also for all RM7000-based platforms.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-07-06[MIPS] 64-bit TO_PHYS_MASK macro for RM9000 processorsAndrew Sharp
Signed-off-by: Andrew Sharp <tigerand@gmail.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-07-05 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input (Linus Torvalds)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
  Input: document some of keycodes
  Input: add a new EV_SW SW_RADIO event, for radio switches on laptops
  Input: serio - take drv_mutex in serio_cleanup()
  Input: atkbd - use printk_ratelimit for spurious ACK messages
  Input: atkbd - throttle LED switching
  Input: i8042 - add HP Pavilion ZT1000 to the MUX blacklist
2007-07-05 Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds)
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
  [POWERPC] Update defconfigs
  [POWERPC] Uninline and export virq_to_hw() for the pasemi_mac driver
  [POWERPC] Fix PMI breakage in cbe_cbufreq driver
  [POWERPC] Disable old EMAC driver in arch/powerpc
2007-07-04[MIPS] Add whitelists for checksyscalls.shAtsushi Nemoto
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-07-04[MIPS] die(): Properly declare as non-returningMaciej W. Rozycki
This marks the declaration of die() correctly, removing "control reaches end of non-void function" warnings from non-void functions that die() at the end. Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
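For reference, marking the declaration amounts to something like the following sketch (the exact argument list and the kernel's preferred noreturn annotation macro may differ):

    extern void die(const char *str, struct pt_regs *regs)
        __attribute__((noreturn));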
2007-07-04[MIPS] Fix include wrapper symbol definitions in IP32 code.Kumba
Some IP35 defines snuck into some IP32-specific code during the DMA re-write. Signed-off-by: Joshua Kinard <kumba@gentoo.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-07-03Blackfin arch: remove zero-sized include/asm-blackfin/macros.hMarco Roeland
This file accidentally got truncated instead of deleted in commit df30b11. Signed-off-by: Marco Roeland <marco.roeland@xs4all.nl> Cc: Robert P. J. Day <rpjday@mindspring.com> Cc: Jeff Garzik <jeff@garzik.org> Cc: Jesper Juhl <jesper.juhl@gmail.com> Cc: Alex Riesen <raa.lkml@gmail.com> Cc: Robin Getz <robin.getz@analog.com> Acked-by: Bryan Wu <bryan.wu@analog.com>
2007-07-02[POWERPC] Uninline and export virq_to_hw() for the pasemi_mac driverOlof Johansson
Uninline virq_to_hw and export it so modules can use it. The alternative would be to export the irq_map array instead, but it's an infrequently called function, and keeping the array unexported seems considerably cleaner. This is needed so that the pasemi_mac driver can be compiled as a module. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
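The uninlined helper is essentially a table lookup plus an export, along the lines of this sketch. The irq_map internals (entry layout, field name) are assumptions based on the description above, not a quote of the patch:

    /* sketch: out-of-line lookup so modules need not see the irq_map array */
    irq_hw_number_t virq_to_hw(unsigned int virq)
    {
        return irq_map[virq].hwirq;
    }
    EXPORT_SYMBOL_GPL(virq_to_hw);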
2007-07-01PM: introduce set_target method in pm_opsRafael J. Wysocki
Commit 52ade9b3b97fd3bea42842a056fe0786c28d0555 changed the suspend code ordering to execute pm_ops->prepare() after the device model per-device .suspend() calls in order to fix some ACPI-related issues. Unfortunately, it broke the at91 platform which assumed that pm_ops->prepare() would be called before suspending devices. at91 used pm_ops->prepare() to get notified of the target system sleep state, so that it could use this information while suspending devices. However, with the current suspend code ordering pm_ops->prepare() is called too late for this purpose. Thus, at91 needs an additional method in 'struct pm_ops' that will be used for notifying the platform of the target system sleep state. Moreover, in the future such a method will also be needed by ACPI. This patch adds the .set_target() method to 'struct pm_ops' and makes the suspend code call it, if implemented, before executing the device model per-device .suspend() calls. It also modifies the at91 code to use pm_ops->set_target() instead of pm_ops->prepare(). Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: David Brownell <dbrownell@users.sourceforge.net> Cc: Pavel Machek <pavel@ucw.cz> Cc: Johannes Berg <johannes@sipsolutions.net> Cc: Len Brown <lenb@kernel.org> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
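In outline, the change gives the platform a notification hook that runs before the per-device .suspend() calls, roughly as sketched here (field layout abridged and illustrative, not the exact header):

    /* sketch: pm_ops gains a hook that announces the target sleep state */
    struct pm_ops {
        int (*valid)(suspend_state_t state);
        int (*set_target)(suspend_state_t state);   /* new: called early */
        int (*prepare)(suspend_state_t state);
        int (*enter)(suspend_state_t state);
        int (*finish)(suspend_state_t state);
    };

The suspend core calls pm_ops->set_target(state), when implemented, before the device model per-device .suspend() callbacks run, and at91 then implements .set_target() instead of relying on .prepare() for the state information.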
2007-07-01pci.h stubs (for EDD build error)Randy Dunlap
Provide stubs for more PCI bus/slot functions when CONFIG_PCI=n. Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
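The pattern is simply to mirror each affected function with a do-nothing inline in the CONFIG_PCI=n branch, for example (an illustrative stub, not necessarily one of the exact functions added by this patch):

    #ifndef CONFIG_PCI
    /* sketch: keep callers compiling when PCI support is configured out */
    static inline struct pci_dev *pci_get_slot(struct pci_bus *bus,
                                               unsigned int devfn)
    {
        return NULL;
    }
    #endif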
2007-07-01frv: fix fallout from "remove sched.h from mm.h" patchAlexey Dobriyan
/home/rpjday/AMD/k/topics/0_hi/hi1.c:15: error: dereferencing pointer to incomplete type
/home/rpjday/AMD/k/topics/0_hi/hi1.c:16: error: dereferencing pointer to incomplete type

Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-06-29 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6 (Linus Torvalds)
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6:
  [SPARC64]: Add linux/pagemap.h to asm/tlb.h
  [SPARC64]: Need to set state to IDLE during sun4v IRQ enable.
  [SPARC64]: Fix VIRQ enabling.
  [SPARC64]: Add irqs to mdesc_node.
2007-06-29Input: document some of keycodesDmitry Torokhov
Document some of keycodes, based on USB HUT 1.12 and current mapping in HID driver. Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
2007-06-29Input: add a new EV_SW SW_RADIO event, for radio switches on laptopsHenrique de Moraes Holschuh
Many laptops have rf-kill physical switches that are not keys, but slider or rocker switches. Often (like in all ThinkPads with a radio kill slider switch), they have both a slider/rocker switch and a hot key. Trying to kludge a real switch to act like a key is not a very smart thing to do if you can help it, and it gets especially bad when you have both in the same machine. So, we do the right thing and add an input EV_SW event for radio kill switches.

The EV_SW SW_RADIO event is defined with positive logic, i.e. when the switch is active, the radios are to be enabled. When the switch is inactive, the radios are to be disabled.

Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
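A driver for such a slider would declare the switch capability and then report state changes with positive logic, roughly as follows (a sketch; the function names and the surrounding input_dev setup are illustrative):

    /* sketch: declare and report an rf-kill slider as an EV_SW switch */
    static void rfkill_slider_setup(struct input_dev *input)
    {
        set_bit(EV_SW, input->evbit);
        set_bit(SW_RADIO, input->swbit);
    }

    static void rfkill_slider_report(struct input_dev *input, int radios_enabled)
    {
        /* positive logic: switch active => radios enabled */
        input_report_switch(input, SW_RADIO, radios_enabled);
        input_sync(input);
    }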
2007-06-28[SPARC64]: Add linux/pagemap.h to asm/tlb.hAlexey Dobriyan
As seen on sparc64-allnoconfig:

  CC      arch/sparc64/mm/tlb.o
In file included from arch/sparc64/mm/tlb.c:19:
include/asm/tlb.h: In function 'tlb_flush_mmu':
include/asm/tlb.h:60: warning: implicit declaration of function 'release_pages'
include/asm/tlb.h: In function 'tlb_remove_page':
include/asm/tlb.h:92: warning: implicit declaration of function 'page_cache_release'

Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-06-28Introduce fixed sys_sync_file_range2() syscall, implement on PowerPC and ARMDavid Woodhouse
Not all the world is an i386. Many architectures need 64-bit arguments to be aligned in suitable pairs of registers, and the original sys_sync_file_range(int, loff_t, loff_t, int) was therefore wasting an argument register for padding after the first integer. Since we don't normally have more than 6 arguments for system calls, that left no room for the final argument on some architectures. Fix this by introducing sys_sync_file_range2(int, int, loff_t, loff_t) which all fits nicely. In fact, ARM already had that, but called it sys_arm_sync_file_range. Move it to fs/sync.c and rename it, then implement the needed compatibility routine. And stop the missing syscall check from bitching about the absence of sys_sync_file_range() if we've implemented sys_sync_file_range2() instead. Tested on PPC32 and with 32-bit and 64-bit userspace on PPC64. Signed-off-by: David Woodhouse <dwmw2@infradead.org> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
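Side by side, the two prototypes described above show why the reordering helps on 32-bit ABIs that pass 64-bit values in aligned register pairs:

    /* original: the loff_t pair forces a wasted padding register after 'fd' */
    asmlinkage long sys_sync_file_range(int fd, loff_t offset, loff_t nbytes,
                                        unsigned int flags);

    /* reordered: flags moves forward, so both 64-bit values stay pair-aligned */
    asmlinkage long sys_sync_file_range2(int fd, unsigned int flags,
                                         loff_t offset, loff_t nbytes);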
2007-06-28eventfd: clean compile when CONFIG_EVENTFD=nRandy Dunlap
Fix gcc warning and add parameter checking when CONFIG_EVENTFD=n:

  fs/aio.c: In function 'aio_complete':
  fs/aio.c:955: warning: statement with no effect

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Davide Libenzi <davidel@xmailserver.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-06-27 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds)
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
  [IA64] Make SN2 PCI code use ioremap rather than manually mangle the address
  [IA64] Force error to surface in nofault code
  [IA64] change sh_change_coherence oemcall to use nolock
  [IA64] remove duplicate header include line
  [IA64] Correct unwind validation code
  [IA64] is_power_of_2-ia64/mm/hugetlbpage.c
2007-06-27libata: kill ATA_HORKAGE_DMA_RW_ONLYTejun Heo
ATA_HORKAGE_DMA_RW_ONLY for TORiSAN is verified to be subset of using DMA for ATAPI commands which aren't aligned to 16 bytes. As libata now doesn't use DMA for unaligned ATAPI commands, the horkage is redundant. Kill it. Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Jeff Garzik <jeff@garzik.org>
2007-06-27libata: kill the infamous abnormal status messageTejun Heo
The infamous abnormal status message triggers on not-so-abnormal cases including an empty port, and even when it is triggered on actual errors the info it provides is redundant and out of context - higher level functions will print the info in better shape later anyway. Also, by being triggered all the time, it leads people to think that the abnormality is somehow related to all ATA and system problems they're experiencing and gives owners of healthy systems unfounded doubts about the integrity of the universe. Make it a DPRINTK and save the universe.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
2007-06-26Merge master.kernel.org:/home/rmk/linux-2.6-armLinus Torvalds
* master.kernel.org:/home/rmk/linux-2.6-arm:
  [ARM] 4449/1: more entries in arch/arm/boot/.gitignore
  [ARM] 4452/1: Force the literal pool dump before reloc_end
  [ARM] Update show_regs/oops register format
  [ARM] Add support for pause_on_oops and display preempt/smp options
2007-06-26Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linusLinus Torvalds
* 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus:
  [MIPS] Count timer interrupts correctly.
  [MIPS] SMTC and non-SMTC kernel and modules are incompatible
  [MIPS] EMMA2RH: Disable GEN_RTC, it can't possibly work.
  [MIPS] Remove a duplicated local variable in test_and_clear_bit()
  [MIPS] use compat_siginfo in rt_sigframe_n32
  [MIPS] 20K: Handle WAIT related bugs according to errata information
  [MIPS] AP/SP requires shadow registers, auto enable support.
  [MIPS] Fix pb1500 reg B access
  [MIPS] Alchemy: Fix wrong cast
  [MIPS] remove "support for" from system type entry
  [MIPS] add io_map_base to pci_controller on Cobalt
  [MIPS] __ucmpdi2 arguments are unsigned long long.
2007-06-26[IA64] change sh_change_coherence oemcall to use nolockDean Nelson
Change sn_change_coherence's ia64_sal_oemcall to the nolock variety since PROM does the locking for this function internally. Signed-off-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-06-26[MIPS] SMTC and non-SMTC kernel and modules are incompatibleRalf Baechle
So don't allow mixing. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-06-26[MIPS] Remove a duplicated local variable in test_and_clear_bit()Atsushi Nemoto
Fix a sparse warning caused by 2c921d07f8c641e691b0dfd80a5cfe14c60ec489:

  include2/asm/bitops.h:313:23: warning: symbol 'res' shadows an earlier one
  include2/asm/bitops.h:309:16: originally declared here

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-06-26[MIPS] use compat_siginfo in rt_sigframe_n32Pavel Kiryukhin
Signed-off-by: Pavel Kiryukhin <vksavl@gmail.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-06-26[SPARC64]: Add irqs to mdesc_node.David S. Miller
Will be used to store translated LDC rx-ino and tx-ino. Signed-off-by: David S. Miller <davem@davemloft.net>
2007-06-25fix nmi_watchdog=2 bootup hangBjörn Steinbrink
wrmsrl() is broken, dropping the upper 32bits of the value to be written. This broke the NMI watchdog on AMD hardware. (and it probably broke other code too.) Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
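The shape of the fix is to split the 64-bit value into low and high halves instead of truncating it, along the lines of this sketch (illustrative, not the literal patch):

    /* write a full 64-bit MSR value: low 32 bits in EAX, high 32 bits in EDX */
    #define wrmsrl(msr, val) \
        wrmsr((msr), (u32)((u64)(val)), (u32)(((u64)(val)) >> 32))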
2007-06-25 Blackfin arch: Add proper -mcpu option according to the cpu and silicon revision configuration (Jie Zhang)
Add silicon revision "any" and "none". Add proper -mcpu option according to the cpu and silicon revision configuration. Need update to use latest Blackfin cross compile toolchain.

Signed-off-by: Jie Zhang <jie.zhang@analog.com>
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-06-24 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (Linus Torvalds)
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
  [NET]: Make skb_seq_read unmap the last fragment
  [NET]: Re-enable irqs before pushing pending DMA requests
  [TCP] tcp_read_sock: Allow recv_actor() return return negative error value.
  [PPP]: Fix osize too small errors when decoding mppe.
  [PPP]: Revert 606f585e363527da9feaed79465132c0c661fd9e
  [TIPC]: Fix infinite loop in netlink handler
  [SKBUFF]: Fix incorrect config #ifdef around skb_copy_secmark
  [IPV4]: include sysctl.h from inetdevice.h
  [IPV6] NDISC: Fix thinko to control Router Preference support.
  [NETFILTER]: nfctnetlink: Don't allow to change helper
  [NETFILTER]: nf_conntrack_sip: add missing message types containing RTP info
2007-06-24slab allocators: MAX_ORDER one off fixChristoph Lameter
MAX_ORDER is the first order that is not possible. Use MAX_ORDER - 1 to calculate the largest possible object size in slab.h.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
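Concretely, since MAX_ORDER is the first order that does not exist, the largest block the page allocator can hand out is PAGE_SIZE << (MAX_ORDER - 1); with 4KB pages and MAX_ORDER = 11 that is 4MB, not 8MB. The macro name below is illustrative only, not the actual slab.h symbol:

    /* upper bound on a single slab object, expressed with MAX_ORDER - 1 */
    #define SLAB_MAX_BLOCK  (PAGE_SIZE << (MAX_ORDER - 1))  /* e.g. 4KB << 10 = 4MB */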
2007-06-24document nlink functionDave Hansen
These should have been documented from the beginning. Fix it. Signed-off-by: Dave Hansen <haveblue@us.ibm.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-06-24uml: add asm/paravirt.hJeff Dike
Add asm-um/paravirt.h so that i386 headers that get pulled into UML don't cause build failures when they want asm/paravirt.h. Signed-off-by: Jeff Dike <jdike@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>