path: root/arch/powerpc/platforms/cell
Age  Commit message  Author
2008-07-30  cpufreq acpi: only call _PPC after cpufreq ACPI init funcs got called already  (Thomas Renninger)
Ingo Molnar provided a fix to not call _PPC at processor driver initialization time in "[PATCH] ACPI: fix cpufreq regression" (git commit e4233dec749a3519069d9390561b5636a75c7579). But it can still happen that _PPC is called at processor driver initialization time. This patch should make sure that this is not possible anymore. Signed-off-by: Thomas Renninger <trenn@suse.de> Cc: Andi Kleen <andi@firstfloor.org> Cc: Len Brown <lenb@kernel.org> Cc: Dave Jones <davej@codemonkey.org.uk> Cc: Ingo Molnar <mingo@elte.hu> Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26  SL*B: drop kmem cache argument from constructor  (Alexey Dobriyan)
Kmem cache passed to constructor is only needed for constructors that are themselves multiplexers. Nobody uses this "feature", nor does anybody use the passed kmem cache in a non-trivial way, so pass only a pointer to the object. Non-trivial places are:
  arch/powerpc/mm/init_64.c
  arch/powerpc/mm/hugetlbpage.c
This is flag day, yes. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Acked-by: Pekka Enberg <penberg@cs.helsinki.fi> Acked-by: Christoph Lameter <cl@linux-foundation.org> Cc: Jon Tollefson <kniht@linux.vnet.ibm.com> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Matt Mackall <mpm@selenic.com> [akpm@linux-foundation.org: fix arch/powerpc/mm/hugetlbpage.c] [akpm@linux-foundation.org: fix mm/slab.c] [akpm@linux-foundation.org: fix ubifs] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
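For illustration, a minimal sketch of the prototype change for a hypothetical cache (the names below are made up, not call sites touched by this patch):

    /* needs <linux/slab.h>
     * old prototype (before this change):
     *     static void my_ctor(struct kmem_cache *cachep, void *obj);
     * new prototype: only the object pointer is passed
     */
    struct my_object { int val; };

    static void my_ctor(void *obj)
    {
            struct my_object *o = obj;

            o->val = 0;
    }

    static struct kmem_cache *my_cache;

    static int __init my_cache_init(void)
    {
            my_cache = kmem_cache_create("my_cache", sizeof(struct my_object),
                                         0, SLAB_HWCACHE_ALIGN, my_ctor);
            return my_cache ? 0 : -ENOMEM;
    }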
2008-07-26  dma-mapping: add the device argument to dma_mapping_error()  (FUJITA Tomonori)
Add per-device dma_mapping_ops support for CONFIG_X86_64 as POWER architecture does: This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423). I think that per-device dma_mapping_ops support would be also helpful for KVM people to support PCI passthrough but Andi thinks that this makes it difficult to support the PCI passthrough (see the above thread). So I CC'ed this to KVM camp. Comments are appreciated. A pointer to dma_mapping_ops is added to struct dev_archdata. If the pointer is non-NULL, DMA operations in asm/dma-mapping.h use it. If it's NULL, the system-wide dma_ops pointer is used as before. If it's useful for KVM people, I plan to implement a mechanism to register a hook called when a new pci (or dma capable) device is created (it works with hot plugging). It enables IOMMUs to set up an appropriate dma_mapping_ops per device. The major obstacle is that dma_mapping_error doesn't take a pointer to the device unlike other DMA operations. So x86 can't have dma_mapping_ops per device. Note all the POWER IOMMUs use the same dma_mapping_error function so this is not a problem for POWER but x86 IOMMUs use different dma_mapping_error functions. The first patch adds the device argument to dma_mapping_error. The patch is trivial but large since it touches lots of drivers and dma-mapping.h in all the architectures. This patch: dma_mapping_error() doesn't take a pointer to the device unlike other DMA operations. So we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device but all the POWER IOMMUs use the same dma_mapping_error function; x86 IOMMUs use different ones, hence the device argument. [akpm@linux-foundation.org: fix sge] [akpm@linux-foundation.org: fix svc_rdma] [akpm@linux-foundation.org: build fix] [akpm@linux-foundation.org: fix bnx2x] [akpm@linux-foundation.org: fix s2io] [akpm@linux-foundation.org: fix pasemi_mac] [akpm@linux-foundation.org: fix sdhci] [akpm@linux-foundation.org: build fix] [akpm@linux-foundation.org: fix sparc] [akpm@linux-foundation.org: fix ibmvscsi] Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: Avi Kivity <avi@qumranet.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
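As a sketch of the new calling convention (a generic driver fragment, not code from this patch; my_map_buffer and its arguments are illustrative):

    #include <linux/dma-mapping.h>

    static int my_map_buffer(struct device *dev, void *buf, size_t len,
                             dma_addr_t *handle)
    {
            *handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            /* dma_mapping_error() now takes the device, so per-device
             * dma_mapping_ops can supply their own error check */
            if (dma_mapping_error(dev, *handle))
                    return -ENOMEM;
            return 0;
    }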
2008-07-25  powerpc/pseries: iommu enablement for CMO  (Robert Jennings)
To support Cooperative Memory Overcommitment (CMO), we need to check for failure from some of the tce hcalls. These changes for the pseries platform affect the powerpc architecture; patches for the other affected platforms are included in this patch.
pSeries platform IOMMU code changes:
  * platform TCE functions must handle H_NOT_ENOUGH_RESOURCES errors and return an error.
Architecture IOMMU code changes:
  * Calls to ppc_md.tce_build need to check return values and return DMA_MAPPING_ERROR for transient errors.
Architecture changes:
  * struct machdep_calls for tce_build*_pSeriesLP functions need to change to indicate failure.
  * all other platforms will need updates to iommu functions to match the new calling semantics; they will return 0 on success.
The other platforms' default configs have been built, but no further testing was performed. Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25  powerpc/cell: Fixed IOMMU mapping uses weak ordering for a pcie endpoint  (Mark Nelson)
At the moment the fixed mapping is by default strongly ordered (the iommu_fixed=weak boot option must be used to make the fixed mapping weakly ordered). If we're on a setup where the southbridge is being used in endpoint mode (triblade and CAB boards) the default should be a weakly ordered fixed mapping. This adds a check so that if a node of type pcie-endpoint can be found in the device tree the fixed mapping is set to be weak by default (but can be overridden using iommu_fixed=strong). Signed-off-by: Mark Nelson <markn@au1.ibm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25  Merge commit 'jk/jk-merge'  (Benjamin Herrenschmidt)
2008-07-24  spufs: use new vm_ops->access to allow local state access from gdb  (Benjamin Herrenschmidt)
This uses the new vm_ops->access to allow gdb to access the SPU local store. We currently prevent access to problem state registers; this can be done later if really needed, but it's safer not to. [akpm@linux-foundation.org: fix typo] Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Rik van Riel <riel@redhat.com> Cc: Dave Airlie <airlied@linux.ie> Cc: Hugh Dickins <hugh@veritas.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24  powerpc/spufs: better placement of spu affinity reference context  (Andre Detsch)
This patch adjusts the placement of a reference context from a spu affinity chain. The reference context can now be placed only on nodes that have enough spus not intended to be used by another gang (already running on the node). Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-24  powerpc/spufs: fix aff_mutex and cbe_spu_info[n].list_mutex deadlock  (Andre Detsch)
Currently, it is possible to lock aff_mutex and cbe_spu_info[n].list_mutex in different orders, allowing a deadlock to occur. With this change, aff_mutex is not taken within a list_mutex critical section anymore. Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-23  powerpc/spufs: correct kcalloc usage  (Milton Miller)
kcalloc is supposed to be called with the count as its first argument and the element size as the second. Signed-off-by: Milton Miller <miltonm@bga.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
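For reference, the intended argument order, shown as a generic sketch rather than the spufs call site itself:

    /* kcalloc(count, element_size, flags): the count comes first */
    static int *alloc_counters(size_t nr)
    {
            return kcalloc(nr, sizeof(int), GFP_KERNEL);
    }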
2008-07-22  Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc  (Linus Torvalds)
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (49 commits)
  powerpc: Fix build bug with binutils < 2.18 and GCC < 4.2
  powerpc/eeh: Don't panic when EEH_MAX_FAILS is exceeded
  fbdev: Teaches offb about palette on radeon r5xx/r6xx
  powerpc/cell/edac: Log a syndrome code in case of correctable error
  powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code
  powerpc: Indicate which oprofile counters to use while in compat mode
  powerpc/boot: Change spaces to tabs
  powerpc: Remove duplicate 6xx option in Kconfig
  powerpc: Use PPC_LONG and PPC_LONG_ALIGN in lib/string.S
  powerpc: Use PPC_LONG_ALIGN in uaccess.h
  powerpc: Add a #define for aligning to a long-sized boundary
  powerpc: Fix OF parsing of 64 bits PCI addresses
  powerpc: Use WARN_ON(1) instead of __WARN()
  powerpc: Fix support for latencytop
  powerpc/ps3: Update ps3_defconfig
  powerpc/ps3: Add a sub-match id to ps3_system_bus
  powerpc: Add a 6xx defconfig
  powerpc/dma: Use the struct dma_attrs in iommu code
  powerpc/cell: Add support for power button of future IBM cell blades
  powerpc/cell: Cleanup sysreset_hack for IBM cell blades
  ...
2008-07-21  sysdev: Pass the attribute to the low level sysdev show/store function  (Andi Kleen)
This allows attributes to be generated dynamically and show/store functions to be shared between attributes. Right now most attributes are generated by special macros and lots of duplicated code. With the attribute passed, it's instead possible to attach some data to the attribute and then use that in shared low level functions to do different things. I need this for the dynamically generated bank attributes in the x86 machine check code, but it'll allow some further cleanups. I converted all users in tree to the new show/store prototype. It's a single huge patch to avoid unbisectable sections.
Runtime tested: x86-32, x86-64
Compiled only: ia64, powerpc
Not compile tested/only grep converted: sh, arm, avr32
Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
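A sketch of the new prototypes, assuming the post-change sysdev interface; the "foo" attribute and its handlers are illustrative only:

    static ssize_t foo_show(struct sys_device *dev,
                            struct sysdev_attribute *attr, char *buf)
    {
            /* the attribute pointer lets one handler serve many attributes */
            return sprintf(buf, "%s\n", attr->attr.name);
    }

    static ssize_t foo_store(struct sys_device *dev,
                             struct sysdev_attribute *attr,
                             const char *buf, size_t count)
    {
            return count;
    }

    static SYSDEV_ATTR(foo, 0644, foo_show, foo_store);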
2008-07-22  powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code  (Mark Nelson)
Introduce a new dma attribute DMA_ATTR_WEAK_ORDERING to use weak ordering on DMA mappings in the Cell processor. Add the code to the Cell's IOMMU implementation to use this attribute. Dynamic mappings can be weakly or strongly ordered on an individual basis but the fixed mapping has to be either completely strong or completely weak. This is currently decided by a kernel boot option (pass iommu_fixed=weak for a weakly ordered fixed linear mapping; strongly ordered is the default). Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
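A minimal sketch of how a driver could request weak ordering on a single mapping through the dma_attrs interface (a generic example, not taken from the Cell IOMMU code; the helper name is made up):

    #include <linux/dma-attrs.h>
    #include <linux/dma-mapping.h>

    static dma_addr_t map_weakly_ordered(struct device *dev, void *buf,
                                         size_t len)
    {
            DEFINE_DMA_ATTRS(attrs);

            /* ask the IOMMU for a weakly ordered mapping of this buffer */
            dma_set_attr(DMA_ATTR_WEAK_ORDERING, &attrs);
            return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);
    }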
2008-07-22  powerpc/dma: Use the struct dma_attrs in iommu code  (Mark Nelson)
Update iommu_alloc() to take the struct dma_attrs and pass them on to tce_build(). This change propagates down to the tce_build functions of all the platforms. Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22  powerpc/cell: Add support for power button of future IBM cell blades  (Christian Krafft)
This patch adds support for the power button on future IBM cell blades. It actually doesn't shut down the machine. Instead it exposes an input device /dev/input/event0 to userspace which sends KEY_POWER if the power button has been pressed. haldaemon actually recognizes the button, so a platform-independent acpid replacement should handle it correctly. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22  powerpc/cell: Cleanup sysreset_hack for IBM cell blades  (Christian Krafft)
This patch adds a config option for the sysreset_hack used for IBM Cell blades. The code is moved from pervasive.c into ras.c and gets its own init method. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22  powerpc/cell/cpufreq: Add spu aware cpufreq governor  (Christian Krafft)
This patch adds a cpufreq governor that takes the number of running spus into account. It's very similar to the ondemand governor, but not as complex. Instead of hacking spu load into the ondemand governor it might be easier to have cpufreq accept multiple governors per cpu in the future. Don't know if this is the right way, but it would keep the governors simple. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Dave Jones <davej@redhat.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
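A rough sketch of the governor plumbing this involves; the callback body, the "spudemand" name and spu_gov_* identifiers here are placeholders, not the actual implementation:

    static int spu_gov_govern(struct cpufreq_policy *policy, unsigned int event)
    {
            switch (event) {
            case CPUFREQ_GOV_START:
            case CPUFREQ_GOV_LIMITS:
                    /* pick a target frequency from the number of running spus */
                    break;
            case CPUFREQ_GOV_STOP:
                    break;
            }
            return 0;
    }

    static struct cpufreq_governor spu_governor = {
            .name     = "spudemand",
            .governor = spu_gov_govern,
            .owner    = THIS_MODULE,
    };

    static int __init spu_gov_init(void)
    {
            return cpufreq_register_governor(&spu_governor);
    }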
2008-07-16  Merge commit 'origin/master'  (Benjamin Herrenschmidt)
Manual merge of:
  arch/powerpc/Kconfig
  arch/powerpc/kernel/stacktrace.c
  arch/powerpc/mm/slice.c
  arch/ppc/kernel/smp.c
2008-07-15  powerpc: Remove unnecessary condition when sanity-checking WIMG bits  (Dave Kleikamp)
It is okay for both _PAGE_GUARDED and _PAGE_COHERENT (G and M) to be set in the same pte. In fact, even if that were not the case, there doesn't seem to be any place where G is set without also setting I (_PAGE_NO_CACHE), so the test for I is sufficient as a condition to clear _PAGE_COHERENT when filling the hash table. Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09  powerpc/cell: cell_dma_dev_setup_iommu() return the iommu table  (Mark Nelson)
Make cell_dma_dev_setup_iommu() return a pointer to the struct iommu_table (or NULL if no table can be found) rather than putting this pointer into dev->archdata.dma_data (let the caller do that), and rename this function to cell_get_iommu_table() to reflect this change. This will allow us to get the iommu table for a device that doesn't have the table in the archdata. Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
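The caller-side pattern this describes, as a hedged sketch; the function names come from the commit message, but the surrounding body is illustrative:

    static void cell_dma_dev_setup(struct device *dev)
    {
            /* the helper now just returns the iommu_table (or NULL);
             * the caller decides where to store it */
            dev->archdata.dma_data = cell_get_iommu_table(dev);
    }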
2008-07-09  powerpc/spufs: add atomic busy_spus counter to struct cbe_spu_info  (Maxim Shchetynin)
As the nr_active counter also includes spus waiting for syscalls to return, we need a separate counter that only counts spus that are currently running on the spu side. This counter shall be used by a cpufreq governor that targets a frequency dependent on the number of running spus. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09  powerpc/spufs: only add ".ctx" file with "debug" mount option  (Jeremy Kerr)
Currently, the .ctx debug file in spu context directories is always present. We'd prefer to prevent users from relying on this file, so add a "debug" mount option to spufs. The .ctx file will only be added to the context directories when this option is present. Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09  powerpc/spufs: add sizes for context files  (Jeremy Kerr)
Populate the size member of a few context files. Leave out files that have different semantics with read vs mmap, or contain a variable-length hex string. Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09  powerpc/spufs: allow spufs files to specify sizes  (Jeremy Kerr)
Currently, spufs never specifies the i_size for the files in context directories, so stat() always reports 0-byte files. This change allows the spufs_dir_(nosched_)contents arrays to specify a file size. This allows stat() to report correct file sizes, and makes SEEK_END work. Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
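A hedged sketch of what a sized entry might look like; the member layout of spufs_tree_descr and the example size are assumptions based on the description above, not lines from the patch:

    static const struct spufs_tree_descr spufs_dir_contents[] = {
            /* name,   fops,             mode,  size (new field) */
            { "regs",  &spufs_regs_fops, 0666,  sizeof(struct spu_reg128[128]) },
            /* ... further entries ... */
            {},
    };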
2008-07-09  powerpc/spufs: avoid magic numbers for mapping sizes  (Jeremy Kerr)
Use a set of #defines for the size of context mappings, instead of magic numbers. Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09  powerpc/spufs: don't extend time slice if context is not in spu_run  (Luke Browning)
An spu context shouldn't get an extra tick if the time slice code couldn't find something else to run. This means contexts that are not within spu_run (ie, SPU_SCHED_SPU_RUN is cleared) will not receive extra ticks while we have no other contexts waiting. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09  powerpc/spufs: provide context debug file  (Luke Browning)
Add a .ctx file to spufs that shows spu context information that is used in scheduling. This info can be used for debugging spufs scheduler issues, and to isolate between application and spufs problems, as it shows a lot of state such as priorities and dispatch counts. This file contains internal spufs state and is subject to change at any time, and therefore no applications should depend on it. The file is intended for the use of spufs kernel developers. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-30  powerpc/cell: Disable ptcal in case of crash kdump  (Arnd Bergmann)
We need to disable ptcal before starting a new kernel after a crash, in order to avoid overwriting data in the kdump kernel. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30  spufs: Convert nopfn to fault  (Nick Piggin)
Signed-off-by: Nick Piggin <npiggin@suse.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30  Merge branch 'linux-2.6'  (Paul Mackerras)
2008-06-26  powerpc: convert to generic helpers for IPI function calls  (Jens Axboe)
This converts ppc to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single(). ppc loses the timeout functionality of smp_call_function_mask() with this change, as the generic code does not provide that. Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
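For orientation, a generic usage sketch of the single-CPU helper; this uses the modern four-argument form, and the exact signature at the time of this conversion may have carried an extra retry/nonatomic argument:

    static void my_ipi_func(void *info)
    {
            /* runs on the target cpu */
    }

    static void kick_cpu(int cpu)
    {
            /* run my_ipi_func on the given cpu and wait for completion */
            smp_call_function_single(cpu, my_ipi_func, NULL, 1);
    }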
2008-06-16  powerpc/spufs: fix missed stop-and-signal event  (Luke Browning)
There is a delay in the transition to the stopped state for class 2 interrupts. In some cases, the controlling thread detects the state of the spu as running, and goes back to sleep resulting in a hung application as the event is missed. This change detects the stop condition and re-generates the wakeup event after a context save. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16  powerpc/spufs: synchronize interaction between spu exception handling and time slicing  (Luke Browning)
Time slicing can occur at the same time as spu exception handling, resulting in the wakeup of the wrong thread. This change uses the spu's register_lock to enforce synchronization between bind/unbind and spu exception handling so that they are mutually exclusive. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16  powerpc/spufs: remove class_0_dsisr from spu exception handling  (Luke Browning)
According to the CBEA, the SPU dsisr is not updated for class 0 exceptions. spu_stopped() is testing the dsisr that was passed to it from the class 0 exception handler, so we return a false positive here. This patch cleans up the interrupt handler and erroneous tests in spu_stopped(). It also removes the fields from the csa since they are not needed to process class 0 events. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16  powerpc/spufs: wait for stable spu status in spu_stopped()  (Luke Browning)
If the spu is stopping (ie, the SPU_STATUS_RUNNING bit is still set), re-read the register to get the final stopped value. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-09  powerpc: Fix irq_alloc_host() reference counting and callers  (Michael Ellerman)
When I changed irq_alloc_host() to take an of_node (52964f87c64e6c6ea671b5bf3030fb1494090a48: "Add an optional device_node pointer to the irq_host"), I botched the reference counting semantics. Stephen pointed out that it's irq_alloc_host()'s business if it needs to take an additional reference to the device_node; the caller shouldn't need to care. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-09  powerpc: Rework Axon MSI setup so we can avoid freeing the irq_host  (Michael Ellerman)
If we do the call to irq_of_parse_and_map() first, then we don't need to worry about freeing the irq_host. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-09  Merge branch 'merge'  (Paul Mackerras)
Conflicts:
  arch/powerpc/sysdev/fsl_soc.c
2008-06-04  celleb_scc_pciex endianness misannotations  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-05-23  [POWERPC] Add debugging trigger to Axon MSI code  (Michael Ellerman)
This adds some debugging code to the Axon MSI driver. It creates a file per MSIC in /sys/kernel/debug/powerpc, which allows the user to trigger a fake MSI interrupt by writing to the file. This can be used to test some of the MSI generation path. In particular, that the MSIC recognises a write to the MSI address, generates an interrupt and writes the MSI packet into the ring buffer. All the code is inside #ifdef DEBUG so it causes no harm unless it's enabled. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
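A rough sketch of how such a trigger file could be wired up with debugfs; the msic_dbg_* names are illustrative, and powerpc_debugfs_root is assumed to be the arch's /sys/kernel/debug/powerpc directory handle:

    static ssize_t msic_dbg_write(struct file *file, const char __user *buf,
                                  size_t count, loff_t *ppos)
    {
            /* write a fake MSI packet to the MSIC address here */
            return count;
    }

    static const struct file_operations msic_dbg_fops = {
            .write = msic_dbg_write,
    };

    static void msic_setup_debugfs(void *msic_data)
    {
            debugfs_create_file("msic_dbg", 0200, powerpc_debugfs_root,
                                msic_data, &msic_dbg_fops);
    }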
2008-05-15  [POWERPC] cell: Fix section mismatches in io-workarounds code  (Ishizaki Kou)
Fix the following warnings:
WARNING: arch/powerpc/platforms/cell/built-in.o(.devinit.text+0x9c): Section mismatch in reference from the function .cell_setup_phb() to the function .init.text:.iowa_register_bus()
WARNING: arch/powerpc/platforms/cell/built-in.o(.devinit.text+0xa4): Section mismatch in reference from the function .cell_setup_phb() to the function .init.text:.io_workaround_init()
Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-15  [POWERPC] spufs: Fix compile error  (FUJITA Tomonori)
With CONFIG_VIRT_CPU_ACCOUNTING disabled, I got the following error:
linux-2.6/arch/powerpc/platforms/cell/spufs/file.c: In function 'spu_switch_log_notify':
linux-2.6/arch/powerpc/platforms/cell/spufs/file.c:2542: error: implicit declaration of function 'get_tb'
make[4]: *** [arch/powerpc/platforms/cell/spufs/file.o] Error 1
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-15  [POWERPC] spufs: Fix pointer reference in find_victim  (Luke Browning)
If victim (not ctx) is in spu_run, add victim to rq. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-09  Merge branch 'for-2.6.26' of master.kernel.org:/pub/scm/linux/kernel/git/jwboyer/powerpc-4xx into merge  (Paul Mackerras)
2008-05-08  [POWERPC] spufs: lockdep annotations for spufs_dir_close  (Christoph Hellwig)
We need to acquire the parent i_mutex with I_MUTEX_PARENT to keep lockdep happy. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
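The annotation pattern being referred to, as a generic sketch (the helper name and surrounding body are made up):

    static void with_parent_locked(struct dentry *dir)
    {
            struct inode *parent = dir->d_inode;

            /* tell lockdep this is the parent directory's i_mutex */
            mutex_lock_nested(&parent->i_mutex, I_MUTEX_PARENT);
            /* ... drop or re-add the child entry ... */
            mutex_unlock(&parent->i_mutex);
    }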
2008-05-08  [POWERPC] spufs: don't requeue victim context in find_victim if it's not in spu_run  (Christoph Hellwig)
We should not requeue the victim context in find_victim if the owner is not in spu_run. First, it's not needed, because leaving the context on the spu is an optimization; second, it's harmful, because it means the owner could re-enter spu_run when the context is on the runqueue and trip the BUG_ON in __spu_update_sched_info. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-05-06  [POWERPC] spufs: spu_create should send inotify IN_CREATE event  (Christoph Hellwig)
Creating a spufs context or gang using spu_create should send an inotify event so that things like performance monitors have an easy way to find out about newly created contexts. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
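The usual mechanism for this, shown as a generic sketch rather than the exact spufs hunk; since contexts and gangs are directories, the mkdir variant applies:

    #include <linux/fsnotify.h>

    static void notify_new_context(struct inode *dir, struct dentry *dentry)
    {
            /* generates an IN_CREATE|IN_ISDIR event for inotify watchers
             * on the parent; fsnotify_create() is the non-directory variant */
            fsnotify_mkdir(dir, dentry);
    }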
2008-05-05  [POWERPC] spufs: handle faults while the context switch pending flag is set  (Luke Browning)
Currently, page fault handlers don't issue a mfc restart if the context switch pending flag is set, which can leave us with a hanging DMA after a context restore. This patch introduces a fault pending flag that is set by the fault handler and read by the context switch code, so that the latter can add the restart bit at the right spot, after it has successfully saved the state of the mfc control register. Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-05-05  [POWERPC] spufs: fix concurrent delivery of class 0 & 1 exceptions  (Luke Browning)
SPU class 0 & 1 exceptions may occur in parallel, so we may end up overwriting csa.dsisr. This change adds dedicated fields for each class to the spu and the spu context so that fault data is not overwritten. Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-05-05  [POWERPC] spufs: try to route SPU interrupts to local node  (Luke Browning)
Currently, we re-route SPU interrupts to the current cpu, which may be on a remote node. In the case of time slicing, all spu interrupts will end up routed to the same cpu, where the spusched_tick occurs. This change routes mfc interrupts to the cpu where the controlling thread last ran, provided that cpu is on the same node as the spu (otherwise don't reroute interrupts). This should improve performance and provide a more predictable environment for processing spu exceptions. In the past we have seen concurrent delivery of spu exceptions to two cpus. This eliminates that concern. Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org>