path: root/arch/powerpc/kernel/dma.c
2009-09-24  powerpc: Rename get_dma_direct_offset get_dma_offset  (Becky Bruce)

The former is no longer really accurate with the swiotlb case now a possibility. I also move it into dma-mapping.h - it no longer needs to be in dma.c, and there are about to be some more accessors that should all end up in the same place. A comment is added to indicate that this function is not used in configs where there is no simple dma offset, such as the iommu case.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
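A minimal sketch of what the relocated accessor could look like, assuming the per-device offset is stashed in dev->archdata (the field name and the PCI_DRAM_OFFSET fallback here are assumptions, not the actual implementation):

/*
 * Sketch only: returns the constant offset added to a physical address
 * for direct DMA. Not meaningful in configs without a simple dma offset
 * (e.g. when an iommu is in use).
 */
static inline unsigned long get_dma_offset(struct device *dev)
{
	if (dev)
		return (unsigned long)dev->archdata.dma_data;
	return PCI_DRAM_OFFSET;
}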
2009-08-28  powerpc: Add CONFIG_DMA_API_DEBUG support  (FUJITA Tomonori)

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-28  powerpc: use dma_map_ops struct  (FUJITA Tomonori)

This converts powerpc to use the generic dma_map_ops struct (in include/linux/dma-mapping.h) instead of the powerpc homegrown dma_mapping_ops.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
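As an illustration of the shape of the change, a hedged sketch of an ops table declared against the generic struct (the exact set of callbacks wired up here is an assumption):

#include <linux/dma-mapping.h>

/* Sketch only: field names follow the generic struct dma_map_ops. */
struct dma_map_ops dma_direct_ops = {
	.alloc_coherent	= dma_direct_alloc_coherent,
	.free_coherent	= dma_direct_free_coherent,
	.map_sg		= dma_direct_map_sg,
	.unmap_sg	= dma_direct_unmap_sg,
	.dma_supported	= dma_direct_dma_supported,
	.map_page	= dma_direct_map_page,
	.unmap_page	= dma_direct_unmap_page,
};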
2009-08-10  powerpc/dma: pci_set_dma_mask() shouldn't fail if mask fits in RAM  (Benjamin Herrenschmidt)

On an iMac G5, the b43 driver is failing to initialise because trying to set the dma mask to 30-bit fails. Even though there's only 512MiB of RAM in the machine anyway:

https://bugzilla.redhat.com/show_bug.cgi?id=514787

We should probably let it succeed if the available RAM in the system doesn't exceed the requested limit.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
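The underlying idea, sketched under the assumption that lmb_end_of_DRAM() reports the first address past usable RAM (this is an illustration of the policy, not the actual patch):

/* Accept a DMA mask as long as every byte of RAM is addressable under it. */
static int mask_covers_ram(u64 mask)
{
	return mask >= (lmb_end_of_DRAM() - 1);
}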
2009-06-09  powerpc: Add support for swiotlb on 32-bit  (Becky Bruce)

This patch includes the basic infrastructure to use swiotlb bounce buffering on 32-bit powerpc. It is not yet enabled on any platforms.

Probably the most interesting bit is the addition of addr_needs_map to dma_ops - we need this as a dma_op because the decision of whether or not an addr can be mapped by a device is device-specific.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
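A hedged sketch of what such a hook could check; the body below is an illustrative assumption (a real implementation may consult device- or platform-specific limits):

/* Return non-zero if the buffer cannot be reached by the device and
 * therefore has to be bounced through the swiotlb.
 */
static int addr_needs_map(struct device *dev, dma_addr_t addr, size_t size)
{
	u64 mask = dev->dma_mask ? *dev->dma_mask : 0;

	return (addr + size - 1) > mask;
}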
2009-05-27  powerpc: Fix up dma_alloc_coherent() on platforms without cache coherency.  (Benjamin Herrenschmidt)

The implementation we just revived has issues, such as using a Kconfig-defined virtual address area in kernel space that nothing actually carves out (and thus will overlap whatever is there), or having some dependencies on being self contained in a single PTE page which adds unnecessary constraints on the kernel virtual address space.

This fixes it by using more classic PTE accessors and automatically locating the area for consistent memory, carving an appropriate hole in the kernel virtual address space, leaving only the size of that area as a Kconfig option. It also brings some dma-mask related fixes from the ARM implementation which was almost identical initially but grew its own fixes.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
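A rough sketch of the layout idea, where only the window size comes from Kconfig and the base is derived from the existing kernel virtual address map (the macro names and their relation to IOREMAP_TOP are assumptions):

/* Sketch only: the consistent (uncached) DMA window is carved out of the
 * kernel virtual address space next to the ioremap area, so only its size
 * remains a Kconfig option.
 */
#define CONSISTENT_END		(IOREMAP_TOP)
#define CONSISTENT_BASE		(CONSISTENT_END - CONFIG_CONSISTENT_SIZE)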
2009-04-07  dma-mapping: replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)  (Yang Hongyang)

Replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32).

Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
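For reference, the generic helper in include/linux/dma-mapping.h builds an n-bit mask, so the two spellings are equivalent for 32 bits:

#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

	/* before */
	dev->coherent_dma_mask = DMA_32BIT_MASK;
	/* after */
	dev->coherent_dma_mask = DMA_BIT_MASK(32);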
2008-12-03  powerpc: Add sync_*_for_* to dma_ops  (Becky Bruce)

We need to swap these out once we start using swiotlb, so add them to dma_ops. Create CONFIG_PPC_NEED_DMA_SYNC_OPS Kconfig option; this is currently enabled automatically if we're CONFIG_NOT_COHERENT_CACHE. In the future, this will also be enabled for builds that need swiotlb.

If PPC_NEED_DMA_SYNC_OPS is not defined, the dma_sync_*_for_* ops compile to nothing. Otherwise, they access the dma_ops pointers for the sync ops.

This patch also changes dma_sync_single_range_* to actually sync the range - previously it was using a generous dma_sync_single. dma_sync_single_* is now implemented as a dma_sync_single_range with an offset of 0.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
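A hedged sketch of how the accessor side can collapse the "single" case onto the range op (assuming the ops struct exposes sync_single_range_for_device; this is not the actual code):

#ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
static inline void dma_sync_single_for_device(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
	struct dma_mapping_ops *ops = get_dma_ops(dev);

	/* a "single" sync is just a range sync starting at offset 0 */
	if (ops->sync_single_range_for_device)
		ops->sync_single_range_for_device(dev, handle, 0, size, dir);
}
#else
/* compiles to nothing when the sync ops are not needed */
static inline void dma_sync_single_for_device(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
}
#endif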
2008-12-03  powerpc: Fix dma_map_sg() cache flushing on non coherent platforms  (Benjamin Herrenschmidt)

On PowerPC 4xx or other non cache-coherent platforms, we lost the appropriate cache flushing in dma_map_sg() when merging the 32 and 64-bit DMA code (commit 4fc665b88a79a45bae8bbf3a05563c27c7337c3d, "powerpc: Merge 32 and 64-bit dma code"). This restores it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
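The restored behaviour, sketched on the assumption that __dma_sync_page() is the per-segment cache maintenance helper (compiling away on cache-coherent platforms) and that the direct offset accessor supplies the bus address:

static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction direction,
			     struct dma_attrs *attrs)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		sg->dma_address = sg_phys(sg) + get_dma_direct_offset(dev);
		sg->dma_length = sg->length;
		/* flush/invalidate the CPU cache for each segment */
		__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);
	}

	return nents;
}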
2008-10-14  powerpc: Fix DMA offset for non-coherent DMA  (Benjamin Herrenschmidt)

After Becky's work we can almost have different DMA offsets between on-chip devices and PCI. Almost, because there's a problem with the non-coherent DMA code that basically ignores the programmed offset and uses the global one for everything. This fixes it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
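Conceptually the change sits in the address calculation of the non-coherent path; a hedged before/after sketch (variable names hypothetical):

	/* before: every device used the single global offset */
	dma_handle = phys_addr + PCI_DRAM_OFFSET;

	/* after: honour the offset programmed for this particular device */
	dma_handle = phys_addr + get_dma_direct_offset(dev);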
2008-09-24  powerpc: Merge 32 and 64-bit dma code  (Becky Bruce)

We essentially adopt the 64-bit dma code, with some changes to support 32-bit systems, including HIGHMEM. dma functions on 32-bit are now invoked via accessor functions which call the correct op for a device based on archdata dma_ops. If there is no archdata dma_ops, this defaults to dma_direct_ops.

In addition, the dma_map/unmap_page functions are added to dma_ops because we can't just fall back on map/unmap_single when HIGHMEM is enabled. In the case of dma_direct_*, we stop using map/unmap_single and just use the page version - this saves a lot of ugly ifdeffing. We leave map/unmap_single in the dma_ops definition, though, because they are needed by the iommu code, which does not implement map/unmap_page. Ideally, going forward, we will completely eliminate map/unmap_single and just have map/unmap_page, if it's workable for 64-bit.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
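A sketch of the accessor-style dispatch described above; the exact field layout, fallback handling, and map_page signature are assumptions:

static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
{
	/* per-device ops live in archdata; fall back to the direct ops */
	if (dev && dev->archdata.dma_ops)
		return dev->archdata.dma_ops;
	return &dma_direct_ops;
}

static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size,
		enum dma_data_direction dir)
{
	/* NULL: no special dma_attrs for this mapping */
	return get_dma_ops(dev)->map_page(dev, page, offset, size, dir, NULL);
}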
2008-09-24  powerpc: Drop archdata numa_node  (Becky Bruce)

Use the struct device's numa_node instead; use accessor functions to get/set numa_node.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
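For illustration, the generic accessors from linux/device.h that replace the archdata field (the helper wrapping them here is hypothetical):

#include <linux/device.h>

/* dev_to_node()/set_dev_node() read and write the device's NUMA node */
static void copy_numa_node(struct device *from, struct device *to)
{
	set_dev_node(to, dev_to_node(from));
}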
2008-09-24  powerpc: Move iommu dma ops from dma.c to dma-iommu.c  (Becky Bruce)

32-bit platforms are about to start using dma.c; move the iommu dma ops into their own file to make this a bit cleaner.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24  powerpc: Rename dma_64.c to dma.c  (Becky Bruce)

This is in preparation for the merge of the 32 and 64-bit dma code in arch/powerpc.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>