path: root/arch/arm/mm
Age  Commit message  Author
2009-03-28Merge branch 'master' into develRussell King
Conflicts: arch/arm/include/asm/elf.h arch/arm/kernel/module.c
2009-03-28[ARM] 5435/1: fix compile warning in sanity_check_meminfo()Mikael Pettersson
Compiling recent 2.6.29-rc kernels for ARM gives me the following warning: arch/arm/mm/mmu.c: In function 'sanity_check_meminfo': arch/arm/mm/mmu.c:697: warning: comparison between pointer and integer This is because commit 3fd9825c42c784a59b3b90bdf073f49d4bb42a8d "[ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()" in 2.6.29-rc5-git4 added a comparison of a pointer with PAGE_OFFSET, which is an integer. Fixed by casting PAGE_OFFSET to void *. Signed-off-by: Mikael Pettersson <mikpe@it.uu.se> Acked-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
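The warning goes away once the integer constant is cast to a pointer type before the comparison. A minimal sketch of the shape of the fix (not the exact hunk; the surrounding bank check is illustrative):

	#include <asm/memory.h>		/* __va(), PAGE_OFFSET */
	#include <asm/setup.h>		/* struct membank */

	/* sketch: reject a bank whose virtual start would fall below PAGE_OFFSET */
	static int bank_below_lowmem(const struct membank *bank)
	{
		/* the (void *) cast is the whole fix: __va() returns a pointer */
		return __va(bank->start) < (void *)PAGE_OFFSET;
	}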
2009-03-26Merge branch 'for-rmk' of git://gitorious.org/linux-gemini/mainline into develRussell King
Conflicts: arch/arm/mm/Kconfig Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-25ARM: Add support for FA526 v2Paulius Zaleckas
Adds support for Faraday FA526 core. This core is used at least by:
Cortina Systems Gemini and Centroid family
Cavium Networks ECONA family
Grain Media GM8120
Pixelplus ImageARM
Prolific PL-1029
Faraday IP evaluation boards
v2: - move TLB_BTB to separate patch - update copyrights
Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
2009-03-24Merge branch 'highmem' into develRussell King
2009-03-24Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into develroot
2009-03-23[ARM] pxa: add base support for Marvell's PXA168 processor lineEric Miao
"""The Marvell® PXA168 processor is the first in a family of application processors targeted at mass market opportunities in computing and consumer devices. It balances high computing and multimedia performance with low power consumption to support extended battery life, and includes a wealth of integrated peripherals to reduce overall BOM cost .... """ See http://www.marvell.com/featured/pxa168.jsp for more information. 1. Marvell Mohawk core is a hybrid of xscale3 and its own ARM core, there are many enhancements like instructions for flushing the whole D-cache, and so on 2. Clock reuses Russell's common clkdev, and added the basic support for UART1/2. 3. Devices are a bit different from the 'mach-pxa' way, the platform devices are now dynamically allocated only when necessary (i.e. when pxa_register_device() is called). Description for each device are stored in an array of 'struct pxa_device_desc'. Now that: a. this array of device description is marked with __initdata and can be freed up system is fully up b. which means board code has to add all needed devices early in his initializing function c. platform specific data can now be marked as __initdata since they are allocated and copied by platform_device_add_data() 4. only the basic UART1/2/3 are added, more devices will come later. Signed-off-by: Jason Chagas <chagas@marvell.com> Signed-off-by: Eric Miao <eric.miao@marvell.com>
2009-03-19Merge branch 'master' of git://git.marvell.com/orion into develRussell King
Conflicts: arch/arm/mach-mx1/devices.c
2009-03-15[ARM] ignore high memory with VIPT aliasing cachesNicolas Pitre
VIPT aliasing caches have issues of their own which are not yet handled. Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready, kmap/fixmap stuff doesn't take account of cache colouring, etc. If/when those issues are handled then this could be reverted. Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15[ARM] xsc3: add highmem support to L2 cache handling codeNicolas Pitre
On xsc3, L2 cache ops are possible only on virtual addresses. The code is rearranged so as to have a linear progression requiring the least amount of pte setups in the highmem case. To protect the virtual mapping so created, interrupts must be disabled, currently for up to a page's worth of address range at a time. The interrupt disabling is done in a way that minimizes the overhead within the inner loop. The alternative would be separate code for the highmem and non-highmem builds, which is less preferable. Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15[ARM] Feroceon: add highmem support to L2 cache handling codeNicolas Pitre
The choice is between looping over the physical range and performing single cache line operations, or mapping highmem pages somewhere, as cache range ops are possible only on virtual addresses. Because L2 range ops are much faster, we go with the latter, factoring out the physical-to-virtual address conversion and using a fixmap entry for it in the HIGHMEM case. Possible future optimizations to avoid the pte setup cost: - do the pte setup for highmem pages only - determine a threshold for doing line-by-line processing on physical addresses when the range is small Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15[ARM] introduce dma_cache_maint_page()Nicolas Pitre
This is a helper to be used by the DMA mapping API to handle cache maintenance for memory identified by a page structure instead of a virtual address. Those pages may or may not be highmem pages, and when they're highmem pages, they may or may not be virtually mapped. When they're not mapped then there is no L1 cache to worry about. But even in that case the L2 cache must be processed since unmapped highmem pages can still be L2 cached. Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15[ARM] mem_init(): make highmem pages available for useNicolas Pitre
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15[ARM] kmap supportNicolas Pitre
The kmap virtual area borrows a 2MB range at the top of the 16MB area below PAGE_OFFSET currently reserved for kernel modules and/or the XIP kernel. This 2MB corresponds to the range covered by 2 consecutive second-level page tables, or a single pmd entry as seen by the Linux page table abstraction. Because XIP kernels are unlikely to be seen on systems needing highmem support, there shouldn't be any shortage of VM space for modules (14 MB for modules is still way more than twice the typical usage). Because the virtual mapping of highmem pages can go away at any moment after kunmap() is called on them, we need to bypass the delayed cache flushing provided by flush_dcache_page() in that case. The atomic kmap versions are based on fixmaps, and __cpuc_flush_dcache_page() is used directly in that case. Signed-off-by: Nicolas Pitre <nico@marvell.com>
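In rough terms the fixmap-based atomic variant works as sketched below (a sketch modelled on the description above; set_fixmap_pte() and local_flush_tlb_kernel_page() are assumed helper names):

	#include <linux/highmem.h>
	#include <linux/smp.h>
	#include <linux/uaccess.h>
	#include <asm/fixmap.h>

	/* sketch of an atomic kmap built on per-CPU fixmap slots */
	static void *sketch_kmap_atomic(struct page *page, enum km_type type)
	{
		unsigned int idx;
		unsigned long vaddr;

		pagefault_disable();
		if (!PageHighMem(page))
			return page_address(page);	/* lowmem is always mapped */

		idx = type + KM_TYPE_NR * smp_processor_id();
		vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
		set_fixmap_pte(idx, mk_pte(page, kmap_prot));	/* assumed helper */
		local_flush_tlb_kernel_page(vaddr);		/* assumed helper */

		return (void *)vaddr;
	}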
2009-03-15[ARM] fixmap supportNicolas Pitre
This is the minimum fixmap interface expected to be implemented by architectures supporting highmem. We have a second level page table already allocated and covering 0xfff00000-0xffffffff because the exception vector page is located at 0xffff0000, and various cache tricks already use some entries above 0xffff0000. Therefore the PTEs covering 0xfff00000-0xfffeffff are free to be used. However the XScale cache flushing code already uses virtual addresses between 0xfffe0000 and 0xfffeffff. So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff. The Documentation/arm/memory.txt information is updated accordingly, including the information about the actual top of DMA memory mapping region which didn't match the code. Signed-off-by: Nicolas Pitre <nico@marvell.com>
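The resulting layout boils down to a handful of constants; the values come straight from the description above, but treat the exact macro set as a sketch:

	/* fixmap area: 0xfff00000 - 0xfffdffff (0xfffe0000+ stays with XScale) */
	#define FIXADDR_START		0xfff00000UL
	#define FIXADDR_TOP		0xfffe0000UL
	#define FIXADDR_SIZE		(FIXADDR_TOP - FIXADDR_START)

	/* one page per fixmap slot */
	#define FIX_KMAP_BEGIN		0
	#define FIX_KMAP_END		(FIXADDR_SIZE >> PAGE_SHIFT)

	#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
	#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)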
2009-03-13Merge branch 'for-rmk' of git://git.pengutronix.de/git/imx/linux-2.6 into develRussell King
Conflicts: arch/arm/mach-at91/gpio.c
2009-03-13[ARM] MX31/MX35: Add l2x0 cache supportSascha Hauer
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
2009-03-12[ARM] Fix virtual to physical translation macro corner casesRussell King
The current use of these macros works well when the conversion is entirely linear. In this case, we can be assured that the following holds true: __va(p + s) - s = __va(p) However, this is not always the case, especially when there is a non-linear conversion (eg, when there is a 3.5GB hole in memory.) In this case, if 's' is the size of the region (eg, PAGE_SIZE) and 'p' is the final page, the above is most definitely not true. So, we must ensure that __va() and __pa() are only used with valid kernel direct mapped RAM addresses. This patch tweaks the code to achieve this. Tested-by: Charles Moschel <fred99@carolina.rr.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
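A stand-alone illustration of the failing identity (userspace C with a completely made-up two-bank layout standing in for a machine with a hole in its RAM):

	#include <stdio.h>

	#define PAGE_SIZE 0x1000UL

	/* pretend bank 0 is phys 0x00000000-0x0fffffff at virt 0xc0000000,
	 * and bank 1 is phys 0x88000000-...       at virt 0xd0000000      */
	static unsigned long va(unsigned long pa)
	{
		if (pa < 0x10000000UL)
			return pa + 0xc0000000UL;
		return pa - 0x88000000UL + 0xd0000000UL;
	}

	int main(void)
	{
		unsigned long p = 0x0ffff000UL;	/* last page of bank 0 */

		/* p + PAGE_SIZE falls into the hole, so its translation is
		 * meaningless and the identity no longer holds */
		printf("va(p)       = 0x%lx\n", va(p));
		printf("va(p+s) - s = 0x%lx\n", va(p + PAGE_SIZE) - PAGE_SIZE);
		return 0;
	}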
2009-03-12[ARM] 5421/1: ftrace: fix crash due to tracing of __naked functionsUwe Kleine-König
This is a fix for the following crash observed in 2.6.29-rc3: http://lkml.org/lkml/2009/1/29/150 On ARM it doesn't make sense to trace a naked function because then mcount is called without stack and frame pointer being set up and there is no chance to restore the lr register to the value before mcount was called. Reported-by: Matthias Kaehlcke <matthias@kaehlcke.net> Tested-by: Matthias Kaehlcke <matthias@kaehlcke.net> Cc: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: Steven Rostedt <rostedt@home.goodmis.org> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
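The essence of the fix is to keep the mcount profiling call out of naked functions at the compiler level; a sketch of the idea (the actual location and form of the change may differ):

	/* sketch: a naked function has no prologue, so it must never be traced */
	#define __naked		__attribute__((naked)) notrace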
2009-03-12[ARM] 5422/1: ARM: MMU: add a Non-cacheable Normal executable memory typePaul Walmsley
This patch adds a Non-cacheable Normal ARM executable memory type, MT_MEMORY_NONCACHED. On OMAP3, this is used for rapid dynamic voltage/frequency scaling in the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the VDD2 voltage domain, and its clock frequency must change along with voltage. The SDRC clock change code cannot run from SDRAM itself, since SDRAM accesses are paused during the clock change. So the current implementation of the DVFS code executes from OMAP on-chip SRAM, aka "OCM RAM." If the OCM RAM pages are marked as Cacheable, the ARM cache controller will attempt to flush dirty cache lines to the SDRC, so it can fill those lines with OCM RAM instruction code. The problem is that the SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU subsystem to hang. TI's original solution to this problem was to mark the OCM RAM sections as Strongly Ordered memory, thus preventing caching. This is overkill: since the memory is marked as non-bufferable, OCM RAM writes become needlessly slow. The idea of "Strongly Ordered SRAM" is also conceptually disturbing. Previous LAKML list discussion is here: http://www.spinics.net/lists/arm-kernel/msg54312.html This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future patch. Cc: Richard Woodruff <r-woodruff2@ti.com> Signed-off-by: Paul Walmsley <paul@pwsan.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
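Typical use of such a memory type would be a static mapping set up from a board's map_io hook; the virtual address, physical address and size below are purely illustrative assumptions:

	#include <linux/init.h>
	#include <asm/sizes.h>
	#include <asm/mach/map.h>

	static struct map_desc sram_map __initdata = {
		.virtual	= 0xfe400000,			/* illustrative VA */
		.pfn		= __phys_to_pfn(0x40200000),	/* illustrative PA */
		.length		= SZ_64K,
		.type		= MT_MEMORY_NONCACHED,		/* Normal, non-cacheable */
	};

	static void __init board_map_io(void)
	{
		iotable_init(&sram_map, 1);
	}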
2009-03-03[ARM] 5416/1: Use unused address in v6_early_abortSeth Forshee
The target of the strex instruction used to clear the exclusive monitor is currently the top of the stack. If the store succeeds, this corrupts r0 in pt_regs. Use the next stack location instead of the current one to prevent any chance of corrupting an in-use address. Signed-off-by: Seth Forshee <seth.forshee@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-02-19[ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()Nicolas Pitre
In the non-highmem case, if two memory banks of 1GB each are provided, the second bank would evade suppression since its virtual base would be 0. Fix this by disallowing any memory bank whose virtual base address is found to be lower than PAGE_OFFSET. Reported-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-28[ARM] 5366/1: fix shared memory coherency with VIVT L1 + L2 cachesNicolas Pitre
When there are multiple L1-aliasing userland mappings of the same physical page, we currently remap each of them uncached, to prevent VIVT cache aliasing issues. (E.g. writes to one of the mappings not being immediately visible via another mapping.) However, when we do this remapping, there could still be stale data in the L2 cache, and an uncached mapping might bypass L2 and go straight to RAM. This would cause reads from such mappings to see old data (until the dirty L2 line is eventually evicted.) This issue is solved by forcing a L2 cache flush whenever the shared page is made L1 uncacheable. Ideally, we would make L1 uncacheable and L2 cacheable as L2 is PIPT. But Feroceon does not support that combination, and the TEX=5 C=0 B=0 encoding for XSc3 doesn't appear to work in practice. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
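Conceptually the added step looks like this (a sketch only: the helper below is made up, and the real change uses the CPU's own cache maintenance functions rather than the outer-cache interface shown here):

	#include <linux/mm.h>
	#include <asm/cacheflush.h>

	/* sketch: push any dirty L2 lines of the page to RAM before the
	 * uncached user mapping is installed */
	static void flush_l2_for_shared_page(struct page *page)
	{
		unsigned long phys = page_to_phys(page);

		outer_flush_range(phys, phys + PAGE_SIZE);
	}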
2009-01-25[ARM] fix section-based ioremapRussell King
Tomi Valkeinen reports: Running with latest linux-omap kernel on OMAP3 SDP board, I have problem with iounmap(). It looks like iounmap() does not properly free large areas. Below is a test which fails for me in 6-7 loops.

	for (i = 0; i < 200; ++i) {
		vaddr = ioremap(paddr, size);
		if (!vaddr) {
			printk("couldn't ioremap\n");
			break;
		}
		iounmap(vaddr);
	}

The changes to vmalloc.c weren't reflected in the ARM ioremap implementation. Turns out the fix is rather simple. Tested-by: Tomi Valkeinen <tomi.valkeinen@nokia.com> Tested-by: Matt Gerassimoff <mgeras@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-24[ARM] fix StrongARM-11x0 page copy implementationRussell King
Which had the 'from' and 'to' pages reversed. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-12[ARM] 5364/1: allow flush_ioremap_region() to be used from modulesNicolas Pitre
Without this, the pxa2xx-flash driver cannot be used as a module. Reported-by: Chris Lawrence <chrisdl@netspace.net.au> Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-08NOMMU: Rename ARM's struct vm_regionDavid Howells
Rename ARM's struct vm_region so that I can introduce my own global version for NOMMU. It's feasible that the ARM version may wish to use my global one instead. The NOMMU vm_region struct defines areas of the physical memory map that are under mmap. This may include chunks of RAM or regions of memory mapped devices, such as flash. It is also used to retain copies of file content so that shareable private memory mappings of files can be made. As such, it may be compatible with what is described in the banner comment for ARM's vm_region struct. Signed-off-by: David Howells <dhowells@redhat.com>
2008-12-17Merge branch 'mxc-pu-imxfb' of git://pasiphae.extern.pengutronix.de/git/imx/linux-2.6 into develRussell King
2008-12-15Merge branch 'omap3-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6 into develRussell King
2008-12-15[ARM] Ensure linux/hardirqs.h is included where requiredRussell King
... for the removal of it from asm-generic/local.h Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-14[ARM] eliminate NULL test and memset after alloc_bootmemJulia Lawall
As noted by Akinobu Mita in patch b1fceac2b9e04d278316b2faddf276015fc06e3b, alloc_bootmem and related functions never return NULL and always return a zeroed region of memory. Thus a NULL test or memset after calls to these functions is unnecessary. This was fixed using the following semantic patch. (http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@@
expression E;
statement S;
@@

E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
(
- BUG_ON (E == NULL);
|
- if (E == NULL) S
)

@@
expression E,E1;
@@

E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
- memset(E,0,E1);
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-07[ARM] Fix alignment fault handling for ARMv6 and later CPUsRussell King
On ARMv6 and later CPUs, it is possible for userspace processes to get stuck on a misaligned load or store due to the "ignore fault" setting; unlike previous CPUs, retrying the instruction without the 'A' bit set does not always cause the load to succeed. We have no real option but to default to fixing up alignment faults on these CPUs, and having the CPU fix up those misaligned accesses which it can. Reported-by: Wolfgang Grandegger <wg@grandegger.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
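In effect the default policy becomes "fix up in the kernel" on v6+ cores; sketched below with the mode constants spelled out (treat this as an outline of the change, not the exact code):

	#include <asm/system.h>

	#define UM_WARN		(1 << 0)
	#define UM_FIXUP	(1 << 1)
	#define UM_SIGNAL	(1 << 2)

	static int ai_usermode;		/* was 0 == ignore on all CPUs */

	static int __init alignment_init(void)
	{
		/* v6+ may leave a misaligned access permanently faulting if it
		 * is merely retried, so emulate it in the kernel by default */
		if (cpu_architecture() >= CPU_ARCH_ARMv6)
			ai_usermode = UM_FIXUP;
		return 0;
	}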
2008-12-02Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into develRussell King
Conflicts: arch/arm/mach-pxa/pxa25x.c
2008-12-02[ARM] pxa: add base PXA935 support due to CPUID changeEric Miao
PXA935 has changed its implementor ID from Intel to Marvell, this patch modifies arch/arm/boot/compressed/head.S and proc-xsc3.S to support a smooth bootup. Signed-off-by: Eric Miao <eric.miao@marvell.com>
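The distinction being keyed on is the implementor byte of the main ID register; a rough illustration in C (the exact masks used in head.S and proc-xsc3.S are not reproduced here):

	#include <asm/cputype.h>

	/* 'i' == 0x69 (Intel), 'V' == 0x56 (Marvell) in MIDR[31:24] */
	static int implementor_is_xscale_vendor(void)
	{
		unsigned int impl = read_cpuid_id() >> 24;

		return impl == 0x69 || impl == 0x56;
	}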
2008-12-01Merge branch 'for-rmk-realview' of git://linux-arm.org/linux-2.6 into develRussell King
2008-12-01RealView: Add Cortex-A9 support to the EB boardJon Callan
This patch adds the necessary definitions and Kconfig entries to enable Cortex-A9 (ARMv7 SMP) tiles on the RealView/EB board. Signed-off-by: Jon Callan <Jon.Callan@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2008-12-01[ARM] use asm/sections.hRussell King
Update to use the asm/sections.h header rather than declaring these symbols ourselves. Change __data_start to _data to conform with the naming found within asm/sections.h. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-29[ARM] Remove linux/sched.h from asm/cacheflush.h and asm/uaccess.hRussell King
... and fix those drivers that were incorrectly relying upon that include. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28[ARM] Remove unnecessary mach/hardware.h includes in arch/arm/mmRussell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28Merge branch 'highmem' into develRussell King
Conflicts: arch/arm/mach-clps7500/include/mach/memory.h
2008-11-28[ARM] remove bogus #ifdef CONFIG_HIGHMEM in show_pte()Nicolas Pitre
The restriction on !CONFIG_HIGHMEM is unneeded since page tables are currently never allocated with highmem pages, and it actually disables the PTE dump whenever highmem is configured. Let's have a dynamic test to better describe the current limitation instead. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28[ARM] prevent the vmalloc cmdline argument from eating all memoryNicolas Pitre
Commit 8d5796d2ec6b5a4e7a52861144e63af438d6f8f7 allows for the vmalloc area to be resized from the kernel cmdline. Make sure it cannot overlap with RAM entirely. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
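The shape of the added check, as a sketch (the real test sits in the vmalloc= early-parameter handler in mmu.c; the 32MB lowmem floor used here is an assumption):

	#include <linux/kernel.h>
	#include <asm/memory.h>
	#include <asm/pgtable.h>
	#include <asm/sizes.h>

	static unsigned long vmalloc_reserve;	/* parsed from vmalloc= */

	static void __init clamp_vmalloc_reserve(void)
	{
		/* always leave some lowmem between PAGE_OFFSET and vmalloc */
		if (vmalloc_reserve > VMALLOC_END - (PAGE_OFFSET + SZ_32M)) {
			vmalloc_reserve = VMALLOC_END - (PAGE_OFFSET + SZ_32M);
			printk(KERN_WARNING
			       "vmalloc area too large, limiting to %luMB\n",
			       vmalloc_reserve >> 20);
		}
	}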
2008-11-28[ARM] mem_init() cleanupsNicolas Pitre
Make free_area() arguments pfn based, and return number of freed pages. This will simplify highmem initialization later. Also, codepages, datapages and initpages are actually codesize, datasize and initsize. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28[ARM] split highmem into its own memory bankNicolas Pitre
Doing so will greatly simplify the bootmem initialization code as each bank is therefore entirely lowmem or highmem with no crossing between those zones. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28[ARM] rationalize memory configuration code some moreNicolas Pitre
Currently there are two instances of struct meminfo: one in kernel/setup.c marked __initdata, and another in mm/init.c with permanent storage. Let's keep only the latter and populate that permanent version directly from arm_add_memory(). Also move common validation tests between the MMU and non-MMU cases into arm_add_memory() to remove some duplication. Protection against overflowing the membank array is also moved in there in order to cover the kernel cmdline parsing path as well. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28[ARM] fix a couple clear_user_highpage assembly constraintsNicolas Pitre
In all cases the kaddr is assigned an input register even though it is modified in the assembly code. Let's assign a new variable to the modified value and mark those inline asm statements as volatile, otherwise they get optimized away because their output variable is not otherwise used. Also fix a few conversion errors in copypage-feroceon.c and copypage-v4mc.c. Signed-off-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
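Reduced to its essentials the constraint pattern looks like this (a sketch loosely modelled on the v4wb variant; the store loop is simplified and is not the real cache-handling sequence):

	#include <linux/highmem.h>
	#include <asm/page.h>

	static void sketch_clear_user_highpage(struct page *page, unsigned long vaddr)
	{
		void *ptr, *kaddr = kmap_atomic(page, KM_USER0);

		/* kaddr is advanced inside the asm, so expose the modified
		 * value as an output ("=r") tied to the input, and keep the
		 * asm volatile because that output is never read afterwards */
		asm volatile(
		"	mov	r1, %2\n"
		"	mov	r2, #0\n"
		"	mov	r3, #0\n"
		"1:	stmia	%0!, {r2, r3}\n"
		"	stmia	%0!, {r2, r3}\n"
		"	subs	r1, r1, #1\n"
		"	bne	1b"
		: "=r" (ptr)
		: "0" (kaddr), "I" (PAGE_SIZE / 16)
		: "r1", "r2", "r3", "cc");

		kunmap_atomic(kaddr, KM_USER0);
	}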
2008-11-27[ARM] clearpage: provide our own clear_user_highpage()Russell King
For similar reasons as copy_user_page(), we want to avoid the additional kmap_atomic if it's unnecessary. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27[ARM] copypage: provide our own copy_user_highpage()Russell King
We used to override the copy_user_page() function. However, this is not only inefficient, it also causes additional complexity for highmem support, since we convert from a struct page to a kernel direct mapped address and back to a struct page again. Moreover, with highmem support, we end up pointlessly setting up kmap entries for pages which we're going to remap. So, push the kmapping down into the copypage implementation files where it's required. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
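The new hook deals purely in struct page and does its own atomic kmap, roughly as below (a sketch; the real per-CPU implementations add the cache handling that makes each variant interesting):

	#include <linux/highmem.h>
	#include <asm/page.h>

	static void sketch_copy_user_highpage(struct page *to, struct page *from,
					      unsigned long vaddr)
	{
		void *kto, *kfrom;

		kto   = kmap_atomic(to, KM_USER0);
		kfrom = kmap_atomic(from, KM_USER1);
		copy_page(kto, kfrom);
		kunmap_atomic(kfrom, KM_USER1);
		kunmap_atomic(kto, KM_USER0);
	}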
2008-11-27[ARM] copypage: convert assembly files to CRussell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27Merge branch 'for-rmk' of git://linux-arm.org/linux-2.6 into develRussell King