path: root/arch/ia64/mm
Age  Commit message  Author
2009-06-17  [IA64] Convert ia64 to use int-ll64.h  (Matthew Wilcox)
It is generally agreed that it would be beneficial for u64 to be an unsigned long long on all architectures. ia64 (in common with several other 64-bit architectures) currently uses unsigned long. Migrating piecemeal is too painful; this giant patch fixes all compilation warnings and errors that come as a result of switching to use int-ll64.h. Note that userspace will still see __u64 defined as unsigned long. This is important as it affects C++ name mangling. [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use u64 for start/end rather than unsigned long] Signed-off-by: Matthew Wilcox <willy@linux.intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
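As a hedged illustration of the fallout such a type switch produces (example code, not taken from the patch): printk() format strings that matched the old 'unsigned long' definition of u64 need "%llx"/"%llu" (or an explicit cast) once u64 is 'unsigned long long'.

    static void example_report(u64 phys_addr)
    {
            /* "%lx" matched the old u64; with u64 == unsigned long long,
             * "%llx" avoids the new format warnings. */
            printk(KERN_INFO "phys_addr=0x%llx\n", phys_addr);
    }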
2009-06-15  [IA64] fix compile error in arch/ia64/mm/extable.c  (Rusty Russell)
ad6561dffa17f17bb68d7207d422c26c381c4313 ("module: trim exception table on init free.") put a bogus trim_init_extable() function into ia64 which didn't compile. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-06-12  module: trim exception table on init free.  (Rusty Russell)
It's theoretically possible that there are exception table entries which point into the (freed) init text of modules. These could cause future problems if other modules get loaded into that memory and cause an exception as we'd see the wrong fixup.

The only case I know of is kvm-intel.ko (when CONFIG_CC_OPTIMIZE_FOR_SIZE=n). Amerigo fixed this long-standing FIXME in the x86 version, but this patch is more general.

This implements trim_init_extable(); most archs are simple since they use the standard lib/extable.c sort code. Alpha and IA64 use relative addresses in their fixups, so their trimming is a slight variation. Sparc32 is unique; it doesn't seem to define ARCH_HAS_SORT_EXTABLE, yet it defines its own sort_extable() which overrides the one in lib. It doesn't sort, so we have to mark deleted entries instead of actually trimming them.

Inspired-by: Amerigo Wang <amwang@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: linux-alpha@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
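For flavor, the generic trimming for the sorted-table case can be approximated as below (a sketch from memory, not the verbatim lib/extable.c code): because each module's init region is contiguous, entries pointing into it cluster at one end of the sorted table and can simply be dropped.

    void trim_init_extable_sketch(struct module *m)
    {
            /* Drop leading entries whose faulting instruction lies in init text. */
            while (m->num_exentries &&
                   within_module_init(m->extable[0].insn, m)) {
                    m->extable++;
                    m->num_exentries--;
            }
            /* Drop trailing ones likewise. */
            while (m->num_exentries &&
                   within_module_init(m->extable[m->num_exentries - 1].insn, m))
                    m->num_exentries--;
    }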
2009-04-01  [IA64] BUG to BUG_ON changes  (Stoyan Gaydarov)
Replace:

    if (test)
            BUG();

with:

    BUG_ON(test);

Signed-off-by: Stoyan Gaydarov <stoyboyker@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
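For reference, the generic BUG_ON() is defined roughly as follows, which is why the conversion above is purely mechanical:

    #define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)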
2009-03-31  Pull pvops into release branch  (Tony Luck)
2009-03-26  ia64/pv_ops: gate page paravirtualization.  (Isaku Yamahata)
Paravirtualize the gate page by allowing each pv_ops instance to define its own gate page. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-03-26  ia64/pv_ops: add hooks to paravirtualize fsyscall implementation.  (Isaku Yamahata)
Add two hooks, paravirt_get_fsyscall_table() and paravirt_get_fsys_bubble_down(), to paravirtualize the fsyscall implementation. This patch only adds the hooks for fsyscall and doesn't paravirtualize it yet. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-03-16  cpumask: use mm_cpumask() wrapper: ia64  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
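An illustrative call site for the change described above (hypothetical helper, not from the patch):

    static void mark_mm_cpu(struct mm_struct *mm, int cpu)
    {
            /* Old style poked the field directly with a deprecated op:
             *     cpu_set(cpu, mm->cpu_vm_mask);
             * New style goes through the wrapper and the pointer-based API: */
            cpumask_set_cpu(cpu, mm_cpumask(mm));
    }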
2009-02-18  mm: fix memmap init for handling memory hole  (KAMEZAWA Hiroyuki)
Now, early_pfn_in_nid(PFN, NID) may return false if PFN is a hole, in which case memmap initialization is not done. This was a problem for sparc boot. To fix this, the PFN should be initialized and marked as PG_reserved. This patch changes early_pfn_in_nid() to return true if PFN is a hole.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reported-by: David Miller <davem@davemlloft.net>
Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: <stable@kernel.org> [2.6.25.x, 2.6.26.x, 2.6.27.x, 2.6.28.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-02-18  mm: clean up for early_pfn_to_nid()  (KAMEZAWA Hiroyuki)
What's happening is that the assertion in mm/page_alloc.c:move_freepages() is triggering:

    BUG_ON(page_zone(start_page) != page_zone(end_page));

Once I knew this was what was happening, I added some annotations:

    if (unlikely(page_zone(start_page) != page_zone(end_page))) {
            printk(KERN_ERR "move_freepages: Bogus zones: "
                   "start_page[%p] end_page[%p] zone[%p]\n",
                   start_page, end_page, zone);
            printk(KERN_ERR "move_freepages: "
                   "start_zone[%p] end_zone[%p]\n",
                   page_zone(start_page), page_zone(end_page));
            printk(KERN_ERR "move_freepages: "
                   "start_pfn[0x%lx] end_pfn[0x%lx]\n",
                   page_to_pfn(start_page), page_to_pfn(end_page));
            printk(KERN_ERR "move_freepages: "
                   "start_nid[%d] end_nid[%d]\n",
                   page_to_nid(start_page), page_to_nid(end_page));
    ...

And here's what I got:

    move_freepages: Bogus zones: start_page[2207d0000] end_page[2207dffc0] zone[fffff8103effcb00]
    move_freepages: start_zone[fffff8103effcb00] end_zone[fffff8003fffeb00]
    move_freepages: start_pfn[0x81f600] end_pfn[0x81f7ff]
    move_freepages: start_nid[1] end_nid[0]

My memory layout on this box is:

    [    0.000000] Zone PFN ranges:
    [    0.000000]   Normal   0x00000000 -> 0x0081ff5d
    [    0.000000] Movable zone start PFN for each node
    [    0.000000] early_node_map[8] active PFN ranges
    [    0.000000]     0: 0x00000000 -> 0x00020000
    [    0.000000]     1: 0x00800000 -> 0x0081f7ff
    [    0.000000]     1: 0x0081f800 -> 0x0081fe50
    [    0.000000]     1: 0x0081fed1 -> 0x0081fed8
    [    0.000000]     1: 0x0081feda -> 0x0081fedb
    [    0.000000]     1: 0x0081fedd -> 0x0081fee5
    [    0.000000]     1: 0x0081fee7 -> 0x0081ff51
    [    0.000000]     1: 0x0081ff59 -> 0x0081ff5d

So it's a block move in that 0x81f600-->0x81f7ff region which triggers the problem.

This patch:

Declaration of early_pfn_to_nid() is scattered over per-arch include files, and it seems it's complicated to know when the declaration is used. I think it makes fix-for-memmap-init not easy.

This patch moves all declarations to include/linux/mm.h. After this:

    if !CONFIG_NODES_POPULATES_NODE_MAP && !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
         -> use the static definition in include/linux/mm.h
    else if !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
         -> use the generic definition in mm/page_alloc.c
    else
         -> the per-arch back end function will be called.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reported-by: David Miller <davem@davemlloft.net>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: <stable@kernel.org> [2.6.25.x, 2.6.26.x, 2.6.27.x, 2.6.28.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-06  mm: show node to memory section relationship with symlinks in sysfs  (Gary Hade)
Show node to memory section relationship with symlinks in sysfs.

Add /sys/devices/system/node/nodeX/memoryY symlinks for all the memory sections located on nodeX. For example:

    /sys/devices/system/node/node1/memory135 -> ../../memory/memory135

indicates that memory section 135 resides on node1.

Also revises documentation to cover this change as well as updating Documentation/ABI/testing/sysfs-devices-memory to include descriptions of memory hotremove files 'phys_device', 'phys_index', and 'state' that were previously not described there.

In addition to it always being a good policy to provide users with the maximum possible amount of physical location information for resources that can be hot-added and/or hot-removed, the following are some (but likely not all) of the user benefits provided by this change.

Immediate:
- Provides information needed to determine the specific node on which a defective DIMM is located. This will reduce system downtime when the node or defective DIMM is swapped out.
- Prevents unintended onlining of a memory section that was previously offlined due to a defective DIMM. This could happen during node hot-add when the user or node hot-add assist script onlines _all_ offlined sections due to user or script inability to identify the specific memory sections located on the hot-added node. The consequences of reintroducing the defective memory could be ugly.
- Provides information needed to vary the amount and distribution of memory on specific nodes for testing or debugging purposes.

Future:
- Will provide information needed to identify the memory sections that need to be offlined prior to physical removal of a specific node.

Symlink creation during boot was tested on 2-node x86_64, 2-node ppc64, and 2-node ia64 systems. Symlink creation during physical memory hot-add tested on a 2-node x86_64 system.

Signed-off-by: Gary Hade <garyhade@us.ibm.com>
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-11-04  [IA64] fix the difference between node_mem_map and node_start_pfn  (Ken'ichi Ohmichi)
makedumpfile[1] cannot run on an ia64 discontigmem kernel, because the member node_mem_map of struct pgdat_list has an invalid value. This patch fixes it.

node_start_pfn shows the start pfn of each node, and node_mem_map should point to the 'struct page' of each node's node_start_pfn. On my machine, node0's node_start_pfn shows 0x400 and its node_mem_map points to 0xa0007fffbf000000. This address is the same as vmem_map, so the node_mem_map points to the 'struct page' of pfn 0, even though its node_start_pfn shows 0x400. The cause is the round-down of min_pfn in count_node_pages(), which leaves node0's node_mem_map pointing to the 'struct page' of an inactive pfn (0x0).

[1] makedumpfile: dump filtering command
    https://sourceforge.net/projects/makedumpfile/

Signed-off-by: Ken'ichi Ohmichi <oomichi@mxs.nes.nec.co.jp>
Cc: Bernhard Walle <bwalle@suse.de>
Cc: Jay Lan <jlan@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-10-23  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6  (Linus Torvalds)
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6: (41 commits)
  [IA64] Fix annoying IA64_TR_ALLOC_MAX message.
  [IA64] kill sys32_pipe
  [IA64] remove sys32_pause
  [IA64] Add Variable Page Size and IA64 Support in Intel IOMMU
  ia64/pv_ops: paravirtualized instruction checker.
  ia64/xen: a recipe for using xen/ia64 with pv_ops.
  ia64/pv_ops: update Kconfig for paravirtualized guest and xen.
  ia64/xen: preliminary support for save/restore.
  ia64/xen: define xen machine vector for domU.
  ia64/pv_ops/xen: implement xen pv_time_ops.
  ia64/pv_ops/xen: implement xen pv_irq_ops.
  ia64/pv_ops/xen: define the nubmer of irqs which xen needs.
  ia64/pv_ops/xen: implement xen pv_iosapic_ops.
  ia64/pv_ops/xen: paravirtualize entry.S for ia64/xen.
  ia64/pv_ops/xen: paravirtualize ivt.S for xen.
  ia64/pv_ops/xen: paravirtualize DO_SAVE_MIN for xen.
  ia64/pv_ops/xen: define xen paravirtualized instructions for hand written assembly code
  ia64/pv_ops/xen: define xen pv_cpu_ops.
  ia64/pv_ops/xen: define xen pv_init_ops for various xen initialization.
  ia64/pv_ops/xen: elf note based xen startup.
  ...
2008-10-20  mm: cleanup to make remove_memory() arch-neutral  (Badari Pulavarty)
There is nothing architecture-specific about remove_memory(). The remove_memory() function is common to all architectures which support hotplug memory remove. Instead of duplicating it in every architecture, collapse it into an arch-neutral function.

[akpm@linux-foundation.org: fix the export]
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Gary Hade <garyhade@us.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-17  [IA64] Fix annoying IA64_TR_ALLOC_MAX message.  (Tony Luck)
Madison cpus support 64 TR registers. Increase IA64_TR_ALLOC_MAX to 64. Also fixup the messages that get printed when this limit is exceeded. Repeating for every cpu is too noisy. Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-10-13  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6  (David Woodhouse)
Conflicts:
        include/asm-x86/statfs.h
2008-09-29  [IA64] Put the space for cpu0 per-cpu area into .data section  (Tony Luck)
The initial fix for making sure that we can access percpu variables in all C code (commit 10617bbe84628eb18ab5f723d3ba35005adde143) inadvertently allocated the memory in the "percpu" section of the vmlinux ELF executable. This confused kexec/dump. Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-09-06  Remove asm/a.out.h files for all architectures without a.out support.  (Adrian Bunk)
This patch also includes the required removal of the (unused) inclusions of <asm/a.out.h> and <linux/a.out.h> in the arch/ code for these architectures. [dwmw2: updated for 2.6.27-rc] Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2008-08-12  [IA64] Ensure cpu0 can access per-cpu variables in early boot code  (Tony Luck)
ia64 handles per-cpu variables a little differently from other architectures in that it maps the physical memory allocated for each cpu at a constant virtual address (0xffffffffffff0000). This mapping is not enabled until the architecture-specific cpu_init() function is run, which causes problems since some generic code is run before this point. In particular, when CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to per-cpu memory at the first printk() call, so the boot will fail without the kernel printing anything to the console.

Fix this by allocating percpu memory for cpu0 in the kernel data section and doing all initialization to enable percpu access in head.S before calling any generic code. Other cpus must take care not to access per-cpu variables too early, but their code path from start_secondary() to cpu_init() is all in arch/ia64.

Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-07-30  GRU Driver: hardware data structures  (Jack Steiner)
This series of patches adds a driver for the SGI UV GRU. The driver is still in development but it currently compiles for both x86_64 & IA64. All simple regression tests pass on IA64. Although features remain to be added, I'd like to start the process of getting the driver into the kernel. Additional kernel drivers will depend on services provided by the GRU driver.

The GRU is a hardware resource located in the system chipset. The GRU contains memory that is mmaped into the user address space. This memory is used to communicate with the GRU to perform functions such as load/store, scatter/gather, bcopy, AMOs, etc. The GRU is directly accessed by user instructions using user virtual addresses. GRU instructions (ex., bcopy) use user virtual addresses for operands.

The GRU contains a large TLB that is functionally very similar to processor TLBs. Because this external unit contains a TLB with user virtual addresses, it requires callouts from the core VM system when certain types of changes are made to the process page tables. There are several MMUOPS patches currently being discussed but none has been accepted into the kernel. The GRU driver is built using version V18 from Andrea Arcangeli.

This patch contains the definitions of the hardware GRU data structures that are used by the driver to manage the GRU.

[akpm@linux-foundation.org: export hpage_shift]
Signed-off-by: Jack Steiner <steiner@sgi.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24  bootmem: replace node_boot_start in struct bootmem_data  (Johannes Weiner)
Almost all users of this field need a PFN instead of a physical address, so replace node_boot_start with node_min_pfn. [Lee.Schermerhorn@hp.com: fix spurious BUG_ON() in mark_bootmem()] Signed-off-by: Johannes Weiner <hannes@saeureba.de> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24  hugetlb: introduce pud_huge  (Andi Kleen)
Straightforward extensions for huge pages located in the PUD instead of PMDs. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
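Illustratively, on an architecture whose huge pages live only at the PMD level, the new hook can simply report that PUD-sized huge pages do not exist (a sketch, not the actual patch):

    int pud_huge(pud_t pud)
    {
            /* No PUD-sized huge pages on this architecture. */
            return 0;
    }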
2008-07-24  hugetlb: modular state for hugetlb page size  (Andi Kleen)
The goal of this patchset is to support multiple hugetlb page sizes. This is achieved by introducing a new struct hstate structure, which encapsulates the important hugetlb state and constants (eg. huge page size, number of huge pages currently allocated, etc). The hstate structure is then passed around the code which requires these fields, they will do the right thing regardless of the exact hstate they are operating on. This patch adds the hstate structure, with a single global instance of it (default_hstate), and does the basic work of converting hugetlb to use the hstate. Future patches will add more hstate structures to allow for different hugetlbfs mounts to have different page sizes. [akpm@linux-foundation.org: coding-style fixes] Acked-by: Adam Litke <agl@us.ibm.com> Acked-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
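A rough sketch of the idea (field and helper names are illustrative, not the exact kernel definitions): the page-size order and pool counters live in one object, and everything else is derived from it.

    struct hstate_sketch {
            unsigned int  order;            /* huge page is PAGE_SIZE << order */
            unsigned long nr_huge_pages;    /* pages allocated to the pool */
            unsigned long free_huge_pages;  /* pages currently unused */
    };

    static inline unsigned long sketch_huge_page_size(const struct hstate_sketch *h)
    {
            return PAGE_SIZE << h->order;
    }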
2008-07-24  mm: remove double indirection on tlb parameter to free_pgd_range() & Co  (Jan Beulich)
The double indirection here is not needed anywhere and hence (at least) confusing. Signed-off-by: Jan Beulich <jbeulich@novell.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Acked-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
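The flavor of the interface change (prototypes abbreviated and reconstructed here, so treat them as an approximation):

    /* Before: callers had to pass the address of their mmu_gather pointer. */
    void free_pgd_range(struct mmu_gather **tlb, unsigned long addr,
                        unsigned long end, unsigned long floor, unsigned long ceiling);

    /* After: a plain pointer suffices, since the extra level was never used. */
    void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
                        unsigned long end, unsigned long floor, unsigned long ceiling);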
2008-07-24  mm: move bootmem descriptors definition to a single place  (Johannes Weiner)
There are a lot of places that define either a single bootmem descriptor or an array of them. Use only one central array with MAX_NUMNODES items instead. Signed-off-by: Johannes Weiner <hannes@saeurebad.de> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Richard Henderson <rth@twiddle.net> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Kyle McMartin <kyle@parisc-linux.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: David S. Miller <davem@davemloft.net> Cc: Yinghai Lu <yhlu.kernel@gmail.com> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-05-15  [IA64] fix personality(PER_LINUX32) performance issue  (Huang, Xiaolan)
The patch aims to fix a performance issue for the syscall personality(PER_LINUX32). On an IA-64 box, the syscall personality(PER_LINUX32) has poor performance because it fails to find the Linux/x86 execution domain. It then tries to load a kernel module, which always fails, so it uses the default execution domain PER_LINUX instead. Requesting kernel modules is very expensive, and this caused the performance issue (see the function lookup_exec_domain in kernel/exec_domain.c). To resolve the issue, the execution domain Linux/x86 is always registered at initialization time for the IA-64 architecture. Signed-off-by: Xiaolan Huang <xiaolan.huang@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-29  [IA64] bugfix: nptcg breaks cpu-hotadd  (Hidetoshi Seto)
If "max_purges" from PAL is 0, it actually means 1. However it was not handled later when a hot-added cpu pass the max_purges from PAL. This makes systems easy to go BUG_ON(). Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-28  hotplug-memory: make online_page() common  (Jeremy Fitzhardinge)
All architectures use an effectively identical definition of online_page(), so just make it common code. x86-64, ia64, powerpc and sh are actually identical; x86-32 is slightly different. x86-32's differences arise because it puts its hotplug pages in the highmem zone. We can handle this in the generic code by inspecting the page to see if its in highmem, and update the totalhigh_pages count appropriately. This leaves init_32.c:free_new_highpage with a single caller, so I folded it into add_one_highpage_init. I also removed an incorrect comment referring to the NUMA case; any NUMA details have already been dealt with by the time online_page() is called. [akpm@linux-foundation.org: fix indenting] Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Acked-by: Dave Hansen <dave@linux.vnet.ibm.com> Reviewed-by: KAMEZAWA Hiroyuki <kamez.hiroyu@jp.fujitsu.com> Tested-by: KAMEZAWA Hiroyuki <kamez.hiroyu@jp.fujitsu.com> Cc: Yasunori Goto <y-goto@jp.fujitsu.com> Cc: Christoph Lameter <clameter@sgi.com> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
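A simplified sketch of what the arch-neutral helper boils down to (highmem and num_physpages bookkeeping omitted; not the verbatim kernel code):

    void online_page_sketch(struct page *page)
    {
            totalram_pages++;
            /* Hand the freshly hot-added page over to the buddy allocator. */
            ClearPageReserved(page);
            init_page_count(page);
            __free_page(page);
    }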
2008-04-17  Pull miscellaneous into release branch  (Tony Luck)
Conflicts:
        arch/ia64/kernel/mca.c
2008-04-17  Pull nptcg into release branch  (Tony Luck)
Conflicts:
        arch/ia64/mm/tlb.c
2008-04-17  Pull kvm-patches into release branch  (Tony Luck)
2008-04-11  [IA64] Fix NUMA configuration issue  (Zoltan Menyhart)
There is a NUMA memory configuration issue in 2.6.24. A 2-node machine of ours has got the following memory layout:

    Node 0:  0 -  2 Gbytes
    Node 0:  4 -  8 Gbytes
    Node 1:  8 - 16 Gbytes
    Node 0: 16 - 18 Gbytes

"efi_memmap_init()" merges the three last ranges into one. "register_active_ranges()" is called as follows:

    efi_memmap_walk(register_active_ranges, NULL);

i.e. once for the 4 - 18 Gbytes range. It picks up the node number from the start address, and registers all the memory for the node #0.

"register_active_ranges()" should be called as follows to make sure there is no merged address range at its entry:

    efi_memmap_walk(filter_memory, register_active_ranges);

"filter_memory()" is similar to "filter_rsvd_memory()", but the reserved memory ranges are not filtered out.

Signed-off-by: Zoltan Menyhart <Zoltan.Menyhart@bull.net>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-09  [IA64] Untangle sync_icache_dcache() page size determination  (Christoph Lameter)
Untangle the chaos of page size determination in this function by simply using PAGE_SIZE << compound_order(). Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-09  [IA64] remove redundant display of free swap space in show_mem()  (Johannes Weiner)
show_mem() has no need to print the amount of free swap space manually because show_free_areas() does this already and is called by the former. The two outputs only differ in text formatting:

    printk("Free swap = %lukB\n", ...);
    printk("Free swap: %6ldkB\n", ...);

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-08  [IA64] Minimize per_cpu reservations.  (holt@sgi.com)
This patch significantly shrinks boot memory allocation on ia64. It does this by not allocating per_cpu areas for cpus that can never exist. In the case where acpi does not have any numa node description of the cpus, I defaulted to assigning the first 32 round-robin on the known nodes. For the !CONFIG_ACPI case I used for_each_possible_cpu(). Signed-off-by: Robin Holt <holt@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-08  [IA64] Correct pernodesize calculation.  (holt@sgi.com)
A simple fix. The existing pernodesize reservation is not taking into account a second array of pg_data_t structures. This is normally not important because the PAGE_ALIGN macro reserves adequate space. I made the compute_pernodesize steps in the same order as the fill_pernode steps to make the correlation more clear. Signed-off-by: Robin Holt <holt@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-04  [IA64] Kernel parameter for max number of concurrent global TLB purges  (Fenghua Yu)
The patch defines the kernel parameter "nptcg=". The parameter overrides the max number of concurrent global TLB purges, which is otherwise reported from either PAL_VM_SUMMARY or SAL PALO. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-04  [IA64] Multiple outstanding ptc.g instruction support  (Fenghua Yu)
According to SDM 2.2, Itanium supports multiple outstanding ptc.g instructions. But the current kernel function ia64_global_tlb_purge() uses a spinlock to serialize ptc.g instructions issued by multiple processors. This serialization might have a scalability issue on a big SMP machine where many processors could purge TLBs in parallel.

The patch fixes this problem by issuing multiple ptc.g instructions in ia64_global_tlb_purge(). It also adds support for the "PALO" table to get a platform view of the max number of outstanding ptc.g instructions (which may be different from the processor view found from PAL_VM_SUMMARY). The PALO specification can be found at: http://www.dig64.org/home/DIG64_PALO_R1_0.pdf

Spinaphore implementation by Matthew Wilcox.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
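Conceptually, a "spinaphore" is a spinlock that admits up to N concurrent holders. A minimal sketch of the idea (illustrative only; the in-kernel ia64 version differs in detail):

    struct spinaphore_sketch {
            atomic_t count;
    };

    static void spinaphore_down(struct spinaphore_sketch *s, int max)
    {
            /* Spin until we are one of at most 'max' holders. */
            while (atomic_add_return(1, &s->count) > max) {
                    atomic_dec(&s->count);
                    cpu_relax();
            }
    }

    static void spinaphore_up(struct spinaphore_sketch *s)
    {
            atomic_dec(&s->count);
    }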
2008-04-03  [IA64] Add API for allocating Dynamic TR resource.  (Xiantao Zhang)
Dynamic TR resources should be managed in a uniform way. Add two interfaces for the kernel:

    ia64_itr_entry: allocate a (pair of) TR for the caller.
    ia64_ptr_entry: purge a (pair of) TR by the caller.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
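A sketch of how such an interface might look; the argument lists below are an assumption for illustration, not copied from the patch:

    /* Reserve a TR (or an itr/dtr pair, depending on target_mask) mapping
     * 'va' with the given pte and page-size order; returns the allocated
     * slot number or a negative error. (Hypothetical prototype.) */
    int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);

    /* Release the slot previously handed out by ia64_itr_entry().
     * (Hypothetical prototype.) */
    void ia64_ptr_entry(u64 target_mask, int slot);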
2008-03-06  [IA64] kprobes arch consolidation build fix  (Harvey Harrison)
ia64 named its handler kprobes_fault_handler while all other arches used kprobe_fault_handler. Change the function definition and header declaration. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-03-06  [IA64] remove remaining __FUNCTION__ occurrences  (Harvey Harrison)
__FUNCTION__ is gcc-specific; use __func__ instead. Long lines have been kept where they exist, and some small spacing changes have been made. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
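An illustrative conversion (hypothetical call site, not from the patch):

    static void report_fault(unsigned long address)
    {
            /* Before: printk(KERN_DEBUG "%s: fault at %lx\n", __FUNCTION__, address); */
            printk(KERN_DEBUG "%s: fault at %lx\n", __func__, address);
    }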
2008-02-07  Introduce flags for reserve_bootmem()  (Bernhard Walle)
This patchset adds a flags variable to reserve_bootmem() and uses the BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions between crashkernel area and already used memory. This patch: Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE. If that flag is set, the function returns with -EBUSY if the memory already has been reserved in the past. This is to avoid conflicts. Because that code runs before SMP initialisation, there's no race condition inside reserve_bootmem_core(). [akpm@linux-foundation.org: coding-style fixes] [akpm@linux-foundation.org: fix powerpc build] Signed-off-by: Bernhard Walle <bwalle@suse.de> Cc: <linux-arch@vger.kernel.org> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
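A hedged example of how a caller can use the new flag for the crash-kernel case described above (function and variable names are illustrative):

    static void __init reserve_crashkernel_sketch(unsigned long base, unsigned long size)
    {
            /* With BOOTMEM_EXCLUSIVE the call fails (-EBUSY) instead of
             * silently double-reserving an already used range. */
            if (reserve_bootmem(base, size, BOOTMEM_EXCLUSIVE) < 0) {
                    printk(KERN_WARNING "crashkernel: region already reserved\n");
                    return;
            }
    }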
2008-02-05  [IA64] honor notify_die() returning NOTIFY_STOP  (Jan Beulich)
This requires making die() and die_if_kernel() return a value, and requires their callers to honor this (and be prepared for the call to return). Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-12-18  [IA64] Avoid unnecessary TLB flushes when allocating memory  (de Dinechin, Christophe (Integrity VM))
Improve performance of memory allocations on ia64 by avoiding a global TLB purge when purging a single page from the file cache. This happens whenever we evict a page from the buffer cache to make room for some other allocation.

Test case: run 'find /usr -type f | xargs cat > /dev/null' in the background to fill the buffer cache, then run something that uses memory, e.g. 'gmake -j50 install'. Instrumentation showed that the number of global TLB purges went from a few million down to about 170 over a 12-hour run of the above.

The performance impact is particularly noticeable under virtualization, because a virtual TLB is generally both larger and slower to purge than a physical one.

Signed-off-by: Christophe de Dinechin <ddd@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-12-07  [IA64] Add missing "space" to concatenated strings  (Joe Perches)
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-11-06  [IA64] Fix section mismatch in contig.c version of per_cpu_init()  (Tony Luck)
There is a section mismatch when building CONFIG_FLATMEM=y kernels that also have CONFIG_HOTPLUG_CPU=y:

    WARNING: vmlinux.o(.text+0x5a902): Section mismatch: reference to \
    .init.text:__alloc_bootmem (between 'per_cpu_init' and 'count_pages')

The issue occurs because per_cpu_init() in mm/contig.c is marked __cpuinit (which is #define'd to nothing on a hot plug cpu configuration) and calls __alloc_bootmem() (which is an __init function). The usage is actually safe because the __alloc_bootmem() call is inside an "if (first_time)" test, so the call is only made while it is still legal to do so. But the warning is irritating. Move the allocation to find_memory().

Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-10-29  [IA64] ia64/mm/init.c: fix section mismatches  (Adrian Bunk)
This patch fixes the following section mismatches:

    <-- snip -->
    ...
    WARNING: vmlinux.o(.text+0x5b5c2): Section mismatch: reference to .init.text:memmap_init_zone (between 'memmap_init' and 'virtual_memmap_init')
    WARNING: vmlinux.o(.text+0x5b842): Section mismatch: reference to .init.text:memmap_init_zone (between 'virtual_memmap_init' and 'ia64_mmu_init')
    ...
    <-- snip -->

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-10-19  pid namespaces: define is_global_init() and is_container_init()  (Serge E. Hallyn)
is_init() is an ambiguous name for the pid==1 check. Split it into is_global_init() and is_container_init().

A cgroup init has its tsk->pid == 1. A global init also has its tsk->pid == 1, and its active pid namespace is the init_pid_ns. But rather than check the active pid namespace, compare the task structure with 'init_pid_ns.child_reaper', which is initialized during boot to the /sbin/init process and never changes.

Changelog:

2.6.22-rc4-mm2-pidns1:
- Use 'init_pid_ns.child_reaper' to determine if a given task is the global init (/sbin/init) process. This would improve performance and remove dependence on the task_pid().

2.6.21-mm2-pidns2:
- [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc,ppc,avr32}/traps.c for the _exception() call to is_global_init(). This way, we kill only the cgroup if the cgroup's init has a bug rather than force a kernel panic.

[akpm@linux-foundation.org: fix comment]
[sukadev@us.ibm.com: Use is_global_init() in arch/m32r/mm/fault.c]
[bunk@stusta.de: kernel/pid.c: remove unused exports]
[sukadev@us.ibm.com: Fix capability.c to work with threaded init]
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Acked-by: Pavel Emelianov <xemul@openvz.org>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Herbert Poetzel <herbert@13thfloor.at>
Cc: Kirill Korotaev <dev@sw.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
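Based on the description above, the global-init check is roughly the following (a sketch, not the verbatim header definition):

    static inline int is_global_init_sketch(struct task_struct *tsk)
    {
            /* init_pid_ns.child_reaper is set to /sbin/init during boot and
             * never changes, so a pointer comparison suffices. */
            return tsk == init_pid_ns.child_reaper;
    }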
2007-10-19  setup vma->vm_page_prot by vm_get_page_prot()  (Coly Li)
This patch uses vm_get_page_prot() to set up vma->vm_page_prot. Though inside vm_get_page_prot() the protection flags are ANDed with (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED), it does not hurt correct code. Signed-off-by: Coly Li <coyli@suse.de> Cc: Hugh Dickins <hugh@veritas.com> Cc: Tony Luck <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
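The pattern being introduced is essentially a one-liner; illustratively:

    static void set_vma_prot(struct vm_area_struct *vma)
    {
            /* Derive the page protection from the VMA flags instead of
             * open-coding a protection_map[] lookup. */
            vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
    }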
2007-10-17  Add vmcoreinfo  (Ken'ichi Ohmichi)
This patch set removes the restriction that makedumpfile users must install a vmlinux file (including the debugging information) into each system.

The makedumpfile command is the dump filtering feature for kdump. It creates a small dumpfile by filtering out pages that are unnecessary for analysis. To distinguish unnecessary pages, it needs a vmlinux file including the debugging information. These days the debugging package has become a huge file, and it is hard to install it into each system.

To solve the problem, kdump developers discussed it at lkml and kexec-ml. As a result, we reached the conclusion that the information necessary for dump filtering (called "vmcoreinfo") should be embedded into the first kernel file and it should be accessed through /proc/vmcore during the second kernel. (http://www.uwsg.iu.edu/hypermail/linux/kernel/0707.0/1806.html)

Dan Aloni created the patch set for the above implementation. (http://www.uwsg.iu.edu/hypermail/linux/kernel/0707.1/1053.html) And I updated it for multiple architectures and memory models. (http://lists.infradead.org/pipermail/kexec/2007-August/000479.html)

Signed-off-by: Dan Aloni <da-x@monatomic.org>
Signed-off-by: Ken'ichi Ohmichi <oomichi@mxs.nes.nec.co.jp>
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>