path: root/arch
2008-01-30  x86/vmi: fix compilation as a result of pte_t changes  (Jeremy Fitzhardinge)
Fix various compilation problems as a result of changing pte_t. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Zachary Amsden <zach@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: page.h: make pte_t a union to always include  (Jeremy Fitzhardinge)
Make sure pte_t, whatever its definition, has a pte element with type pteval_t. This allows common code to access it without needing to be specifically parameterised on what pagetable mode we're compiling for. For 32-bit, this means that pte_t becomes a union with "pte" and "{ pte_low, pte_high }" (PAE) or just "pte_low" (non-PAE). Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
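A minimal sketch of what such a union can look like, assuming the 32-bit PAE layout described above (the field names follow the commit text; the exact kernel definitions may differ):
====
#include <stdint.h>

typedef uint64_t pteval_t;

/* PAE: a 64-bit entry that is still addressable as pte_low/pte_high */
typedef union {
        struct { uint32_t pte_low, pte_high; };
        pteval_t pte;           /* common element used by generic code */
} pte_t;

/* non-PAE would instead use a 32-bit pteval_t with only pte_low aliasing pte */
====
Generic page-table code can then read entry.pte regardless of which page-table mode the kernel was built for.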
2008-01-30  x86: unify pgtable accessors which use  (Jeremy Fitzhardinge)
Make users of supported_pte_mask common. This has the side effect of introducing the variable for 32-bit non-PAE, but I think it's a pretty small cost to simplify the code. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: avoid name conflict for Voyager leave_mm  (Jeremy Fitzhardinge)
Avoid a conflict between Voyager's leave_mm and asm-x86/mmu.h's leave_mm. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: coding style fixes in arch/x86/ia32/audit.c  (Paolo Ciarrocchi)
Fix one error reported by checkpatch; it now reports: total: 0 errors, 0 warnings, 42 lines checked Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: make NUMA work on 32-bit again  (Mel Gorman)
On 32-bit NUMA, the memmap representing struct pages on each node is allocated from node-local memory if possible. As only node-0 has memory from ZONE_NORMAL, the memmap must be mapped into low memory. This is done by reserving space in the Kernel Virtual Area (KVA) for the memmap belonging to other nodes by taking pages from the end of ZONE_NORMAL and remapping the other nodes' memmap into those virtual addresses. The node boundaries are then adjusted so that the region of pages is not used and it is marked as reserved in the bootmem allocator.

This reserved portion of the KVA is PMD-aligned, although strictly speaking that requirement could be lifted (see thread at http://lkml.org/lkml/2007/8/24/220). The problem is that when aligned, there may be a portion of ZONE_NORMAL at the end that is not used for memmap, does not have an initialised memmap and is not marked reserved in the bootmem allocator. Later in the boot process, these pages are freed and a storm of Bad page state messages results. This patch marks the pages wasted due to alignment as reserved in the bootmem allocator so they are not accidentally freed.

It is worth noting that memory from node-0 is wasted where it could have been put into ZONE_HIGHMEM on NUMA machines. Worse, the KVA is always reserved from the location of real memory even when there is plenty of spare virtual address space.

This patch also makes sure that reserve_bootmem() is not called with a 0-length size in numa_kva_reserve(). When this happens, it usually means that a kernel built for Summit is being booted on a normal machine. The resulting BUG_ON() is misleading, so it is caught here.

Signed-off-by: Mel Gorman <mel@csn.ul.ie> Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
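A sketch of the zero-length guard described in the last paragraph (the function shape is illustrative; only the reserve_bootmem() call and the idea of skipping it for an empty region come from the commit text):
====
#include <linux/bootmem.h>
#include <linux/pfn.h>

/* kva_start_pfn/kva_pages describe the KVA region reserved for remote
 * node memmaps; on a non-Summit machine booted with a Summit kernel
 * this region ends up empty. */
static void __init numa_kva_reserve_sketch(unsigned long kva_start_pfn,
                                           unsigned long kva_pages)
{
        if (kva_pages)  /* avoid reserve_bootmem() with a 0-length size */
                reserve_bootmem(PFN_PHYS(kva_start_pfn), PFN_PHYS(kva_pages));
}
====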
2008-01-30  x86, ptrace: add bts_struct size to status command  (Markus Metzger)
Return the size of bts_struct in the PTRACE_BTS_STATUS command. Change types to u32. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30  x86: unify arch/x86/kernel/acpi/sleep*.c  (Pavel Machek)
Unify arch/x86/kernel/acpi/sleep*.c. Pretty trivial unification; when two functions differed, it was usually in error handling, and the better of the two was picked up. Signed-off-by: Pavel Machek <pavel@suse.cz> Looks-okay-to: Rafael J. Wysocki <rjw@sisk.pl> Tested-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30  x86: clean up arch/x86/mm/fault_64.c  (Ingo Molnar)
clean up arch/x86/mm/fault_64.c a bit. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  percpu: make the asm-generic/percpu.h more "generic"  (travis@sgi.com)
- add support for PER_CPU_ATTRIBUTES
- fix generic smp percpu_modcopy to use the per_cpu_offset() macro.

Add the ability to use asm-generic/percpu.h even if the arch needs to override several aspects of its operations. This will enable the use of the generic percpu.h for all arches. An arch may define:

- __per_cpu_offset: do not use the generic pointer array; the arch must define per_cpu_offset(cpu) (used by x86_64, s390).
- __my_cpu_offset: can be defined to provide an optimized way to determine the offset for variables of the currently executing processor. Used by ia64, x86_64, x86_32, sparc64, s/390.
- SHIFT_PTR(ptr, offset): if an arch defines it, special handling of pointer arithmetic may be implemented. Used by s/390.

(Some of these special percpu arch implementations may later be consolidated so that there are fewer cases to deal with.) A sketch of the override pattern follows this entry.

Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
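A compact, header-style sketch of the override scheme described above (the macro names follow the commit text; the fallback definitions and the raw_smp_processor_id() glue are illustrative, not the exact asm-generic/percpu.h contents):
====
/* generic case: a pointer array indexed by CPU number; an arch that
 * defines __per_cpu_offset itself must also supply per_cpu_offset(cpu) */
#ifndef __per_cpu_offset
extern unsigned long __per_cpu_offset[NR_CPUS];
#define per_cpu_offset(cpu)     (__per_cpu_offset[cpu])
#endif

/* offset of the currently executing CPU; arches with a faster way
 * (e.g. a per-cpu segment register) override this */
#ifndef __my_cpu_offset
#define __my_cpu_offset         per_cpu_offset(raw_smp_processor_id())
#endif

/* plain pointer arithmetic unless the arch needs something special */
#ifndef SHIFT_PTR
#define SHIFT_PTR(ptr, offset)  ((void *)((char *)(ptr) + (offset)))
#endif
====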
2008-01-30  percpu: use a kconfig variable to signal arch specific percpu setup  (travis@sgi.com)
The use of __GENERIC_PERCPU is a bit problematic since arches may want to run their own percpu setup while using the generic percpu definitions. Replace it with a Kconfig variable. Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30  i386: handle an initrd in highmem (version 2)  (H. Peter Anvin)
The boot protocol has until now required that the initrd be located in lowmem, which makes the lowmem/highmem boundary visible to the boot loader. This was exported to the bootloader via a compile-time field. Unfortunately, the vmalloc= command-line option breaks this part of the protocol; instead of adding yet another hack that affects the bootloader, have the kernel relocate the initrd down below the lowmem boundary inside the kernel itself. Note that this does not rely on HIGHMEM being enabled in the kernel. Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30  x86 boot: export boot_params via debugfs for debugging  (Huang, Ying)
This patch exports the boot parameters via debugfs for debugging. The files added are as follows: boot_params/data (binary file for struct boot_params) and boot_params/version (boot protocol version). This patch is based on 2.6.24-rc5-mm1 and has been tested on the i386 and x86_64 platforms. This patch is based on Peter Anvin's proposal. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
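A minimal sketch of what such a debugfs export can look like (the debugfs calls are the standard API, but the wiring shown here is illustrative and not necessarily the commit's actual implementation):
====
#include <linux/debugfs.h>
#include <linux/init.h>
#include <asm/setup.h>          /* struct boot_params, boot_params */

static struct debugfs_blob_wrapper boot_params_blob;
static u32 boot_params_version;

static int __init boot_params_debugfs_init(void)
{
        struct dentry *dir = debugfs_create_dir("boot_params", NULL);

        if (!dir)
                return -ENOMEM;

        /* boot_params/data: raw struct boot_params */
        boot_params_blob.data = &boot_params;
        boot_params_blob.size = sizeof(boot_params);
        debugfs_create_blob("data", 0400, dir, &boot_params_blob);

        /* boot_params/version: boot protocol version from the setup header */
        boot_params_version = boot_params.hdr.version;
        debugfs_create_x32("version", 0400, dir, &boot_params_version);

        return 0;
}
arch_initcall(boot_params_debugfs_init);
====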
2008-01-30  x86: reboot_{32|64}.c unification  (Miguel Boton)
reboot_{32|64}.c unification patch. This patch unifies the code from the reboot_32.c and reboot_64.c files. It has been tested on computers with X86_32 and X86_64 kernels and it looks like all reboot modes work fine (the EFI restart system hasn't been tested yet). Probably I made some mistakes (like I usually do), so I hope we can identify and fix them soon. Signed-off-by: Miguel Boton <mboton@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: kprobes change kprobe_handler flow  (Abhishek Sagar)
Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Signed-off-by: Quentin Barnes <qbarnes@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: make arch/x86/kernel/acpi/wakeup_32.S use a separate  (Eric Dumazet)
While examining the vmlinux namelist on i386 (nm -v vmlinux) I noticed:

c01021d0 t es7000_rename_gsi
c010221a T es7000_start_cpu
<Big Hole>
c0103000 T thread_saved_pc

and

c0113218 T acpi_restore_state_mem
c0113219 T acpi_save_state_mem
<Big Hole>
c0114000 t wakeup_code

This is because arch/x86/kernel/acpi/wakeup_32.S forces a .text alignment of 4096 bytes. (I have no idea if it is really needed, since arch/x86/kernel/acpi/wakeup_64.S uses a 16-byte alignment *only*.) So arch/x86/kernel/built-in.o also has this alignment:

arch/x86/kernel/built-in.o:     file format elf32-i386
Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00018c94  00000000  00000000  00001000  2**12
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE

But as arch/x86/kernel/acpi/wakeup_32.o is not the first object linked into arch/x86/kernel/built-in.o, the linker had to insert several holes to meet the alignment requirements, because of .o nestings in the kbuild process. This can be solved by using a special section, .text.page_aligned, so that no holes are needed.

# size vmlinux.before vmlinux.after
   text    data     bss     dec     hex filename
4619942  422838  458752 5501532  53f25c vmlinux.before
4610534  422838  458752 5492124  53cd9c vmlinux.after

This saves 9408 bytes. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
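For illustration only, the C-level equivalent of putting page-aligned code into its own section rather than inflating the alignment of .text for the whole object (the real change is a .section directive in the .S file plus a matching linker-script entry; the function name below is a stand-in):
====
/* Only this section carries the 4096-byte alignment; the rest of
 * .text keeps its normal alignment, so the linker no longer has to
 * pad the objects linked in front of it. */
__attribute__((section(".text.page_aligned"), aligned(4096)))
void wakeup_code_stub(void)
{
        /* page-aligned low-memory trampoline would live here */
}
====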
2008-01-30  x86: mark memory_setup __init  (Andi Kleen)
Otherwise:
WARNING: vmlinux.o(.text+0x64a9): Section mismatch: reference to .init.text:machine_specific_memory_setup (between 'memory_setup' and 'show_cpuinfo')
Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
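A sketch of the pattern behind that warning (the function bodies are illustrative; only the rule that a caller of .init.text code must itself be __init comes from the commit):
====
#include <linux/init.h>

/* lives in .init.text and is discarded after boot */
static char * __init machine_specific_memory_setup(void)
{
        return "BIOS-e820";     /* illustrative return value */
}

/* the caller must be __init as well, otherwise the linker warns about
 * a reference from regular .text into a section that is freed after boot */
static char * __init memory_setup(void)
{
        return machine_specific_memory_setup();
}
====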
2008-01-30  x86: Set CFQ as default in 32-bit defconfig  (Andi Kleen)
Someone complained that the 32-bit defconfig contains AS as default IO scheduler. Change that to CFQ. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: compile apm and voyager module only when selected in Kconfig  (Andi Kleen)
Previously the complete files were #ifdef'ed; now that is handled in the Makefile. May save a minor bit of compilation time. [ Stephen Rothwell <sfr@canb.auug.org.au>: build dependency fix ] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: document fdimage/isoimage completely in make help  (Andi Kleen)
Add missing targets and missing options in x86 make help [ mingo@elte.hu: more whitespace cleanups ] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: remove CPU capabilities printks on 32-bit  (Andi Kleen)
I don't know of any case where they have been useful and they look ugly. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86/efi: fix improper use of lvalue  (Jeremy Fitzhardinge)
pgd_val is no longer valid as an lvalue, so don't try to assign to it. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
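A generic illustration of the "accessor macro is no longer an lvalue" fix (the pgd_t layout and helpers below are simplified stand-ins, not the kernel's; only the read-accessor/constructor pattern is the point):
====
typedef struct { unsigned long pgd; } pgd_t;

#define pgd_val(x)      ((x).pgd)               /* read-only accessor */
#define __pgd(x)        ((pgd_t){ (x) })        /* typed constructor */

static void copy_pgd_entry(pgd_t *dst, pgd_t src)
{
        /* pgd_val(*dst) = pgd_val(src);   <- no longer compiles: not an lvalue */
        *dst = __pgd(pgd_val(src));        /* write the whole typed value instead */
}
====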
2008-01-30  x86: fix detection of CONSTANT_TSC bit for AMD CPUs  (Andreas Herrmann)
Commits c52f61fcbdb2aa84f0e4d831ef07f375e6b99b2c (x86: allow TSC clock source on AMD Fam10h and some cleanup) and e30436f05d456efaff77611e4494f607b14c2782 (x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection) are supposed to fix the detection of constant TSC for AMD CPUs. Unfortunately on x86_64 it still does not work with the current x86/mm. For a Phenom I still get:

...
TSC calibrated against PM_TIMER
Marking TSC unstable due to TSCs unsynchronized
time.c: Detected 2288.366 MHz processor.
...

We have to set c->x86_power in early_identify_cpu to properly detect the CONSTANT_TSC bit in early_init_amd. The attached patch fixes this issue. The relevant boot messages when the fix is used:

...
TSC calibrated against PM_TIMER
time.c: Detected 2288.279 MHz processor.
...
Initializing CPU#1
...
checking TSC synchronization [CPU#0 -> CPU#1]: passed.
...
Initializing CPU#2
...
checking TSC synchronization [CPU#0 -> CPU#2]: passed.
...
Booting processor 3/4 APIC 0x3
...
checking TSC synchronization [CPU#0 -> CPU#3]: passed.
Brought up 4 CPUs
...

The patch is against x86/mm (v2.6.24-rc8-672-ga9f7faa). Please apply.

Set c->x86_power in early_identify_cpu. This ensures that X86_FEATURE_CONSTANT_TSC can properly be set in early_init_amd. Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
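For reference, a user-space sketch of reading the bit in question: c->x86_power corresponds to the EDX output of CPUID leaf 0x80000007, whose bit 8 is the invariant/constant-TSC flag on AMD CPUs (the leaf and bit numbers are from the AMD documentation; the helper below is illustrative, not kernel code):
====
#include <stdint.h>
#include <stdio.h>

static void cpuid(uint32_t op, uint32_t *a, uint32_t *b,
                  uint32_t *c, uint32_t *d)
{
        __asm__ volatile("cpuid"
                         : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                         : "a"(op));
}

int main(void)
{
        uint32_t a, b, c, d;

        cpuid(0x80000000, &a, &b, &c, &d);      /* highest extended leaf */
        if (a < 0x80000007)
                return 1;

        cpuid(0x80000007, &a, &b, &c, &d);      /* advanced power management */
        printf("constant TSC: %s\n", (d & (1u << 8)) ? "yes" : "no");
        return 0;
}
====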
2008-01-30  x86: remove explicit C3 TSC check on 64bit  (Andi Kleen)
Trust the ACPI code to disable the TSC instead when C3 is used. AMD Fam10h does not disable the TSC in any C state, so the check was incorrect there anyway after the change to handle this like Intel on AMD too. This allows the TSC to be used when C3 is disabled in software (acpi.max_c_state=2) even though the BIOS supports it. Match i386 behaviour. Cc: lenb@kernel.org Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: allow TSC clock source on AMD Fam10h and some cleanup  (Andi Kleen)
After a lot of discussions with AMD it turns out that the TSC on Fam10h CPUs is synchronized when the CONSTANT_TSC cpuid bit is set, or rather, if there are ever systems where that is not true, it would be their BIOS' task to disable the bit. So finally use TSC gettimeofday on Fam10h by default. Or rather, it is always used now on CPUs where the AMD-specific CONSTANT_TSC bit is set.

This gives a nice speed boost for gettimeofday() on these systems, which tends to be by far the most common v/syscall. On a Fam10h system here, TSC gtod uses about 20% of the CPU time of acpi_pm-based gtod(). This was measured on 32bit; on 64bit it is even better, because TSC gtod() can use a vsyscall and stay in ring 3, which acpi_pm doesn't.

The Intel check simply checks for CONSTANT_TSC too, without hardcoding the Intel vendor. This is equivalent on 64bit because all 64bit-capable Intel CPUs will have CONSTANT_TSC set. On Intel there is no CPU-supplied CONSTANT_TSC bit currently, but we synthesize one based on hardcoded knowledge of which steppings have p-state invariant TSC.

So the new logic is now: on CPUs which have the AMD-specific CONSTANT_TSC bit set, or on Intel CPUs which are new enough to be known to have p-state invariant TSC, always use TSC-based gettimeofday().

Cc: lenb@kernel.org Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection  (Andi Kleen)
The next patch needs this in time_init(), which happens early. This includes a minor fix on i386 where early_intel_workarounds() [which is now called early_init_intel] really executes early, as the comments say. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: fix sched_clock()  (Ingo Molnar)
[ andi@firstfloor.org: build fix ] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: remove get_cycles_sync  (Andi Kleen)
rdtsc is now speculation-safe, so no need for the sync variants of the APIs. [ mingo@elte.hu: removed the nsec_barrier() complication. ] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: read_tsc sync  (Ingo Molnar)
make native_read_tsc() always non-speculative. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: map vsyscalls early enough  (Ingo Molnar)
map vsyscalls early enough. This is important if a __vsyscall_fn function is used by other kernel code too. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: move native_read_tsc() offline  (Ingo Molnar)
move native_read_tsc() offline. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: lfence fix  (Ingo Molnar)
LFENCE is available on XMM2-or-higher Intel CPUs, not XMM-or-higher ones... this caused boot failures on CPUs with XMM but without XMM2. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: Implement support to synchronize RDTSC with LFENCE on Intel CPUs  (Andi Kleen)
According to Intel, RDTSC can always be synchronized with LFENCE on all current CPUs. Implement the necessary CPUID bit for that. It is not yet clear whether that is true for all future CPUs too, but if there is ever another way the kernel can always be updated. Cc: asit.k.mallick@intel.com Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: implement support to synchronize RDTSC through MFENCE on AMD CPUs  (Andi Kleen)
According to AMD, RDTSC can be synchronized through MFENCE. Implement the necessary CPUID bit for that. Cc: andreas.herrmann3@amd.com Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
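A sketch of what "synchronized RDTSC" means in practice for the two entries above: a fence is issued immediately before the TSC read so the timestamp cannot be taken speculatively ahead of earlier instructions. The helper names are made up for illustration; the real kernel selects the fence via CPUID feature bits and alternatives:
====
#include <stdint.h>

/* Intel: LFENCE before RDTSC */
static inline uint64_t rdtsc_lfence(void)
{
        uint32_t lo, hi;
        __asm__ volatile("lfence; rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

/* AMD: MFENCE before RDTSC */
static inline uint64_t rdtsc_mfence(void)
{
        uint32_t lo, hi;
        __asm__ volatile("mfence; rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}
====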
2008-01-30  x86: clean up k8topology.c  (Carlos R. Mafra)
This patch fixes all errors pointed out by checkpatch.pl.

                                       errors   lines of code   errors/KLOC
arch/x86/mm/k8topology_64.c (before)       72             185         389.1
arch/x86/mm/k8topology_64.c (after)         0             185             0

No code changed:

   text    data     bss     dec     hex filename
   1506       0       0    1506     5e2 k8topology_64.o.after
   1506       0       0    1506     5e2 k8topology_64.o.before

md5sum:
f9f48331a7eca4fc60d2a03369dc5f53  k8topology_64.o.after
f9f48331a7eca4fc60d2a03369dc5f53  k8topology_64.o.before

Signed-off-by: Carlos R. Mafra <crmafra@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: clean up apic_32.c, take 2  (Hiroshi Shimamoto)
More white space and coding style clean up. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: debug: double-check the empty zero page  (Ingo Molnar)
temporary debugging - remove before this hits v2.6.25. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: not clear empty_zero_page again  (Yinghai Lu)
empty_zero_page is in the .bss section, and it is cleared in clear_bss() by x86_64_start_kernel(), so don't clear it again in mem_init(). Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
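What clear_bss() amounts to, as a sketch (the section symbols are the standard linker-provided ones; the exact early-boot call site is in x86_64_start_kernel() per the commit text):
====
#include <string.h>

/* provided by the linker script: start and end of .bss */
extern char __bss_start[], __bss_stop[];

/* zero .bss once at early boot; empty_zero_page lives in .bss, so a
 * later memset of it in mem_init() would be redundant */
static void clear_bss(void)
{
        memset(__bss_start, 0, __bss_stop - __bss_start);
}
====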
2008-01-30  x86: clean up apic_32/64.c  (Hiroshi Shimamoto)
White space and coding style clean up. Make apic_32/64.c similar. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: introduce force_sig_info_fault helper to X86_64  (Harvey Harrison)
Use the force_sig_info_fault helper from X86_32 in X86_64. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: begin fault_{32|64}.c unification  (Harvey Harrison)
- Move X86_32 only get_segment_eip to X86_64
- Move X86_64 only is_errata93 to X86_32
- Change X86_32 loop in is_prefetch to highlight the differences between them. Fold the logic from __is_prefetch in as well on X86_32.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: fault_32.c cleanup  (Harvey Harrison)
We get die() from kdebug.h, no need for forward declaration. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: fix style errors in nmi_int.c  (Carlos R. Mafra)
This patch fixes most errors detected by checkpatch.pl.

                                      errors   lines of code   errors/KLOC
arch/x86/oprofile/nmi_int.c (after)        1             461           2.1
arch/x86/oprofile/nmi_int.c (before)      60             477         125.7

No code changed. size:

   text    data     bss     dec     hex filename
   2675     264     472    3411     d53 nmi_int.o.after
   2675     264     472    3411     d53 nmi_int.o.before

md5sum:
847aea0cc68fe1a2b5e7019439f3b4dd  nmi_int.o.after
847aea0cc68fe1a2b5e7019439f3b4dd  nmi_int.o.before

Signed-off-by: Carlos R. Mafra <crmafra@gmail.com> Reviewed-by: Jesper Juhl <jesper.juhl@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: code clarification patch to Kprobes arch code  (Quentin Barnes)
When developing the Kprobes arch code for ARM, I ran across some code found in the x86 and s390 Kprobes arch code which I didn't consider as good as it could be. Once I figured out what the code was doing, I changed the code for ARM Kprobes to work the way I felt was more appropriate. I've tested the code this way in ARM for about a year and would like to push the same change to the other affected architectures. The code in question is in kprobe_exceptions_notify(), which does:

====
/* kprobe_running() needs smp_processor_id() */
preempt_disable();
if (kprobe_running() && kprobe_fault_handler(args->regs, args->trapnr))
        ret = NOTIFY_STOP;
preempt_enable();
====

For the moment, ignore the code having the preempt_disable()/preempt_enable() pair in it. The problem is that kprobe_running() needs to call smp_processor_id(), which will assert if preemption is enabled. That sanity check by smp_processor_id() makes perfect sense since calling it with preemption enabled would return an unreliable result. But the function kprobe_exceptions_notify() can be called from a context where preemption could be enabled. If that happens, the assertion in smp_processor_id() happens and we're dead. So what the original author did (speculation on my part!) is put in the preempt_disable()/preempt_enable() pair to simply defeat the check.

Once I figured out what was going on, I considered this an inappropriate approach. If kprobe_exceptions_notify() is called from a preemptible context, we can't be in a kprobe processing context at that time anyway, since kprobes requires preemption to already be disabled, so just check for preemption enabled, and if so, blow out before ever calling kprobe_running(). I wrote the ARM kprobe code like this:

====
/* To be potentially processing a kprobe fault and to
 * trust the result from kprobe_running(), we have to
 * be non-preemptible. */
if (!preemptible() && kprobe_running() &&
    kprobe_fault_handler(args->regs, args->trapnr))
        ret = NOTIFY_STOP;
====

The above code has been working fine for ARM Kprobes for a year. So I changed the x86 code (2.6.24-rc6) to be the same way and ran the Systemtap tests on that kernel. As on ARM, Systemtap on x86 comes up with the same test results either way, so it's a neutral external functional change (as expected).

This issue has been discussed previously on the linux-arm-kernel and Systemtap mailing lists. Pointers to the two discussions:
http://lists.arm.linux.org.uk/lurker/message/20071219.223225.1f5c2a5e.en.html
http://sourceware.org/ml/systemtap/2007-q1/msg00251.html

Signed-off-by: Quentin Barnes <qbarnes@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Ananth N Mavinakayahanalli <ananth@in.ibm.com> Acked-by: Ananth N Mavinakayahanalli <ananth@in.ibm.com>
2008-01-30  x86: get rid of checkpatch.pl complaints on apm_32.c  (Cyrill Gorcunov)
This patch eliminates most of the code-style errors discovered by checkpatch.pl on arch/x86/kernel/apm_32.c. No code changed:

   text    data     bss     dec     hex filename
  12142    1837      84   14063    36ef apm_32.o.before
  12142    1837      84   14063    36ef apm_32.o.after

md5:
2676b881ad55e387da4a995e8b9ee372  apm_32.o.before.asm
2676b881ad55e387da4a995e8b9ee372  apm_32.o.after.asm

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: default to PCI=y  (Adrian Bunk)
PCI is one of the few pieces of hardware support where defaulting to y makes sense. Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: gitignore arch/x86/vdso files  (Sam Ravnborg)
Teach git to ignore generated files in arch/x86/vdso/* Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Cc: Roland McGrath <roland@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: coding style cleanup for kernel/bootflag.c  (Cyrill Gorcunov)
This patch eliminates checkpatch.pl complaints on bootflag.c. No code changed:

   text    data     bss     dec     hex filename
    321       8       0     329     149 bootflag.o.before
    321       8       0     329     149 bootflag.o.after

md5:
9c1b474bcf25ddc1724a29c19880043f  bootflag.o.before.asm
9c1b474bcf25ddc1724a29c19880043f  bootflag.o.after.asm

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: hlt on early crash  (Ingo Molnar)
H. Peter Anvin <hpa@zytor.com> wrote:

> It probably should actually HLT, to avoid sucking power, and stressing
> the thermal system. We're dead at this point, and the early 486's
> which had problems with HLT will lock up - we don't care.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
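The idea in code form, as a minimal sketch (illustrative only; the actual early-crash path in the kernel differs):
====
/* Once we have decided we are dead, park the CPU with hlt in a loop
 * instead of burning power in a busy-wait. The loop matters because
 * any interrupt or NMI wakes the CPU out of hlt. */
static void __attribute__((noreturn)) early_crash_halt(void)
{
        for (;;)
                __asm__ volatile("hlt");
}
====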
2008-01-30  x86: reduce CONFIG_X86_PPRO_FENCE bloat  (Nick Piggin)
CONFIG_X86_PPRO_FENCE bloats text. i386 allmodconf, size mm/built-in.o:

            text    data     bss     dec    hex   text ratio
vanilla:  163082   20372   40120  223574  36956      100.00%
bugfix :  163509   20372   40120  224001  36b01        0.26%
noppro :  162191   20372   40120  222683  365db      - 0.55%
both   :  162267   20372   40120  222759  36627      - 0.50% (+0.05% vs noppro)

So with the ppro memory ordering bug out of the way, the PG_uptodate fix only adds 76 bytes of text. Allow this config to be specified by distros. [ mingo@elte.hu: x86.git merge ] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>