From 6ffac1e90a17ea0aded5c581204397421eec91b6 Mon Sep 17 00:00:00 2001 From: Suresh Siddha Date: Thu, 24 Jul 2008 18:07:56 -0700 Subject: x64, fpu: fix possible FPU leakage in error conditions On Thu, Jul 24, 2008 at 03:43:44PM -0700, Linus Torvalds wrote: > So how about this patch as a starting point? This is the RightThing(tm) to > do regardless, and if it then makes it easier to do some other cleanups, > we should do it first. What do you think? restore_fpu_checking() calls init_fpu() in error conditions. While this is wrong (as our main intention is to clear the fpu state of the thread), this was benign before commit 92d140e21f1 ("x86: fix taking DNA during 64bit sigreturn"). Post commit 92d140e21f1, live FPU registers may not belong to this process in this error scenario. In the error condition for restore_fpu_checking() (especially during the 64bit signal return), we are doing init_fpu(), which saves the live FPU register state (possibly belonging to some other process context) into the thread struct (through unlazy_fpu() in init_fpu()). This is wrong and can leak the FPU data. For the signal handler restore error condition in restore_i387(), clear the fpu state present in the thread struct (before ultimately sending a SIGSEGV for badframe). For the paranoid error condition check in math_state_restore(), send a SIGSEGV if we fail to restore the state. Signed-off-by: Suresh Siddha Cc: Linus Torvalds Signed-off-by: Ingo Molnar --- include/asm-x86/i387.h | 2 -- 1 file changed, 2 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/i387.h b/include/asm-x86/i387.h index 96fa8449ff1..0048fb77afc 100644 --- a/include/asm-x86/i387.h +++ b/include/asm-x86/i387.h @@ -62,8 +62,6 @@ static inline int restore_fpu_checking(struct i387_fxsave_struct *fx) #else : [fx] "cdaSDb" (fx), "m" (*fx), "0" (0)); #endif - if (unlikely(err)) - init_fpu(current); return err; } -- cgit v1.2.3 From d388e5fdc461344d04307a3fa83862b9ed429647 Mon Sep 17 00:00:00 2001 From: "Eric W. Biederman" Date: Sat, 9 Aug 2008 15:09:02 -0700 Subject: x86: Restore proper vector locking during cpu hotplug Having cpu_online_map change during assign_irq_vector can result in some really nasty and weird things happening. The one that bit me last time was accessing non-existent per-cpu memory for non-existent cpus. This locking was removed in a sloppy x86_64 and x86_32 merge patch. Guys, can we please try and avoid subtly breaking x86 when we are merging files together? Signed-off-by: Eric W. Biederman Signed-off-by: H. Peter Anvin --- include/asm-x86/hw_irq.h | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/hw_irq.h b/include/asm-x86/hw_irq.h index 77ba51df566..edd0b95f14d 100644 --- a/include/asm-x86/hw_irq.h +++ b/include/asm-x86/hw_irq.h @@ -98,9 +98,17 @@ extern void (*const interrupt[NR_IRQS])(void); #else typedef int vector_irq_t[NR_VECTORS]; DECLARE_PER_CPU(vector_irq_t, vector_irq); -extern spinlock_t vector_lock; #endif -extern void setup_vector_irq(int cpu); + +#if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_X86_64) +extern void lock_vector_lock(void); +extern void unlock_vector_lock(void); +extern void __setup_vector_irq(int cpu); +#else +static inline void lock_vector_lock(void) {} +static inline void unlock_vector_lock(void) {} +static inline void __setup_vector_irq(int cpu) {} +#endif #endif /* !ASSEMBLY_ */ -- cgit v1.2.3 From 3c7569b284e1be55d086b61a70d9f545326f6d74 Mon Sep 17 00:00:00 2001 From: "Eric W.
Biederman" Date: Sun, 10 Aug 2008 00:35:50 -0700 Subject: x86_64: restore the proper NR_IRQS define so larger systems work. As pointed out and tracked by Yinghai Lu : Dhaval Giani got: kernel BUG at arch/x86/kernel/io_apic_64.c:357! invalid opcode: 0000 [1] SMP CPU 24 ... his system (x3950) has 8 ioapic, irq > 256 This was caused by: commit 9b7dc567d03d74a1fbae84e88949b6a60d922d82 Author: Thomas Gleixner Date: Fri May 2 20:10:09 2008 +0200 x86: unify interrupt vector defines The interrupt vector defines are copied 4 times around with minimal differences. Move them all into asm-x86/irq_vectors.h It appears that Thomas did not notice that x86_64 does something completely different when he merge irq_vectors.h We can solve this for 2.6.27 by simply reintroducing the old heuristic for setting NR_IRQS on x86_64 to a usable value, which trivially removes the regression. Long term it would be nice to harmonize the handling of ioapic interrupts of x86_32 and x86_64 so we don't have this kind of confusion. Dhaval Giani tested an earlier version of this patch by YH which confirms simply increasing NR_IRQS fixes the problem. Signed-off-by: Eric W. Biederman Acked-by: Yinghai Lu Cc: Dhaval Giani Cc: Mike Travis Cc: Andrew Morton Signed-off-by: Ingo Molnar --- include/asm-x86/irq_vectors.h | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/irq_vectors.h b/include/asm-x86/irq_vectors.h index 90b1d1f12f0..b95d167b7fb 100644 --- a/include/asm-x86/irq_vectors.h +++ b/include/asm-x86/irq_vectors.h @@ -109,7 +109,15 @@ #define LAST_VM86_IRQ 15 #define invalid_vm86_irq(irq) ((irq) < 3 || (irq) > 15) -#if !defined(CONFIG_X86_VOYAGER) +#ifdef CONFIG_X86_64 +# if NR_CPUS < MAX_IO_APICS +# define NR_IRQS (NR_VECTORS + (32 * NR_CPUS)) +# else +# define NR_IRQS (NR_VECTORS + (32 * MAX_IO_APICS)) +# endif +# define NR_IRQ_VECTORS NR_IRQS + +#elif !defined(CONFIG_X86_VOYAGER) # if defined(CONFIG_X86_IO_APIC) || defined(CONFIG_PARAVIRT) || defined(CONFIG_X86_VISWS) -- cgit v1.2.3 From b0fbaa6b5976962434349849673b9ff63631b6d4 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Thu, 7 Aug 2008 15:12:39 -0700 Subject: EFI, x86: fix function prototype Fix function prototype in header file to match source code: linux-next-20080807/arch/x86/kernel/efi_64.c:100:14: error: symbol 'efi_ioremap' redeclared with different type (originally declared at include2/asm/efi.h:89) - different address spaces Signed-off-by: Randy Dunlap Signed-off-by: Ingo Molnar --- include/asm-x86/efi.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/efi.h b/include/asm-x86/efi.h index 7ed2bd7a7f5..d4f2b0abe92 100644 --- a/include/asm-x86/efi.h +++ b/include/asm-x86/efi.h @@ -86,7 +86,7 @@ extern u64 efi_call6(void *fp, u64 arg1, u64 arg2, u64 arg3, efi_call6((void *)(efi.systab->runtime->f), (u64)(a1), (u64)(a2), \ (u64)(a3), (u64)(a4), (u64)(a5), (u64)(a6)) -extern void *efi_ioremap(unsigned long addr, unsigned long size); +extern void __iomem *efi_ioremap(unsigned long addr, unsigned long size); #endif /* CONFIG_X86_32 */ -- cgit v1.2.3 From 0ed89b06e49c326bff81d81f24b9ba955eb912d5 Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Wed, 13 Aug 2008 10:17:24 +0200 Subject: x86: propagate new nonpanic bootmem macros to CONFIG_HAVE_ARCH_BOOTMEM_NODE Commit 74768ed833344b "page allocator: use no-panic variant of alloc_bootmem() in alloc_large_system_hash()" introduced two new _nopanic macros which are undefined for 
CONFIG_HAVE_ARCH_BOOTMEM_NODE. Signed-off-by: Johannes Weiner Acked-by: "Jan Beulich" Signed-off-by: Ingo Molnar --- include/asm-x86/mmzone_32.h | 6 ++++++ 1 file changed, 6 insertions(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/mmzone_32.h b/include/asm-x86/mmzone_32.h index b2298a22756..5862e646065 100644 --- a/include/asm-x86/mmzone_32.h +++ b/include/asm-x86/mmzone_32.h @@ -97,10 +97,16 @@ static inline int pfn_valid(int pfn) reserve_bootmem_node(NODE_DATA(0), (addr), (size), (flags)) #define alloc_bootmem(x) \ __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS)) +#define alloc_bootmem_nopanic(x) \ + __alloc_bootmem_node_nopanic(NODE_DATA(0), (x), SMP_CACHE_BYTES, \ + __pa(MAX_DMA_ADDRESS)) #define alloc_bootmem_low(x) \ __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, 0) #define alloc_bootmem_pages(x) \ __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, __pa(MAX_DMA_ADDRESS)) +#define alloc_bootmem_pages_nopanic(x) \ + __alloc_bootmem_node_nopanic(NODE_DATA(0), (x), PAGE_SIZE, \ + __pa(MAX_DMA_ADDRESS)) #define alloc_bootmem_low_pages(x) \ __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, 0) #define alloc_bootmem_node(pgdat, x) \ -- cgit v1.2.3 From e49140120c88eb99db1a9172d9ac224c0f2bbdd2 Mon Sep 17 00:00:00 2001 From: Suresh Siddha Date: Wed, 13 Aug 2008 22:02:26 +1000 Subject: crypto: padlock - fix VIA PadLock instruction usage with irq_ts_save/restore() Wolfgang Walter reported this oops on his via C3 using padlock for AES-encryption: ################################################################## BUG: unable to handle kernel NULL pointer dereference at 000001f0 IP: [] __switch_to+0x30/0x117 *pde = 00000000 Oops: 0002 [#1] PREEMPT Modules linked in: Pid: 2071, comm: sleep Not tainted (2.6.26 #11) EIP: 0060:[] EFLAGS: 00010002 CPU: 0 EIP is at __switch_to+0x30/0x117 EAX: 00000000 EBX: c0493300 ECX: dc48dd00 EDX: c0493300 ESI: dc48dd00 EDI: c0493530 EBP: c04cff8c ESP: c04cff7c DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068 Process sleep (pid: 2071, ti=c04ce000 task=dc48dd00 task.ti=d2fe6000) Stack: dc48df30 c0493300 00000000 00000000 d2fe7f44 c03b5b43 c04cffc8 00000046 c0131856 0000005a dc472d3c c0493300 c0493470 d983ae00 00002696 00000000 c0239f54 00000000 c04c4000 c04cffd8 c01025fe c04f3740 00049800 c04cffe0 Call Trace: [] ? schedule+0x285/0x2ff [] ? pm_qos_requirement+0x3c/0x53 [] ? acpi_processor_idle+0x0/0x434 [] ? cpu_idle+0x73/0x7f [] ? rest_init+0x61/0x63 ======================= Wolfgang also found out that adding kernel_fpu_begin() and kernel_fpu_end() around the padlock instructions fixes the oops. Suresh wrote: These padlock instructions though don't use/touch SSE registers, but they behave similarly to other SSE instructions. For example, they might cause DNA faults when cr0.ts is set. While this is a spurious DNA trap, it might cause an oops with the recent fpu code changes. This is the code sequence that is probably causing this problem: a) new app is getting exec'd and it is somewhere in between start_thread() and flush_old_exec() in the load_xyz_binary() b) At point "a", task's fpu state (like TS_USEDFPU, used_math() etc.) is cleared. c) Now we get an interrupt/softirq which starts using these encrypt/decrypt routines in the network stack. This generates a math fault (as cr0.ts is '1') which sets TS_USEDFPU and restores the math that is in the task's xstate. d) Return to exec code path, which does start_thread() which does free_thread_xstate() and sets xstate pointer to NULL while the TS_USEDFPU is still set.
e) At the next context switch from the new exec'd task to another task, we have a scenario where TS_USEDFPU is set but the xstate pointer is null. This can cause an oops during unlazy_fpu() in __switch_to(). Now: 1) This should happen with or without preemption. Viro also encountered a similar problem without CONFIG_PREEMPT. 2) kernel_fpu_begin() and kernel_fpu_end() will fix this problem, because kernel_fpu_begin() will manually do a clts() and won't run into the situation of setting TS_USEDFPU in step "c" above. 3) This was working before the fpu changes, because it's a spurious math fault which doesn't corrupt any fpu/sse registers and the task's math state was always in an allocated state. Without the recent lazy fpu allocation changes, while we don't see an oops, there is a possible race still present in older kernels (for example, while the kernel is using kernel_fpu_begin() in some optimized clear/copy page routine and an interrupt/softirq happens which uses these padlock instructions, generating a DNA fault). This is the failing scenario that existed even before the lazy fpu allocation changes: 0. CPU's TS flag is set 1. kernel using FPU in some optimized copy routine and while doing kernel_fpu_begin() takes an interrupt just before doing clts() 2. Takes an interrupt and ipsec uses a padlock instruction. And we take a DNA fault as the TS flag is still set. 3. We handle the DNA fault and set TS_USEDFPU and clear cr0.ts 4. We complete the padlock routine 5. Go back to step-1, which resumes clts() in kernel_fpu_begin(), finishes the optimized copy routine and does kernel_fpu_end(). At this point, we have cr0.ts again set to '1' but the task's TS_USEDFPU is still set and not cleared. 6. Now the kernel resumes its user operation. And at the next context switch, the kernel sees it has to do an FP save as TS_USEDFPU is still set and then will do an unlazy_fpu() in __switch_to(). unlazy_fpu() will take a DNA fault, as cr0.ts is '1' and now, because we are in __switch_to(), math_state_restore() will get confused and will restore the next task's FP state and will save it in the prev task's FP state. Remember, in __switch_to() we are already on the stack of the next task but take a DNA fault for the prev task. This causes the fpu leakage. Fix the padlock instruction usage by calling them inside the context of the new routines irq_ts_save/restore(), which clear/restore cr0.ts manually in the interrupt context. This will not generate a spurious DNA in the context of the interrupt, which will fix the oops encountered and the possible FPU leakage issue. Reported-and-bisected-by: Wolfgang Walter Signed-off-by: Suresh Siddha Signed-off-by: Herbert Xu --- include/asm-x86/i387.h | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/i387.h b/include/asm-x86/i387.h index 96fa8449ff1..6d3b2106341 100644 --- a/include/asm-x86/i387.h +++ b/include/asm-x86/i387.h @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -236,6 +237,37 @@ static inline void kernel_fpu_end(void) preempt_enable(); } +/* + * Some instructions like VIA's padlock instructions generate a spurious + * DNA fault but don't modify SSE registers. And these instructions + * get used from interrupt context as well.
To prevent these kernel instructions + * in interrupt context from interacting wrongly with other user/kernel fpu usage, we + * should use them only in the context of irq_ts_save/restore() + */ +static inline int irq_ts_save(void) +{ + /* + * If we are in process context, we are ok to take a spurious DNA fault. + * Otherwise, doing clts() in process context requires pre-emption to + * be disabled or some heavy lifting like kernel_fpu_begin() + */ + if (!in_interrupt()) + return 0; + + if (read_cr0() & X86_CR0_TS) { + clts(); + return 1; + } + + return 0; +} + +static inline void irq_ts_restore(int TS_state) +{ + if (TS_state) + stts(); +} + #ifdef CONFIG_X86_64 static inline void save_init_fpu(struct task_struct *tsk) -- cgit v1.2.3 From 519c31bacf78a969efa8d2e55ed8862848f28590 Mon Sep 17 00:00:00 2001 From: Joerg Roedel Date: Thu, 14 Aug 2008 19:55:15 +0200 Subject: x86, AMD IOMMU: use status bit instead of memory write-back for completion wait Signed-off-by: Joerg Roedel Signed-off-by: Ingo Molnar --- include/asm-x86/amd_iommu_types.h | 4 ++++ 1 file changed, 4 insertions(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/amd_iommu_types.h b/include/asm-x86/amd_iommu_types.h index 22aa58ca199..32543229db7 100644 --- a/include/asm-x86/amd_iommu_types.h +++ b/include/asm-x86/amd_iommu_types.h @@ -69,6 +69,9 @@ #define MMIO_EVT_TAIL_OFFSET 0x2018 #define MMIO_STATUS_OFFSET 0x2020 +/* MMIO status bits */ +#define MMIO_STATUS_COM_WAIT_INT_MASK 0x04 + /* feature control bits */ #define CONTROL_IOMMU_EN 0x00ULL #define CONTROL_HT_TUN_EN 0x01ULL @@ -89,6 +92,7 @@ #define CMD_INV_IOMMU_PAGES 0x03 #define CMD_COMPL_WAIT_STORE_MASK 0x01 +#define CMD_COMPL_WAIT_INT_MASK 0x02 #define CMD_INV_IOMMU_PAGES_SIZE_MASK 0x01 #define CMD_INV_IOMMU_PAGES_PDE_MASK 0x02 -- cgit v1.2.3 From 9f5f5fb35d2934fe7dc0cb019854a030efd10cd7 Mon Sep 17 00:00:00 2001 From: Joerg Roedel Date: Thu, 14 Aug 2008 19:55:16 +0200 Subject: x86, AMD IOMMU: initialize device table properly This patch adds device table initializations which forbid memory accesses for devices by default and disable all page faults.
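A minimal usage sketch of the irq_ts_save()/irq_ts_restore() pair added by the padlock fix above; the wrapper and function names here are hypothetical, and only the bracketing pattern comes from the patch (the actual conversion of the drivers/crypto/padlock-* code is not part of this header diff):

    /* Sketch, assuming a hypothetical wrapper around a padlock
     * instruction.  The point is the bracketing: CR0.TS is cleared
     * around the FPU-faulting instruction when in interrupt context,
     * instead of letting it raise a spurious DNA fault. */
    static void example_encrypt_block(void *out, const void *in, void *key)
    {
            int ts_state;

            ts_state = irq_ts_save();          /* clts() only if in_interrupt() */
            padlock_xcrypt_ecb(in, out, key);  /* hypothetical instruction wrapper */
            irq_ts_restore(ts_state);          /* stts() only if we cleared TS */
    }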
Signed-off-by: Joerg Roedel Signed-off-by: Ingo Molnar --- include/asm-x86/amd_iommu_types.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/amd_iommu_types.h b/include/asm-x86/amd_iommu_types.h index 32543229db7..f0beca73e36 100644 --- a/include/asm-x86/amd_iommu_types.h +++ b/include/asm-x86/amd_iommu_types.h @@ -103,6 +103,7 @@ #define DEV_ENTRY_TRANSLATION 0x01 #define DEV_ENTRY_IR 0x3d #define DEV_ENTRY_IW 0x3e +#define DEV_ENTRY_NO_PAGE_FAULT 0x62 #define DEV_ENTRY_EX 0x67 #define DEV_ENTRY_SYSMGT1 0x68 #define DEV_ENTRY_SYSMGT2 0x69 -- cgit v1.2.3 From 8a456695c5020d6317f9c7af190999e9414b0d3e Mon Sep 17 00:00:00 2001 From: Joerg Roedel Date: Thu, 14 Aug 2008 19:55:17 +0200 Subject: x86, AMD IOMMU: cleanup: replace LOW_U32 macro with generic lower_32_bits Signed-off-by: Joerg Roedel Signed-off-by: Ingo Molnar --- include/asm-x86/amd_iommu_types.h | 3 --- 1 file changed, 3 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/amd_iommu_types.h b/include/asm-x86/amd_iommu_types.h index f0beca73e36..dcc81206739 100644 --- a/include/asm-x86/amd_iommu_types.h +++ b/include/asm-x86/amd_iommu_types.h @@ -31,9 +31,6 @@ #define ALIAS_TABLE_ENTRY_SIZE 2 #define RLOOKUP_TABLE_ENTRY_SIZE (sizeof(void *)) -/* helper macros */ -#define LOW_U32(x) ((x) & ((1ULL << 32)-1)) - /* Length of the MMIO region for the AMD IOMMU */ #define MMIO_REGION_LENGTH 0x4000 -- cgit v1.2.3 From 394a15051c33f2b18e72f42283b36a9388fa414b Mon Sep 17 00:00:00 2001 From: Mark Langsdorf Date: Thu, 14 Aug 2008 09:11:26 -0500 Subject: x86: invalidate caches before going into suspend MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit When a CPU core is shut down, all of its caches need to be flushed to prevent stale data from causing errors if the core is resumed. Current Linux suspend code performs an assignment after the flush, which can add dirty data back to the cache. On some AMD platforms, additional speculative reads have caused crashes on resume because of this dirty data. Relocate the cache flush to be the very last thing done before halting. Tie into an assembly line so the compiler will not reorder it. Add some documentation explaining what is going on and why we're doing this. Signed-off-by: Mark Langsdorf Acked-by: Mark Borden Acked-by: Michael Hohmuth Signed-off-by: Ingo Molnar --- include/asm-x86/processor.h | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/processor.h b/include/asm-x86/processor.h index 5f58da401b4..4df3e2f6fb5 100644 --- a/include/asm-x86/processor.h +++ b/include/asm-x86/processor.h @@ -728,6 +728,29 @@ extern unsigned long boot_option_idle_override; extern unsigned long idle_halt; extern unsigned long idle_nomwait; +/* + * on systems with caches, caches must be flushed as the absolute + * last instruction before going into a suspended halt. Otherwise, + * dirty data can linger in the cache and become stale on resume, + * leading to strange errors. + * + * perform a variety of operations to guarantee that the compiler + * will not reorder instructions. wbinvd itself is serializing + * so the processor will not reorder. + * + * Systems without cache can just go into halt.
+ */ +static inline void wbinvd_halt(void) +{ + mb(); + /* check for clflush to determine if wbinvd is legal */ + if (cpu_has_clflush) + asm volatile("cli; wbinvd; 1: hlt; jmp 1b" : : : "memory"); + else + while (1) + halt(); +} + extern void enable_sep_cpu(void); extern int sysenter_setup(void); -- cgit v1.2.3 From 1c5b0eb66d74683e2be5da0c53e33c1f4ca982fd Mon Sep 17 00:00:00 2001 From: Mikael Pettersson Date: Wed, 13 Aug 2008 21:07:07 +0200 Subject: x86: fix readb() et al compile error with gcc-3.2.3 Building 2.6.27-rc1 on x86 with gcc-3.2.3 fails with: In file included from include/asm/dma.h:12, from include/linux/bootmem.h:8, from init/main.c:26: include/asm/io.h: In function `readb': include/asm/io.h:32: syntax error before string constant include/asm/io.h: In function `readw': include/asm/io.h:33: syntax error before string constant include/asm/io.h: In function `readl': include/asm/io.h:34: syntax error before string constant include/asm/io.h: In function `__readb': include/asm/io.h:36: syntax error before string constant include/asm/io.h: In function `__readw': include/asm/io.h:37: syntax error before string constant include/asm/io.h: In function `__readl': include/asm/io.h:38: syntax error before string constant make[1]: *** [init/main.o] Error 1 make: *** [init] Error 2 Starting with 2.6.27-rc1 readb() et al are generated by a build_mmio_read() macro, which generates asm() statements with output register constraints like "=" "q", i.e. as two adjacent string literals. This doesn't work with gcc-3.2.3. Fixed by moving the "=" part into the callers' reg parameter (as suggested by Ingo). Build and boot-tested with gcc-3.2.3 on 32 and 64-bit x86. Fixes . Signed-off-by: Mikael Pettersson Signed-off-by: Ingo Molnar --- include/asm-x86/io.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/io.h b/include/asm-x86/io.h index bf5d629b3a3..0f954dc89cb 100644 --- a/include/asm-x86/io.h +++ b/include/asm-x86/io.h @@ -21,7 +21,7 @@ extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys); #define build_mmio_read(name, size, type, reg, barrier) \ static inline type name(const volatile void __iomem *addr) \ -{ type ret; asm volatile("mov" size " %1,%0":"=" reg (ret) \ +{ type ret; asm volatile("mov" size " %1,%0":reg (ret) \ :"m" (*(volatile type __force *)addr) barrier); return ret; } #define build_mmio_write(name, size, type, reg, barrier) \ @@ -29,13 +29,13 @@ static inline void name(type val, volatile void __iomem *addr) \ { asm volatile("mov" size " %0,%1": :reg (val), \ "m" (*(volatile type __force *)addr) barrier); } -build_mmio_read(readb, "b", unsigned char, "q", :"memory") -build_mmio_read(readw, "w", unsigned short, "r", :"memory") -build_mmio_read(readl, "l", unsigned int, "r", :"memory") +build_mmio_read(readb, "b", unsigned char, "=q", :"memory") +build_mmio_read(readw, "w", unsigned short, "=r", :"memory") +build_mmio_read(readl, "l", unsigned int, "=r", :"memory") -build_mmio_read(__readb, "b", unsigned char, "q", ) -build_mmio_read(__readw, "w", unsigned short, "r", ) -build_mmio_read(__readl, "l", unsigned int, "r", ) +build_mmio_read(__readb, "b", unsigned char, "=q", ) +build_mmio_read(__readw, "w", unsigned short, "=r", ) +build_mmio_read(__readl, "l", unsigned int, "=r", ) build_mmio_write(writeb, "b", unsigned char, "q", :"memory") build_mmio_write(writew, "w", unsigned short, "r", :"memory") @@ -59,8 +59,8 @@ build_mmio_write(__writel, "l", unsigned int, "r", ) #define mmiowb() 
barrier() #ifdef CONFIG_X86_64 -build_mmio_read(readq, "q", unsigned long, "r", :"memory") -build_mmio_read(__readq, "q", unsigned long, "r", ) +build_mmio_read(readq, "q", unsigned long, "=r", :"memory") +build_mmio_read(__readq, "q", unsigned long, "=r", ) build_mmio_write(writeq, "q", unsigned long, "r", :"memory") build_mmio_write(__writeq, "q", unsigned long, "r", ) -- cgit v1.2.3 From 7bc069c6bc4ede519a7116be1b9e149a1dbf787a Mon Sep 17 00:00:00 2001 From: Jan Beulich Date: Mon, 4 Aug 2008 14:38:54 +0100 Subject: x86: fix spin_is_contended() The masked difference is what needs to be compared against 1, rather than the difference of masked values (which can be negative; e.g. once the 8-bit ticket counters wrap, tmp = 0x01ff makes the old test compute 0x01 - 0xff = -254 and miss the contention that the masked difference (0x01 - 0xff) & 0xff = 2 correctly reports). Signed-off-by: Jan Beulich Acked-by: Nick Piggin Signed-off-by: Ingo Molnar --- include/asm-x86/spinlock.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/spinlock.h b/include/asm-x86/spinlock.h index 4f9a9861799..e39c790dbfd 100644 --- a/include/asm-x86/spinlock.h +++ b/include/asm-x86/spinlock.h @@ -65,7 +65,7 @@ static inline int __ticket_spin_is_contended(raw_spinlock_t *lock) { int tmp = ACCESS_ONCE(lock->slock); - return (((tmp >> 8) & 0xff) - (tmp & 0xff)) > 1; + return (((tmp >> 8) - tmp) & 0xff) > 1; } static __always_inline void __ticket_spin_lock(raw_spinlock_t *lock) @@ -127,7 +127,7 @@ static inline int __ticket_spin_is_contended(raw_spinlock_t *lock) { int tmp = ACCESS_ONCE(lock->slock); - return (((tmp >> 16) & 0xffff) - (tmp & 0xffff)) > 1; + return (((tmp >> 16) - tmp) & 0xffff) > 1; } static __always_inline void __ticket_spin_lock(raw_spinlock_t *lock) -- cgit v1.2.3 From 0d5cdc97e242a5589e5dca23277675f4b4482490 Mon Sep 17 00:00:00 2001 From: Jens Rottmann Date: Mon, 4 Aug 2008 14:40:16 +0200 Subject: x86, geode-mfgpt: check IRQ before using MFGPT as clocksource Adds a simple IRQ autodetection to the AMD Geode MFGPT driver, and more importantly, adds some checks of whether IRQs can actually be received on the chosen line. This fixes cases where the MFGPT is selected as the clocksource but does not produce any ticks, so the kernel simply starves during boot. Signed-off-by: Jens Rottmann Cc: Andres Salomon Cc: linux-geode@bombadil.infradead.org Cc: Jordan Crouse Signed-off-by: Ingo Molnar --- include/asm-x86/geode.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/geode.h b/include/asm-x86/geode.h index bb06027fc83..2c1cda0b8a8 100644 --- a/include/asm-x86/geode.h +++ b/include/asm-x86/geode.h @@ -50,6 +50,7 @@ extern int geode_get_dev_base(unsigned int dev); #define MSR_PIC_YSEL_HIGH 0x51400021 #define MSR_PIC_ZSEL_LOW 0x51400022 #define MSR_PIC_ZSEL_HIGH 0x51400023 +#define MSR_PIC_IRQM_LPC 0x51400025 #define MSR_MFGPT_IRQ 0x51400028 #define MSR_MFGPT_NR 0x51400029 @@ -237,7 +238,7 @@ static inline u16 geode_mfgpt_read(int timer, u16 reg) } extern int geode_mfgpt_toggle_event(int timer, int cmp, int event, int enable); -extern int geode_mfgpt_set_irq(int timer, int cmp, int irq, int enable); +extern int geode_mfgpt_set_irq(int timer, int cmp, int *irq, int enable); extern int geode_mfgpt_alloc_timer(int timer, int domain); #define geode_mfgpt_setup_irq(t, c, i) geode_mfgpt_set_irq((t), (c), (i), 1) -- cgit v1.2.3 From 66d4bdf22b8652cda215e2653c8bbec7a767ed57 Mon Sep 17 00:00:00 2001 From: Jan Beulich Date: Thu, 31 Jul 2008 16:48:31 +0100 Subject: x86-64: fix overlap of modules and fixmap areas Plus add a build time check so this doesn't go unnoticed again.
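The pgtable_64.h hunk that follows only moves MODULES_END out of the fixmap area; the build time check mentioned in the changelog lives outside include/asm-x86 and is not shown in this diff. A sketch of the kind of compile-time assertion meant, assuming the fixmap/vsyscall region begins at VSYSCALL_START (the patch's actual expression may differ):

    /* sketch only: placed in some init function, this fails the build
     * if the module area ever grows into the vsyscall/fixmap region again */
    BUILD_BUG_ON(MODULES_END > VSYSCALL_START);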
Signed-off-by: Jan Beulich Signed-off-by: Ingo Molnar --- include/asm-x86/pgtable_64.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/pgtable_64.h b/include/asm-x86/pgtable_64.h index ac5fff4cc58..549144d03d9 100644 --- a/include/asm-x86/pgtable_64.h +++ b/include/asm-x86/pgtable_64.h @@ -151,7 +151,7 @@ static inline void native_pgd_clear(pgd_t *pgd) #define VMALLOC_END _AC(0xffffe1ffffffffff, UL) #define VMEMMAP_START _AC(0xffffe20000000000, UL) #define MODULES_VADDR _AC(0xffffffffa0000000, UL) -#define MODULES_END _AC(0xfffffffffff00000, UL) +#define MODULES_END _AC(0xffffffffff000000, UL) #define MODULES_LEN (MODULES_END - MODULES_VADDR) #ifndef __ASSEMBLY__ -- cgit v1.2.3 From 163f6876f5c3ff8215e900b93779e960a56b3694 Mon Sep 17 00:00:00 2001 From: Huang Ying Date: Fri, 15 Aug 2008 00:40:22 -0700 Subject: kexec jump: rename KEXEC_CONTROL_CODE_SIZE to KEXEC_CONTROL_PAGE_SIZE Rename KEXEC_CONTROL_CODE_SIZE to KEXEC_CONTROL_PAGE_SIZE, because the control page is used for more than just code on some platforms. For example in kexec jump, it is used for data and stack too. [akpm@linux-foundation.org: unbreak powerpc and arm, finish conversion] Signed-off-by: Huang Ying Cc: Pavel Machek Cc: "Rafael J. Wysocki" Cc: "Eric W. Biederman" Cc: Vivek Goyal Cc: Ingo Molnar Cc: Russell King Cc: Benjamin Herrenschmidt Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/asm-x86/kexec.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/kexec.h b/include/asm-x86/kexec.h index c0e52a14fd4..f6fb3d21883 100644 --- a/include/asm-x86/kexec.h +++ b/include/asm-x86/kexec.h @@ -63,7 +63,7 @@ /* Maximum address we can use for the control code buffer */ # define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE -# define KEXEC_CONTROL_CODE_SIZE 4096 +# define KEXEC_CONTROL_PAGE_SIZE 4096 /* The native architecture */ # define KEXEC_ARCH KEXEC_ARCH_386 @@ -79,7 +79,7 @@ # define KEXEC_CONTROL_MEMORY_LIMIT (0xFFFFFFFFFFUL) /* Allocate one page for the pdp and the second for the code */ -# define KEXEC_CONTROL_CODE_SIZE (4096UL + 4096UL) +# define KEXEC_CONTROL_PAGE_SIZE (4096UL + 4096UL) /* The native architecture */ # define KEXEC_ARCH KEXEC_ARCH_X86_64 -- cgit v1.2.3 From fb45daa69d287b394eca1619b3fadff7c0215c71 Mon Sep 17 00:00:00 2001 From: Huang Ying Date: Fri, 15 Aug 2008 00:40:23 -0700 Subject: kexec jump: check code size in control page Kexec/kexec-jump requires the code size in the control page to be less than PAGE_SIZE/2. This patch adds link-time checking for this. The ASSERT() directive of the ld linker script is used as the link-time checking mechanism. [akpm@linux-foundation.org: build fix] Signed-off-by: Huang Ying Cc: Pavel Machek Cc: "Rafael J. Wysocki" Cc: "Eric W.
Biederman" Acked-by: Vivek Goyal Cc: Ingo Molnar Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/asm-x86/kexec.h | 4 ++++ 1 file changed, 4 insertions(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/kexec.h b/include/asm-x86/kexec.h index f6fb3d21883..4246ab7dc98 100644 --- a/include/asm-x86/kexec.h +++ b/include/asm-x86/kexec.h @@ -41,6 +41,10 @@ # define PAGES_NR 17 #endif +#ifdef CONFIG_X86_32 +# define KEXEC_CONTROL_CODE_MAX_SIZE 2048 +#endif + #ifndef __ASSEMBLY__ #include -- cgit v1.2.3 From 2fdc86901d2ab30a12402b46238951d2a7891590 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Wed, 13 Aug 2008 18:02:18 +0200 Subject: x86: add MAP_STACK mmap flag as per this discussion: http://lkml.org/lkml/2008/8/12/423 Pardo reported that 64-bit threaded apps, if their stacks exceed the combined size of ~4GB, slow down drastically in pthread_create() - because glibc uses MAP_32BIT to allocate the stacks. The use of MAP_32BIT is a legacy hack - to speed up context switching on certain early model 64-bit P4 CPUs. So introduce a new flag to be used by glibc instead, to not constrain 64-bit apps like this. glibc can switch to this new flag straight away - it will be ignored by the kernel. If those old CPUs ever matter to anyone, support for it can be implemented. Signed-off-by: Ingo Molnar Acked-by: Ulrich Drepper --- include/asm-x86/mman.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/mman.h b/include/asm-x86/mman.h index c1682b542da..90bc4108a4f 100644 --- a/include/asm-x86/mman.h +++ b/include/asm-x86/mman.h @@ -12,6 +12,7 @@ #define MAP_NORESERVE 0x4000 /* don't check for reservations */ #define MAP_POPULATE 0x8000 /* populate (prefault) pagetables */ #define MAP_NONBLOCK 0x10000 /* do not block on IO */ +#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */ #define MCL_CURRENT 1 /* lock all current mappings */ #define MCL_FUTURE 2 /* lock all future mappings */ -- cgit v1.2.3 From cd98a04a59e2f94fa64d5bf1e26498d27427d5e7 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Wed, 13 Aug 2008 18:02:18 +0200 Subject: x86: add MAP_STACK mmap flag as per this discussion: http://lkml.org/lkml/2008/8/12/423 Pardo reported that 64-bit threaded apps, if their stacks exceed the combined size of ~4GB, slow down drastically in pthread_create() - because glibc uses MAP_32BIT to allocate the stacks. The use of MAP_32BIT is a legacy hack - to speed up context switching on certain early model 64-bit P4 CPUs. So introduce a new flag to be used by glibc instead, to not constrain 64-bit apps like this. glibc can switch to this new flag straight away - it will be ignored by the kernel. If those old CPUs ever matter to anyone, support for it can be implemented. 
Signed-off-by: Ingo Molnar Acked-by: Ulrich Drepper Signed-off-by: Linus Torvalds --- include/asm-x86/mman.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include/asm-x86') diff --git a/include/asm-x86/mman.h b/include/asm-x86/mman.h index c1682b542da..90bc4108a4f 100644 --- a/include/asm-x86/mman.h +++ b/include/asm-x86/mman.h @@ -12,6 +12,7 @@ #define MAP_NORESERVE 0x4000 /* don't check for reservations */ #define MAP_POPULATE 0x8000 /* populate (prefault) pagetables */ #define MAP_NONBLOCK 0x10000 /* do not block on IO */ +#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */ #define MCL_CURRENT 1 /* lock all current mappings */ #define MCL_FUTURE 2 /* lock all future mappings */ -- cgit v1.2.3 From 3c3b5c3b0bf798316a410e27e3d7e6f015663602 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Sat, 16 Aug 2008 03:39:26 -0400 Subject: x86: correct register constraints for 64-bit atomic operations x86_64 add/sub atomic ops do not seem to accept integer values bigger than 32 bits as immediates. Intel's add/sub documentation specifies that they have to be passed in registers. The only operation in the x86-64 architecture which accepts arbitrary 64-bit immediates is "movq" to any register; similarly, the only operation which accepts an arbitrary 64-bit displacement is "movabs" to or from al/ax/eax/rax. http://gcc.gnu.org/onlinedocs/gcc-4.3.0/gcc/Machine-Constraints.html states: e 32-bit signed integer constant, or a symbolic reference known to fit that range (for immediate operands in sign-extending x86-64 instructions). Z 32-bit unsigned integer constant, or a symbolic reference known to fit that range (for immediate operands in zero-extending x86-64 instructions). Since add/sub do sign extension, using the "e" constraint seems appropriate. It applies to 2.6.27-rc, 2.6.26, 2.6.25... Signed-off-by: Mathieu Desnoyers Signed-off-by: H.
Peter Anvin Signed-off-by: Ingo Molnar --- include/asm-x86/atomic_64.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/atomic_64.h b/include/asm-x86/atomic_64.h index a0095191c02..91c7d03e65b 100644 --- a/include/asm-x86/atomic_64.h +++ b/include/asm-x86/atomic_64.h @@ -228,7 +228,7 @@ static inline void atomic64_add(long i, atomic64_t *v) { asm volatile(LOCK_PREFIX "addq %1,%0" : "=m" (v->counter) - : "ir" (i), "m" (v->counter)); + : "er" (i), "m" (v->counter)); } /** @@ -242,7 +242,7 @@ static inline void atomic64_sub(long i, atomic64_t *v) { asm volatile(LOCK_PREFIX "subq %1,%0" : "=m" (v->counter) - : "ir" (i), "m" (v->counter)); + : "er" (i), "m" (v->counter)); } /** @@ -260,7 +260,7 @@ static inline int atomic64_sub_and_test(long i, atomic64_t *v) asm volatile(LOCK_PREFIX "subq %2,%0; sete %1" : "=m" (v->counter), "=qm" (c) - : "ir" (i), "m" (v->counter) : "memory"); + : "er" (i), "m" (v->counter) : "memory"); return c; } @@ -341,7 +341,7 @@ static inline int atomic64_add_negative(long i, atomic64_t *v) asm volatile(LOCK_PREFIX "addq %2,%0; sets %1" : "=m" (v->counter), "=qm" (c) - : "ir" (i), "m" (v->counter) : "memory"); + : "er" (i), "m" (v->counter) : "memory"); return c; } -- cgit v1.2.3 From c72a5efec1193faa2ef34c0bd48d7251a70ec934 Mon Sep 17 00:00:00 2001 From: Marcin Slusarz Date: Mon, 11 Aug 2008 00:11:13 +0200 Subject: x86: mmconf: fix section mismatch warning WARNING: arch/x86/kernel/built-in.o(.cpuinit.text+0x1591): Section mismatch in reference from the function init_amd() to the function .init.text:check_enable_amd_mmconf_dmi() The function __cpuinit init_amd() references a function __init check_enable_amd_mmconf_dmi(). If check_enable_amd_mmconf_dmi is only used by init_amd then annotate check_enable_amd_mmconf_dmi with a matching annotation. check_enable_amd_mmconf_dmi is only called from init_amd which is __cpuinit Signed-off-by: Marcin Slusarz Signed-off-by: H. Peter Anvin Signed-off-by: Ingo Molnar --- include/asm-x86/mmconfig.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/mmconfig.h b/include/asm-x86/mmconfig.h index 95beda07c6f..e293ab81e85 100644 --- a/include/asm-x86/mmconfig.h +++ b/include/asm-x86/mmconfig.h @@ -3,7 +3,7 @@ #ifdef CONFIG_PCI_MMCONFIG extern void __cpuinit fam10h_check_enable_mmcfg(void); -extern void __init check_enable_amd_mmconf_dmi(void); +extern void __cpuinit check_enable_amd_mmconf_dmi(void); #else static inline void fam10h_check_enable_mmcfg(void) { } static inline void check_enable_amd_mmconf_dmi(void) { } -- cgit v1.2.3 From c6a92a2501b35880d2e357dbd7f2cbc9a06f1058 Mon Sep 17 00:00:00 2001 From: Marcin Slusarz Date: Sun, 17 Aug 2008 17:50:50 +0200 Subject: x86, percpu: silence section mismatch warnings related to EARLY_PER_CPU variables Quoting Mike Travis in "x86: cleanup early per cpu variables/accesses v4" (23ca4bba3e20c6c3cb11c1bb0ab4770b724d39ac): The DEFINE macro defines the per_cpu variable as well as the early map and pointer. It also initializes the per_cpu variable and map elements to "_initvalue". The early_* macros provide access to the initial map (usually setup during system init) and the early pointer. This pointer is initialized to point to the early map but is then NULL'ed when the actual per_cpu areas are setup. After that the per_cpu variable is the correct access to the variable. 
As these variables are NULL'ed before __init sections are dropped (in setup_per_cpu_maps), they can be safely annotated as __ref. This change silences following section mismatch warnings: WARNING: vmlinux.o(.data+0x46c0): Section mismatch in reference from the variable x86_cpu_to_apicid_early_ptr to the variable .init.data:x86_cpu_to_apicid_early_map The variable x86_cpu_to_apicid_early_ptr references the variable __initdata x86_cpu_to_apicid_early_map If the reference is valid then annotate the variable with __init* (see linux/init.h) or name the variable: *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console, WARNING: vmlinux.o(.data+0x46c8): Section mismatch in reference from the variable x86_bios_cpu_apicid_early_ptr to the variable .init.data:x86_bios_cpu_apicid_early_map The variable x86_bios_cpu_apicid_early_ptr references the variable __initdata x86_bios_cpu_apicid_early_map If the reference is valid then annotate the variable with __init* (see linux/init.h) or name the variable: *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console, WARNING: vmlinux.o(.data+0x46d0): Section mismatch in reference from the variable x86_cpu_to_node_map_early_ptr to the variable .init.data:x86_cpu_to_node_map_early_map The variable x86_cpu_to_node_map_early_ptr references the variable __initdata x86_cpu_to_node_map_early_map If the reference is valid then annotate the variable with __init* (see linux/init.h) or name the variable: *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console, Signed-off-by: Marcin Slusarz Cc: Mike Travis Signed-off-by: Ingo Molnar --- include/asm-x86/percpu.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include/asm-x86') diff --git a/include/asm-x86/percpu.h b/include/asm-x86/percpu.h index 4e91ee1e37a..f643a3a92da 100644 --- a/include/asm-x86/percpu.h +++ b/include/asm-x86/percpu.h @@ -182,7 +182,7 @@ do { \ DEFINE_PER_CPU(_type, _name) = _initvalue; \ __typeof__(_type) _name##_early_map[NR_CPUS] __initdata = \ { [0 ... NR_CPUS-1] = _initvalue }; \ - __typeof__(_type) *_name##_early_ptr = _name##_early_map + __typeof__(_type) *_name##_early_ptr __refdata = _name##_early_map #define EXPORT_EARLY_PER_CPU_SYMBOL(_name) \ EXPORT_PER_CPU_SYMBOL(_name) -- cgit v1.2.3
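A sketch of the lifecycle described above, using a hypothetical variable; DEFINE_EARLY_PER_CPU, early_per_cpu() and per_cpu() are the real macros from percpu.h, but this variable and its call sites are illustrative:

    /* before setup_per_cpu_areas(): accesses resolve through the
     * __initdata early map via the early pointer (now __refdata) */
    DEFINE_EARLY_PER_CPU(u16, x86_example_id, BAD_APICID);

    /* early boot code fills in the early map */
    early_per_cpu(x86_example_id, cpu) = id;

    /* once the real per_cpu areas are set up, the early pointer is
     * NULL'ed and the plain per_cpu variable is the correct access */
    u16 cur = per_cpu(x86_example_id, cpu);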