path: root/include/asm-x86/system_64.h
Age  Commit message  Author
2007-12-18  x86: also define AT_VECTOR_SIZE_ARCH  (Jan Beulich)
The patch introducing this left out 64-bit x86, despite it also having extra entries. This solves Xen guest troubles.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
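A minimal sketch of the kind of definition the patch adds; the entry count shown here is an assumption and must match the number of extra AT_* entries that ARCH_DLINFO pushes into the auxiliary vector:

	/* Illustrative value only: must equal the number of extra auxv
	 * entries installed by ARCH_DLINFO on this architecture. */
	#define AT_VECTOR_SIZE_ARCH 2	/* entries in ARCH_DLINFO */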
2007-10-17  x86: remove STR() macros  (Glauber de Oliveira Costa)
This patch removes the __STR() and STR() macros from x86_64 header files. They seem to be legacy and have no more users; even if there were users, they should use __stringify() instead. In fact, there was a third place in which this macro was defined (ia32_binfmt.c) and used just below; in that file the usage was properly converted to __stringify().

[ tglx: arch/x86 adaptation ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
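For reference, __stringify() (from include/linux/stringify.h) expands its argument through a helper macro before turning it into a string literal, which is what the legacy __STR()/STR() pair provided. The EXAMPLE_VECTOR constant and helper function below are hypothetical, shown only to illustrate the usage:

	#define __stringify_1(x...)	#x
	#define __stringify(x...)	__stringify_1(x)

	/* Hypothetical use: paste a numeric macro into an asm string. */
	#define EXAMPLE_VECTOR 0x20
	static inline void trigger_example_vector(void)
	{
		/* Expands to: asm volatile("int $0x20"); */
		asm volatile("int $" __stringify(EXAMPLE_VECTOR));
	}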
2007-10-17  x86: Create clflush() inline, remove hardcoded wbinvd  (H. Peter Anvin)
Create an inline function for clflush(), with the proper arguments, and use it instead of hard-coding the instruction. This also removes one instance of hard-coded wbinvd, based on a patch by Glauber de Oliveira Costa.

[ tglx: arch/x86 adaptation ]

Cc: Andi Kleen <andi@firstfloor.org>
Cc: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
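A sketch of what such a clflush() inline can look like (an assumed form, not necessarily the exact patch); the "+m" constraint ties the asm to the addressed cache line so the compiler cannot reorder it away from accesses to that line:

	static inline void clflush(volatile void *p)
	{
		/* Flush the cache line containing *p. */
		asm volatile("clflush %0" : "+m" (*(volatile char *)p));
	}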
2007-10-17  x86: mark read_crX() asm code as volatile  (Kirill Korotaev)
Some gcc versions (I checked at least 4.1.1 from RHEL5 and 4.1.2 from Gentoo) can generate incorrect code when read_crX()/write_crX() calls are mixed, due to cached results of read_crX(). The small app for x86_64 below, compiled with -O2, demonstrates this (i686 does the same thing):
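What follows is not that test program but a sketch of the fix itself, assuming the usual x86-64 control-register accessors: the asm is marked volatile so gcc cannot reuse a cached read_crX() result across an intervening write_crX().

	static inline unsigned long read_cr3(void)
	{
		unsigned long cr3;
		/* volatile: the register can change behind gcc's back
		 * (e.g. via write_cr3()), so this read must not be cached
		 * or reordered away. */
		asm volatile("movq %%cr3,%0" : "=r" (cr3));
		return cr3;
	}

	static inline void write_cr3(unsigned long val)
	{
		asm volatile("movq %0,%%cr3" : : "r" (val) : "memory");
	}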
2007-10-12  x86: optimise barriers  (Nick Piggin)
According to the latest memory ordering specification documents from Intel and AMD, both manufacturers are committed to in-order loads from cacheable memory for the x86 architecture. Hence, smp_rmb() may be a simple barrier.

Also according to those documents, and according to existing practice in Linux (eg. spin_unlock doesn't enforce ordering), stores to cacheable memory are visible in program order too. Special string stores are safe -- their constituent stores may be out of order, but they must complete in order WRT surrounding stores. Nontemporal stores to WB memory can go out of order, and so they should be fenced explicitly to make them appear in-order WRT other stores. Hence, smp_wmb() may be a simple barrier.

http://developer.intel.com/products/processor/manuals/318147.pdf
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf

In userspace microbenchmarks on a core2 system, fence instructions range anywhere from around 15 cycles to 50, which may not be totally insignificant in performance-critical paths (code size will go down too).

However, the primary motivation for this is to have the canonical barrier implementation for the x86 architecture. smp_rmb on buggy pentium pros remains a locked op, which is apparently required.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
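A sketch of the resulting SMP barrier definitions on x86-64 under this reasoning; barrier() is the standard gcc compiler-only barrier, mb() is assumed to be the usual mfence-based full barrier, and the surrounding #ifdef structure is an assumption:

	/* Compiler barrier: emits no instruction, only stops gcc reordering. */
	#define barrier()	asm volatile("" ::: "memory")

	#ifdef CONFIG_SMP
	#define smp_mb()	mb()		/* full fence still required */
	#define smp_rmb()	barrier()	/* loads are in order on x86 */
	#define smp_wmb()	barrier()	/* cacheable stores are in order */
	#endif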
2007-10-12  x86: fix IO write barrier  (Nick Piggin)
wmb() on x86 must always include a barrier, because stores can go out of order in many cases when dealing with devices (eg. WC memory).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
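A sketch of the distinction, under the assumption that the mandatory wmb() is implemented with sfence while smp_wmb() stays a compiler barrier: wmb() must also order stores to device memory (for example write-combining mappings), so it always emits a real fence.

	/* Always a real store fence: device/WC stores can be reordered. */
	#define wmb()	asm volatile("sfence" ::: "memory")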
2007-10-11  i386/x86_64: move headers to include/asm-x86  (Thomas Gleixner)
Move the headers to include/asm-x86 and fix up the header install make rules.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>