author     Nick Piggin <npiggin@suse.de>      2008-11-11 17:51:18 +0000
committer  Paul Mackerras <paulus@samba.org>  2008-11-19 16:04:57 +1100
commit     957ab07b44d839ee8267e827fc4e8f1853798f57 (patch)
tree       d282112a07dfc82dfc44522ad9e5a0baffdd6888 /arch/powerpc/include/asm
parent     46d075be585eae2b74265e4e64ca38dde16a09c6 (diff)
powerpc: Optimise smp_rmb
After commit 598056d5af8fef1dbe8f96f5c2b641a528184e5a ("[POWERPC] Fix
rmb to order cacheable vs. noncacheable"), rmb() becomes a sync
instruction, which is needed to order cacheable vs noncacheable loads.
However, smp_rmb() is #defined to rmb(), even though smp_rmb() only has
to order cacheable accesses and can therefore be an lwsync.

This patch restores smp_rmb() performance by using lwsync there, and
updates the comments accordingly.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
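
For context, the pairing this change relies on looks like the sketch
below (the names and values are hypothetical, not from the patch):
smp_wmb() on the producer side orders two cacheable stores, and the
now-lwsync-based smp_rmb() on the consumer side orders the matching
cacheable loads.

/* Hypothetical producer/consumer handoff through cacheable memory,
 * illustrating the barrier pairing; `data' and `ready' are made up
 * for this example.
 */
static int data;
static int ready;

static void producer(void)
{
	data = 42;	/* plain cacheable store */
	smp_wmb();	/* order the data store before the ready store */
	ready = 1;
}

static int consumer(void)
{
	if (ready) {		/* plain cacheable load */
		smp_rmb();	/* order the ready load before the data load;
				 * lwsync suffices, since both are cacheable */
		return data;	/* observes 42 once ready is seen */
	}
	return -1;		/* not published yet */
}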
Diffstat (limited to 'arch/powerpc/include/asm')
 arch/powerpc/include/asm/system.h | 20 ++++++++++----------
 1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/system.h b/arch/powerpc/include/asm/system.h
index 917f515bc67..2a4be19a92c 100644
--- a/arch/powerpc/include/asm/system.h
+++ b/arch/powerpc/include/asm/system.h
@@ -23,15 +23,17 @@
  * read_barrier_depends() prevents data-dependent loads being reordered
  * across this point (nop on PPC).
  *
- * We have to use the sync instructions for mb(), since lwsync doesn't
- * order loads with respect to previous stores.  Lwsync is fine for
- * rmb(), though. Note that rmb() actually uses a sync on 32-bit
- * architectures.
+ * *mb() variants without smp_ prefix must order all types of memory
+ * operations with one another. sync is the only instruction sufficient
+ * to do this.
  *
- * For wmb(), we use sync since wmb is used in drivers to order
- * stores to system memory with respect to writes to the device.
- * However, smp_wmb() can be a lighter-weight lwsync or eieio barrier
- * on SMP since it is only used to order updates to system memory.
+ * For the smp_ barriers, ordering is for cacheable memory operations
+ * only. We have to use the sync instruction for smp_mb(), since lwsync
+ * doesn't order loads with respect to previous stores. Lwsync can be
+ * used for smp_rmb() and smp_wmb().
+ *
+ * However, on CPUs that don't support lwsync, lwsync actually maps to a
+ * heavy-weight sync, so smp_wmb() can be a lighter-weight eieio.
  */
 #define mb()   __asm__ __volatile__ ("sync" : : : "memory")
 #define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
@@ -51,7 +53,7 @@
 #endif

 #define smp_mb()	mb()
-#define smp_rmb()	rmb()
+#define smp_rmb()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
 #define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 #define smp_read_barrier_depends()	read_barrier_depends()
 #else
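
To see why the new definition stays cheap, here is a rough sketch of
how smp_rmb() expands after preprocessing, assuming LWSYNC and
stringify_in_c definitions along the lines of asm/synch.h and
asm/asm-compat.h from this era of the tree (paraphrased, so treat the
exact conditionals as an assumption rather than the tree's contents):

/* Paraphrased approximation of asm/asm-compat.h and asm/synch.h;
 * the exact config conditionals in the real tree may differ. */
#ifdef __powerpc64__
#define LWSYNC	lwsync			/* lightweight sync on 64-bit CPUs */
#else
#define LWSYNC	sync			/* fallback where lwsync is unsupported */
#endif

#define __stringify_in_c(...)	#__VA_ARGS__
#define stringify_in_c(...)	__stringify_in_c(__VA_ARGS__) " "

/* With these, the patched line preprocesses (on 64-bit) to: */
#define smp_rmb()	__asm__ __volatile__ ("lwsync " : : : "memory")

So on CPUs with lwsync, smp_rmb() emits a single lwsync instruction
instead of the full sync that rmb() must keep for noncacheable
accesses.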