| | | |
|---|---|---|
| author | Ivan Kokshaysky <ink@jurassic.park.msu.ru> | 2009-01-15 13:51:19 -0800 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-01-15 16:39:40 -0800 |
| commit | 5f7dc5d75076fd1c1fc6bc09f2467509d20db24a (patch) | |
| tree | c105f8463607381acd7d02bdda75641b3f497e37 | |
| parent | 2f88d151cb8e73587983d7feccd70672ff6730fe (diff) | |
alpha: fix RTC on marvel
Unlike other alphas, marvel doesn't have real PC-style CMOS clock hardware
- RTC accesses are emulated via PAL calls. Unfortunately, for an unknown
reason these calls work only on CPU #0. So the current implementation
routes every CMOS_READ/CMOS_WRITE from an arbitrary CPU to CPU #0 via an
IPI. However, this cannot work with the standard get/set_rtc_time()
functions, where a whole series of CMOS accesses is performed with
interrupts disabled - and waiting for an IPI to complete while local
interrupts are disabled is not allowed, since it can deadlock. A sketch
of the failing pattern follows.
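For illustration only, a minimal sketch of the per-access scheme
described above - the names here are hypothetical, not the actual
marvel code:

```c
#include <linux/smp.h>
#include <linux/mc146818rtc.h>	/* CMOS_READ() */

/* Hypothetical sketch: every single CMOS byte access is bounced
 * to CPU #0 with its own IPI. */
struct marvel_cmos_byte {
	unsigned long addr;
	unsigned char val;
};

static void __cmos_read_on_cpu0(void *info)
{
	struct marvel_cmos_byte *b = info;

	b->val = CMOS_READ(b->addr);	/* PAL-emulated; works only on CPU #0 */
}

static unsigned char marvel_cmos_read(unsigned long addr)
{
	struct marvel_cmos_byte b = { .addr = addr };

	/*
	 * smp_call_function_single(..., wait = 1) must not be called
	 * with interrupts disabled, but the generic get/set_rtc_time()
	 * helpers do all their CMOS accesses under the irq-disabling
	 * rtc_lock - so this per-access trampoline cannot be used
	 * from there.
	 */
	smp_call_function_single(0, __cmos_read_on_cpu0, &b, 1);
	return b.val;
}
```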
Solved by making the IPI call cover the entire get/set_rtc_time()
functions, not each individual CMOS access. This is also a lot more
efficient: one IPI round trip per RTC operation instead of one per
CMOS register access.
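A minimal sketch of the fixed shape, reusing the illustrative names
from above (the actual patch does this in arch/alpha/kernel/sys_marvel.c,
wrapping the renamed __get_rtc_time/__set_rtc_time helpers from
asm-generic/rtc.h):

```c
#include <linux/smp.h>
#include <asm-generic/rtc.h>	/* __get_rtc_time(), struct rtc_time */

struct marvel_rtc_access_info {
	unsigned long function;		/* 0 = read, 1 = write */
	struct rtc_time *time;
	int retval;
};

static void __marvel_access_rtc(void *info)
{
	struct marvel_rtc_access_info *access = info;

	/* Runs entirely on CPU #0; interrupts are only disabled here,
	 * inside the generic helper, not around the IPI wait. */
	if (access->function)
		access->retval = __set_rtc_time(access->time);
	else
		access->retval = __get_rtc_time(access->time);
}

static unsigned int marvel_get_rtc_time(struct rtc_time *time)
{
	struct marvel_rtc_access_info access = {
		.function = 0,
		.time = time,
	};

	if (smp_processor_id() == 0)
		__marvel_access_rtc(&access);	/* already on CPU #0 */
	else
		smp_call_function_single(0, __marvel_access_rtc,
					 &access, 1);	/* one IPI, wait */
	return access.retval;
}
```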
The patch is largely based on the code from Jay Estabrook.
My changes:
- tweak asm-generic/rtc.h by adding a couple of #defines to
avoid massive code duplication in arch/alpha/include/asm/rtc.h
(see the sketch after this list);
- sys_marvel.c: fix get/set_rtc_time() return values (Jay's FIXMEs).
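A sketch of the header hook mentioned in the first item - the guard
pattern is my reading of the description, not a verbatim copy of the
diff. The generic header exposes its helpers under double-underscore
names and only binds the public names if the architecture hasn't
already claimed them:

```c
/* include/asm-generic/rtc.h: expose the generic helpers under new
 * names and let an architecture override the public ones. */
unsigned int __get_rtc_time(struct rtc_time *time);	/* existing generic code */
int __set_rtc_time(struct rtc_time *time);		/* existing generic code */

#ifndef get_rtc_time
#define get_rtc_time	__get_rtc_time
#endif
#ifndef set_rtc_time
#define set_rtc_time	__set_rtc_time
#endif
```

The alpha side can then redirect the public names to the IPI-wrapped
marvel versions before pulling in the generic header, instead of
duplicating it:

```c
/* arch/alpha/include/asm/rtc.h: plug in the marvel wrappers. */
#define get_rtc_time	marvel_get_rtc_time
#define set_rtc_time	marvel_set_rtc_time
#include <asm-generic/rtc.h>
```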
NOTE: this fixes *only* LIB_RTC drivers. The legacy (CONFIG_RTC) driver
won't work on marvel. Actually, I think we should just disable
CONFIG_RTC on alpha (maybe in 2.6.30?), like most other arches - AFAIK,
all modern distributions use LIB_RTC anyway.
Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>