alpha: fix RTC on marvel

Unlike other alphas, marvel doesn't have real PC-style CMOS clock hardware
- RTC accesses are emulated via PAL calls.  Unfortunately, for unknown
reasons these calls work only on CPU #0, so the current implementation
routes every CMOS_READ/CMOS_WRITE issued on another CPU to CPU #0 via an
IPI.  This obviously cannot work with the standard get/set_rtc_time()
functions, which perform a whole series of CMOS accesses with interrupts
disabled.
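
For reference, a heavily trimmed sketch of what the generic helper in
asm-generic/rtc.h does (update-in-progress handling and BCD conversion
omitted); every CMOS access below runs under spin_lock_irqsave(), i.e.
with interrupts off, so it cannot be bounced to CPU #0 through an IPI:

    static unsigned int get_rtc_time(struct rtc_time *time)
    {
            unsigned long flags;

            spin_lock_irqsave(&rtc_lock, flags);     /* IRQs off from here on */
            time->tm_sec  = CMOS_READ(RTC_SECONDS);  /* would need an IPI on marvel */
            time->tm_min  = CMOS_READ(RTC_MINUTES);
            time->tm_hour = CMOS_READ(RTC_HOURS);
            /* ... day/month/year reads and BCD conversion omitted ... */
            spin_unlock_irqrestore(&rtc_lock, flags);

            return RTC_24H;
    }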

Solved by issuing the IPI call for the entire get/set_rtc_time() function
rather than for each individual CMOS access, which is also a lot more
efficient.
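
A minimal sketch of the per-call wrapper (illustrative names; it assumes
the generic helper is reachable as __get_rtc_time(), which is what the
asm-generic/rtc.h tweak below provides, and it hard-codes CPU #0 for
brevity, whereas real code would target the boot CPU):

    struct marvel_rtc_arg {                  /* illustrative argument block */
            struct rtc_time *time;
            unsigned int retval;
    };

    static void marvel_ipi_get_rtc_time(void *data)
    {
            struct marvel_rtc_arg *arg = data;

            /* Runs on CPU #0, where the PAL-emulated CMOS actually works. */
            arg->retval = __get_rtc_time(arg->time);
    }

    static unsigned int marvel_get_rtc_time(struct rtc_time *time)
    {
            struct marvel_rtc_arg arg = { .time = time };

            if (smp_processor_id() == 0)
                    return __get_rtc_time(time);

            /* One IPI for the whole operation, issued with interrupts still
             * enabled, instead of one IPI per CMOS_READ/CMOS_WRITE. */
            smp_call_function_single(0, marvel_ipi_get_rtc_time, &arg, 1);
            return arg.retval;
    }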

The patch is largely based on the code from Jay Estabrook.
My changes:
- tweak asm-generic/rtc.h by adding a couple of #defines to avoid
  massive code duplication in arch/alpha/include/asm/rtc.h (see the
  sketch after this list);
- sys_marvel.c: fix get/set_rtc_time() return values (Jay's FIXMEs).
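
The asm-generic/rtc.h tweak boils down to giving the generic helpers
double-underscore names and turning the old names into overridable
aliases, roughly along these lines:

    /* asm-generic/rtc.h: the generic implementations become
     * __get_rtc_time() and __set_rtc_time(); an architecture can map the
     * public names to its own functions before including this header,
     * without duplicating the generic CMOS code. */
    #ifndef get_rtc_time
    # define get_rtc_time   __get_rtc_time
    #endif
    #ifndef set_rtc_time
    # define set_rtc_time   __set_rtc_time
    #endif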

NOTE: this fixes *only* LIB_RTC drivers.  The legacy (CONFIG_RTC) driver
won't work on marvel.  Actually, I think we should just disable CONFIG_RTC
on alpha (maybe in 2.6.30?), like most other arches - AFAIK, all modern
distributions use LIB_RTC anyway.

Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author:     Ivan Kokshaysky
Date:       2009-01-15 13:51:19 -08:00
Committer:  Linus Torvalds
Parent:     2f88d151cb
Commit:     5f7dc5d750

9 changed files with 98 additions and 17 deletions

--- a/arch/alpha/include/asm/rtc.h
+++ b/arch/alpha/include/asm/rtc.h
@@ -1,9 +1,15 @@
 #ifndef _ALPHA_RTC_H
 #define _ALPHA_RTC_H
 
-/*
- * Alpha uses the default access methods for the RTC.
- */
+#if defined(CONFIG_ALPHA_GENERIC)
+# define get_rtc_time		alpha_mv.rtc_get_time
+# define set_rtc_time		alpha_mv.rtc_set_time
+#else
+# if defined(CONFIG_ALPHA_MARVEL) && defined(CONFIG_SMP)
+#  define get_rtc_time		marvel_get_rtc_time
+#  define set_rtc_time		marvel_set_rtc_time
+# endif
+#endif
 
 #include <asm-generic/rtc.h>
 