path: root/arch/ia64/kernel
Age  Commit message  Author
2005-06-21  [IA64] printk needs KERN_INFO arch/ia64/kernel/smp.c  (Christophe Lucas)
printk() calls should include an appropriate KERN_* constant. Signed-off-by: Christophe Lucas <clucas@rotomalug.org> Signed-off-by: Domen Puncer <domen@coderock.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
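For illustration only (this is not the actual hunk from the commit), the change amounts to prefixing each message with a log-level constant so the line is classified correctly by the log subsystem:

    /* before: no level, so the message falls back to the default log level */
    printk("CPU %d is now offline\n", cpu);

    /* after: explicitly informational */
    printk(KERN_INFO "CPU %d is now offline\n", cpu);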
2005-06-20  [IA64] Drop spurious paren in entry.h  (David Mosberger-Tang)
The latest assembler catches this typo. (reported by Jim Wilson). Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-15  Auto merge with /home/aegl/GIT/linus  (Tony Luck)
2005-06-09  [IA64] Fix race condition in the rt_sigprocmask fastcall  (Christoph Lameter)
current->blocked will be set to the value of current->thread_info->flags if the cmpxchg to update thread_info->flags fails. For performance reasons the store into current->blocked was placed in the cmpxchg loop. However, the cmpxchg overwrites the register holding the value to be stored. In the rare case of a retry the value of thread_info->flags will be written into current->blocked. The fix is to use another register so that the register containing the current->blocked value is not overwritten. Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
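A minimal C sketch of the pattern behind this fix (the flag and variable names are illustrative; the real code is the hand-written fastcall assembly): the value destined for current->blocked must live in a variable that the cmpxchg retry loop never reuses as scratch.

    struct thread_info *ti = current_thread_info();
    unsigned long old_flags, seen;
    sigset_t newset;                 /* value destined for current->blocked */

    sigfillset(&newset);             /* placeholder: built from the syscall args */
    do {
            old_flags = ti->flags;
            seen = cmpxchg(&ti->flags, old_flags, old_flags & ~_TIF_SIGPENDING);
    } while (seen != old_flags);     /* lost a race with another flag update: retry */

    current->blocked = newset;       /* kept in its own variable, so a retry
                                        above can no longer overwrite it */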
2005-06-08  [PATCH] ia64: fix floating-point preemption problem  (Peter Chubb)
There've been reports of problems with CONFIG_PREEMPT=y and the high floating-point partition. This is caused by the possibility of preemption and rescheduling on a different processor while saving or restoring the high partition. The only places where the FPU state is touched are in ptrace, in switch_to(), and when handling a floating-point exception. In switch_to() preemption is off. So it's only in trap.c and ptrace.c that we need to prevent preemption. Here is a patch that adds commentary to make the conditions clear, and adds appropriate preempt_{en,dis}able() calls to make it so. In trap.c I use preempt_enable_no_resched(), as we're about to return to user space where the preemption flag will be checked anyway. Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
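A simplified C sketch of the kind of guard being added (helper names loosely follow the ia64 ones; this is not the literal patch):

    /* Saving the high floating-point partition must not be interrupted by a
     * preemption that migrates us to another CPU half-way through. */
    static void save_high_fp(struct task_struct *task)
    {
            preempt_disable();                    /* stay on this CPU */
            if (ia64_is_local_fpu_owner(task))    /* is f32-f127 live in hardware? */
                    ia64_flush_fph(task);         /* spill it into the task struct */
            preempt_enable();                     /* the trap.c path would use
                                                     preempt_enable_no_resched() */
    }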
2005-06-08  [IA64] Extract correct break number for break.b  (Keith Owens)
break.b does not store the break number in cr.iim, instead it stores 0, which makes all break.b instructions look like BUG(). Extract the break number from the instruction itself. Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-08  [IA64] Update comment to describe modes set in default control register.  (Tony Luck)
Christian Hildner pointed out that the comment did not match what the code does in cpu_init() when we set up the default control register. Patch based on suggestions from Ken Chen. Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-08  [IA64] Module gp must point to valid memory  (Keith Owens)
Some bits of the kernel assume that gp always points to valid memory, in particular PHYSICAL_MODE_ENTER() assumes that both gp and sp are valid virtual addresses with associated physical pages. The IA64 module loader puts gp well past the end of the module, with no physical backing. Offsets on gp are still valid, but physical mode addressing breaks for modules. Ensure that gp always falls within the module body. Also ensure that gp is 8 byte aligned. Signed-off-by: Keith Owens <kaos@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-01  [IA64] Cleanup compile warnings for ski config  (Peter Chubb)
The attached patch cleans up a compilation warning when ACPI is turned off (i.e., when compiling for the Ski simulator). Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-31  [IA64] Use "PER_CPU" form of EXPORT macro  (Tony Luck)
I was gently reminded that there are per-cpu forms of the EXPORT_SYMBOL macros. Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-26  [IA64] sys_mmap doesn't follow posix.1 when parameter len=0  (Zhang Yanmin)
In the IA64 kernel, sys_mmap calls do_mmap2, and do_mmap2 returns addr when len=0, which means the mmap syscall succeeds. POSIX.1 says: The mmap() function shall fail if: [EINVAL] The value of len is zero. Here is a patch to fix it. Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Acked-by: David Mosberger <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
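The guard itself is tiny; a sketch of what it looks like near the top of do_mmap2() (exact placement in the real patch may differ):

    if (len == 0)
            return -EINVAL;    /* POSIX.1: mmap() shall fail if len is zero */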
2005-05-18  [IA64] initialize spinlock pfm_alt_install_check  (Tony Luck)
I applied the penultimate version of the perfmon patch, which didn't have the initialization of the new spinlock that was added. Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-18  [IA64] alternate perfmon handler  (Tony Luck)
Patch from Charles Spirakis. Some Linux customers want to optimize their applications on the latest hardware but are not yet willing to upgrade to the latest kernel. This patch provides a way to plug in an alternate, basic, GPL'ed PMU subsystem to help with their monitoring needs or for specialty work. It can also be used in case of serious unexpected bugs in perfmon. Mutual exclusion between the two subsystems is guaranteed, so no conflict can arise from both subsystems being present. Acked-by: Stephane Eranian <eranian@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-17  Merge with temp tree to get David's gdb inferior calls patch  (Tony Luck)
2005-05-17  [IA64] Fix convert_to_non_syscall() so gdb inferior calls work again  (David Mosberger-Tang)
Fix convert_to_non_syscall() so it arranges for the kernel to be left via ia64_leave_kernel() rather than ia64_leave_syscall(). The latter no longer tolerates being called with pSys=0 and pNonSys=1. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-17  [IA64-SGI] cpe interrupts are not being enabled.  (Russ Anderson)
acpi_request_vector() is called in ia64_mca_init() to get the cpe_vector. The problem is that acpi_request_vector() looks in platform_intr_list[] to get the vector, but platform_intr_list[] is not initialized with a valid vector until later (in sn_setup()). Without a valid vector the code defaults to polling mode. This patch moves the call to acpi_request_vector() from ia64_mca_init() to ia64_mca_late_init(), which is after platform_intr_list[] is initialized. Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-17  [IA64] Correct convert_to_non_syscall()  (David Mosberger-Tang)
convert_to_non_syscall() has the same problem that unwind_to_user() used to have. Fix it likewise. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-10  [IA64] Avoid .spillpsp directive in handcoded assembly  (David Mosberger-Tang)
Some time ago, GAS was fixed to bring the .spillpsp directive in line with the Intel assembler manual (there was some disagreement as to whether or not there is a built-in 16-byte offset). Unfortunately, there are two places in the kernel where this directive is used in handwritten assembly files and those of course relied on the "buggy" behavior. As a result, when using a "fixed" assembler, the kernel picks up the UNaT bits from the wrong place (off by 16) and randomly sets NaT bits on the scratch registers. This can be noticed easily by looking at a coredump and finding various scratch registers with unexpected NaT values. The patch below fixes this by using the .spillsp directive instead, which works correctly no matter what assembler is in use. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-09  [IA64] fix "section mismatch" compile-time-error  (David Mosberger-Tang)
I noticed this typo when trying to compile a kernel which had CONFIG_HOTPLUG turned off. In that case, __devinit is no longer a no-op and the compiler then detects a section-conflict. Fix by using __devinitdata instead of __devinit. Same patch also submitted by Darren Williams to fix compilation error using sim_defconfig (which has CONFIG_HOTPLUG=n). Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Darren Williams <dsw@gelato.unsw.edu.au> Signed-off-by: Tony Luck <tony.luck@intel.com>
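Illustrative only (the identifier is hypothetical): __devinit annotates init code and __devinitdata annotates init data, and with CONFIG_HOTPLUG=n they land in different sections, so a variable tagged __devinit triggers the section conflict:

    static int probe_result __devinitdata;    /* was: static int probe_result __devinit; */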
2005-05-06  [IA64] Fix stack placement when INIT hits in kernel mode.  (David Mosberger-Tang)
Without this patch, the stack is placed _below_ the current task structure, which is risky at best. Tony, I think this patch needs to go into 2.6.12, since it fixes a real bug. Without it, INIT may cause secondary errors, which would be most unpleasant. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-05  [IA64] Merge audit fix for fsyscalls with syscall-optimizations  (David Mosberger-Tang)
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-05  Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git  (David Woodhouse)
2005-05-03  [IA64] Fix two warnings introduced by perfmon patches.  (Tony Luck)
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03  [IA64] another perfmon fix (take2)  (stephane eranian)
- pfm_context_load(): change return value from EINVAL to EBUSY when context is already loaded. - pfm_check_task_state(): pass test if context state is MASKED. It is safe to give access on PFM_CTX_MASKED because the PMU state (PMD) is stable and saved in software state. This helps multiplexing programs such as the example given in libpfm-3.1. Signed-off-by: stephane eranian <eranian@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
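A rough sketch of the two changes (the PFM_CTX_* state names are perfmon's; surrounding code is elided and the exact tests in the real patch may differ):

    /* pfm_context_load(): a second load attempt now reports "busy",
     * not "invalid argument" */
    if (ctx->ctx_state != PFM_CTX_UNLOADED)
            return -EBUSY;                   /* was -EINVAL */

    /* pfm_check_task_state(): a MASKED context passes the check, since its
     * PMD values are stable in the saved software state */
    if (state == PFM_CTX_MASKED)
            return 0;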
2005-05-03  [IA64] perfmon & PAL_HALT again  (Stephane Eranian)
The pmu_active test is based on the value of PSR.up. THIS IS THE PROBLEM: it does not take into account the lazy restore logic, which works as follows (simplified):

context switch out:
        save PMDs
        clear psr.up
        release ownership

context switch in:
        if (ctx->last_cpu == smp_processor_id() && ctx->cpu_activation == cpu_activation) {
                set psr.up
                return
        }
        restore PMD
        restore PMC
        ctx->last_cpu = smp_processor_id();
        ctx->activation = ++cpu_activation;
        set psr.up

The key here is that on context switch out we clear psr.up, and on context switch in we check whether anybody else has used the PMU on that processor since we last ran. If nobody has, we assume the PMD/PMC registers are still ours and simply reactivate. The Caliper problem is that between the moment we context switch out and the moment we come back, nobody effectively used the PMU BUT the processor went idle. Normally this would have no consequence, but PAL_HALT does alter the PMU registers. In default_idle(), the test on psr.up is not strong enough to cover this case, so we go into PAL, which trashes the PMU registers. When we come back we falsely assume that this is our state, yet it is corrupted. Very nasty indeed. To avoid the problem it is necessary to forbid going into PAL_HALT as soon as perfmon installs some valid state in the PMU registers; this happens when an application attaches a context to a thread or CPU. It is not enough to check the psr/dcr bits. Hence the attached patch: it adds a callback in process.c to modify the condition for entering PAL on idle. Basically, it is now conditional on pal_halt=1 AND perfmon saying it is okay. Signed-off-by: Tony Luck <tony.luck@intel.com>
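A sketch of the resulting idle-loop condition (the flag and helper names here are illustrative of the description above, not the literal patch):

    /* perfmon side: re-evaluated whenever a context is attached or detached */
    can_do_pal_halt = pal_halt && !pfm_pmu_state_installed();

    /* idle loop side (default_idle) */
    if (can_do_pal_halt)
            safe_halt();       /* may drop into PAL_HALT_LIGHT */
    else
            cpu_relax();       /* stay out of PAL so live PMU state survives */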
2005-05-03  [IA64] MCA recovery improvements  (Russ Anderson)
Jack Steiner uncovered some opportunities for improvement in the MCA recovery code.

1) Set bsp to save registers on the kernel stack.
2) Disable interrupts while in the MCA recovery code.
3) Change the way the user process is killed, to avoid a panic in schedule.

Testing shows that these changes make the recovery code much more reliable with the 2.6.12 kernel. Signed-off-by: Russ Anderson <rja@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03  [IA64] fix ia64 syscall auditing  (David Woodhouse)
Attached is a patch against David's audit.17 kernel that adds checks for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and signal handling code paths. The patch enables auditing of system calls set up via fsys_bubble_down, as well as ensuring that audit_syscall_exit() is called on return from sigreturn. Neglecting to check for TIF_SYSCALL_AUDIT at these points results in incorrect information in audit_context, causing frequent system panics when system call auditing is enabled on an ia64 system. I have tested this patch and have seen no problems with it. [Original patch from Amy Griffis ported to current kernel by David Woodhouse] From: Amy Griffis <amy.griffis@hp.com> From: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Chris Wright <chrisw@osdl.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03  [IA64] reduce cacheline bouncing in cpu_idle_wait  (Zwane Mwaikambo)
Andi noted that during normal runtime cpu_idle_map is bounced around a lot, and occasionally at a higher frequency than the timer interrupt wakeup which we normally exit pm_idle from. So switch to a percpu variable. I didn't move things to the slow path because it would involve adding scheduler code to wake up the idle thread on the cpus we're waiting for. Signed-off-by: Zwane Mwaikambo <zwane@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Tony Luck <tony.luck@intel.com>
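A sketch of the switch from one shared map to a per-CPU flag (names are illustrative; the real patch's bookkeeping differs in detail):

    static DEFINE_PER_CPU(unsigned int, cpu_idle_state);

    /* idle loop: read and clear only this CPU's flag, so the cache line is not
     * bounced between processors the way a single shared variable would be */
    if (__get_cpu_var(cpu_idle_state))
            __get_cpu_var(cpu_idle_state) = 0;

    /* cpu_idle_wait(): request attention by setting each CPU's private flag */
    for_each_online_cpu(cpu)
            per_cpu(cpu_idle_state, cpu) = 1;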
2005-05-03  [IA64] use common pxm function  (Alex Williamson)
This patch simplifies a couple places where we search for _PXM values in ACPI namespace. Thanks, Signed-off-by: Alex Williamson <alex.williamson@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03  [IA64] fix typos caught by new assembler  (David Mosberger-Tang)
Patch below fixes 3 trivial typos which are caught by the new assembler (v2.169.90). Please apply. [Note: fix to memcpy that was also part of this patch was separately applied from patches by H.J. and Andreas ... so the delta here only has the other two fixes. -Tony] Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-05-03  Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git  (David Woodhouse)
2005-05-01  [PATCH] convert that currently tests _NSIG directly to use valid_signal()  (Jesper Juhl)
Convert most of the current code that uses _NSIG directly to instead use valid_signal(). This avoids gcc -W warnings and off-by-one errors. Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
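The conversion is mechanical; a representative before/after (the surrounding function is hypothetical):

    #include <linux/signal.h>            /* valid_signal() */

    /* before: open-coded range check, prone to off-by-one errors and -W warnings */
    if (sig > _NSIG)
            return -EINVAL;

    /* after */
    if (!valid_signal(sig))
            return -EINVAL;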
2005-05-01  [PATCH] consolidate sys_shmat  (Stephen Rothwell)
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-29  [PATCH] fix ia64 syscall auditing  (Amy Griffis)
Attached is a patch against David's audit.17 kernel that adds checks for the TIF_SYSCALL_AUDIT thread flag to the ia64 system call and signal handling code paths. The patch enables auditing of system calls set up via fsys_bubble_down, as well as ensuring that audit_syscall_exit() is called on return from sigreturn. Neglecting to check for TIF_SYSCALL_AUDIT at these points results in incorrect information in audit_context, causing frequent system panics when system call auditing is enabled on an ia64 system. Signed-off-by: Amy Griffis <amy.griffis@hp.com> Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2005-04-29  [AUDIT] Don't allow ptrace to fool auditing, log arch of audited syscalls.
We were calling ptrace_notify() after auditing the syscall and arguments, but the debugger could have _changed_ them before the syscall was actually invoked. Reorder the calls to fix that. While we're touching every call to audit_syscall_entry(), we also make it take an extra argument: the architecture of the syscall which was made, because some architectures allow more than one type of syscall. Also add an explicit success/failure flag to audit_syscall_exit(), for the benefit of architectures which return that in a condition register rather than only returning a single register. Change the type of the syscall return value to 'long', not 'int'. Signed-off-by: David Woodhouse <dwmw2@infradead.org>
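A sketch of the reordered syscall-entry path (argument layout is approximate and the helper names are illustrative; AUDIT_ARCH_IA64 stands in for whatever a given architecture reports):

    /* 1. debugger hook first: it may rewrite the syscall number or arguments */
    if (test_thread_flag(TIF_SYSCALL_TRACE))
            syscall_trace();

    /* 2. only then record what will actually be executed */
    if (unlikely(current->audit_context))
            audit_syscall_entry(current, AUDIT_ARCH_IA64,
                                regs->r15,              /* syscall number on ia64 */
                                arg0, arg1, arg2, arg3);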
2005-04-27  [IA64] need r29=psr *after* rsm psr.i  (David Mosberger-Tang)
Yanmin Zhang pointed out a sequence problem when saving the psr. David Mosberger provided this patch (which gave up a cycle). Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] use srlz.d instead of srlz.i in ia64_leave_kernel()  (David Mosberger-Tang)
This patch switches the srlz.i in ia64_leave_kernel() to srlz.d. As per the architecture manual, the former is needed only to ensure that the clearing of PSR.IC is seen by the VHPT for subsequent instruction fetches. However, since the remainder of the code (up to and including the RFI instruction) is mapped by a pinned TLB entry, there is no chance of an iTLB miss and we don't care whether or not the VHPT sees PSR.IC cleared. Since srlz.d is substantially cheaper than srlz.i, this should shave a few cycles off the interrupt path (unverified though; I'm not set up to measure this at the moment). Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Annotate fsys_bubble_down() with McKinley dispatch info.  (David Mosberger-Tang)
This patch changes comments & formatting only. There is no code change. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Reschedule fsys_bubble_down().  (David Mosberger-Tang)
Improvements come from eliminating srlz.i, not scheduling AR/CR reads too early (while there are others still pending), scheduling the backing-store switch as well as possible, and splitting the BBB bundle into a MIB/MBB pair.

Why is it safe to eliminate the srlz.i? Observe that we used to clear bits ~PSR_PRESERVED_BITS in PSR.L. Since PSR_PRESERVED_BITS==PSR.{UP,MFL,MFH,PK,DT,PP,SP,RT,IC}, we ended up clearing PSR.{BE,AC,I,DFL,DFH,DI,DB,SI,TB}. However:

PSR.BE : already turned off in __kernel_syscall_via_epc()
PSR.AC : don't care (kernel normally turns PSR.AC on)
PSR.I  : already turned off by the time fsys_bubble_down gets invoked
PSR.DFL: always 0 (kernel never turns it on)
PSR.DFH: don't care --- kernel never touches f32-f127 on its own initiative
PSR.DI : always 0 (kernel never turns it on)
PSR.SI : always 0 (kernel never turns it on)
PSR.DB : don't care --- kernel never enables kernel-level breakpoints
PSR.TB : must be 0 already; if it wasn't zero on entry to __kernel_syscall_via_epc, the branch to fsys_bubble_down will trigger a taken branch; the taken-trap-handler then converts the syscall into a break-based system-call.

In other words: all the bits we're clearing are either 0 already or are don't-cares! Thus we don't have to write PSR.L at all and we don't have to do a srlz.i either. Good for another ~20 cycle improvement on EPC-based heavy-weight syscalls. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Annotate __kernel_syscall_via_epc() with McKinley dispatch info.  (David Mosberger-Tang)
Two other very minor changes: use "mov.i" instead of "mov" for reading ar.pfs (for clarity; doesn't affect the code at all). Also, predicate the load of r14 for consistency. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Reschedule __kernel_syscall_via_epc().  (David Mosberger-Tang)
Avoid some stalls, which is good for about 2 cycles when invoking a light-weight handler. When invoking a heavy-weight handler, this helps by about 7 cycles, with most of the improvement coming from the improved branch-prediction achieved by splitting the BBB bundle into two MIB bundles. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Reschedule break_fault() for better performance.  (David Mosberger-Tang)
This patch reorganizes break_fault() to optimistically assume that a system call is being performed from user space (which is almost always the case). If it turns out that (a) we're not being called due to a system call or (b) we're being called from within the kernel, we fix up the no-longer-valid assumptions in non_syscall() and .break_fixup(), respectively. With this approach, there are 3 major phases:

- Phase 1: Read various control & application registers, in particular the current task pointer from AR.K6.
- Phase 2: Do all memory loads (load system-call entry, load current_thread_info()->flags, prefetch kernel register-backing store) and switch to the kernel register stack.
- Phase 3: Call ia64_syscall_setup() and invoke the syscall handler.

Good for 26-30 cycles of improvement on the break-based syscall path. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] In ia64_leave_syscall(), fix comments and whitespace only.  (David Mosberger-Tang)
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Schedule ia64_leave_syscall() to read ar.bsp earlier  (David Mosberger-Tang)
Reschedule code to read ar.bsp as early as possible. To enable this, don't bother clearing some of the registers when we're returning to kernel stacks. Also, instead of trying to support the pNonSys case (which makes no sense), do a bugcheck instead (with break 0). Finally, remove a clear of r14 which is a left-over from the previous patch. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] In syscall-entry, use st8 instead of stf8 to clear pt_regs.r8  (David Mosberger-Tang)
Using stf8 seemed like a clever idea at the time, but stf8 forces the cache-line to be invalidated in the L1D (if it happens to be there already). This patch eliminates a guaranteed L1D cache-miss and, by itself, is good for a 1-2 cycle improvement for heavy-weight syscalls. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] On return from syscall, hint b7 with __kernel_syscall_via_epc().  (David Mosberger-Tang)
Why is this a good idea? Clearing b7 to 0 is guaranteed to do us no good, and writing it with __kernel_syscall_via_epc() yields a 6 cycle improvement _if_ the application performs another EPC-based system-call without overwriting b7, which is not all that uncommon. Well worth the minimal cost of 1 bundle of code. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Schedule fp-clearing insns at least 6 cycles after reading ar.bsp.  (David Mosberger-Tang)
Decreases syscall overhead by approximately 6 cycles. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] Use dynamic prediction for RSE-clearing branches.  (David Mosberger-Tang)
This by itself is good for a 1-2 cycle speed up. Effect is bigger when combined with the later patches. Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-27  [IA64] __ia64_syscall() is no longer used anywhere in the kernel. Remove it.  (David Mosberger-Tang)
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-04-25  [IA64] iosapic.c: typo ... s/spin_unlock_irq/spin_unlock/  (Kenji Kaneshige)
The vector sharing patch had a typo ... it mismatched a spin_lock() with a spin_unlock_irq(). Fix from Kenji Kaneshige. Signed-off-by: Tony Luck <tony.luck@intel.com>