|
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
add comments to the FPU structures of processor.h.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
setup_trampoline() looks very similar between architectures, and this
patch unifies them. The i386 version allocates bootmem memory, while
the x86_64 version uses a fixed address.
In this patch, we initialize the global trampoline_base to the x86_64 version,
and i386 allocation can later override it.
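A minimal sketch of that arrangement, assuming the symbol names involved (SMP_TRAMPOLINE_BASE, smp_alloc_memory and the bootmem call are assumptions based on this description, not verified code):
    /* common code: default to the fixed x86_64-style physical address */
    unsigned char *trampoline_base = __va(SMP_TRAMPOLINE_BASE);
    #ifdef CONFIG_X86_32
    /* i386 can later override the default with bootmem-allocated memory */
    void __init smp_alloc_memory(void)
    {
        trampoline_base = alloc_bootmem_low_pages(PAGE_SIZE);
    }
    #endif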
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
In here, they can serve both architectures
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
They are now equal, and are moved to a common file
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
definitions that are inside CONFIG_HOTPLUG_CPU in
the arch-specific smp*.h files are moved to common
header
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
And move its extern definition to smp.h, the common header
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
move definitions that are now equal in type from
smpboot_{32,64}.c to smpboot.c
cpu_callin_map is put temporarily in smp_64.h (already
exists in smp_32.h), and will soon be merged.
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
so they can have the same type as x86_64
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
it is already defined in smp.h
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
function definition is moved to common header.
x86_64 version is now called native_smp_send_stop
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
In x86_64, hlt always works. In i386, we'll query the cpuinfo associated
with this cpu
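Roughly, the check described above looks like this (a sketch; the helper name hlt_works and the hlt_works_ok field are assumptions based on the changelog):
    static inline int hlt_works(int cpu)
    {
    #ifdef CONFIG_X86_32
        /* i386: some CPUs/boards have broken hlt, recorded in cpuinfo */
        return cpu_data(cpu).hlt_works_ok;
    #else
        /* x86_64: hlt always works */
        return 1;
    #endif
    }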
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
this patch moves prefill_possible_map() to smpboot.c
Right now it is x86_64-specific, but nothing intrinsically
prevents it from being used by i386
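The function is essentially a loop over the possible-CPU count; a hedged sketch (the sizing policy is simplified, variable names follow the surrounding entries):
    static void __init prefill_possible_map(void)
    {
        int i, possible = num_processors + disabled_cpus;

        for (i = 0; i < possible; i++)
            cpu_set(i, cpu_possible_map);   /* mark CPU i as possible */
    }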
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
disabled_cpus is (up to now) an x86_64-only construction,
but its extern declaration can be moved to the common header anyway
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
definition is moved to common header. x86_64 version is now called
native_smp_cpus_done
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
definition is moved to common header. x86_64 version is now called
native_smp_prepare_cpus
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
definition is moved to common header. x86_64 version is now called
native_prepare_boot_cpu
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
function definition is moved to common header. x86_64 version
is now called native_cpu_up
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
definition is moved to common header, x86_64 function name
now is native_smp_call_function_mask
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
function definition is moved to common header, x86_64 version is now called
native_smp_send_reschedule
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
the smp_ops symbol is temporarily defined in smp_64.c, but it will soon
be unified
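The pattern behind this series is a struct of function pointers in the common smp.h plus thin inline wrappers; a sketch of the shape (field list abridged and approximate):
    struct smp_ops {
        void (*smp_prepare_boot_cpu)(void);
        void (*smp_prepare_cpus)(unsigned int max_cpus);
        void (*smp_cpus_done)(unsigned int max_cpus);
        void (*smp_send_stop)(void);
        void (*smp_send_reschedule)(int cpu);
        int (*cpu_up)(unsigned int cpu);
    };
    extern struct smp_ops smp_ops;

    static inline void smp_send_reschedule(int cpu)
    {
        /* dispatch to native_smp_send_reschedule (or a paravirt override) */
        smp_ops.smp_send_reschedule(cpu);
    }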
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
x86_64 will benefit from it
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
move extern definitions that are the same between smp_{32,64}.h
to smp.h
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
move extern function definitions that are the same between smp_{32,64}.h
to smp.h
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
this is the first step of integrating smp.h between x86_64
and i386
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Liu Pingfan noticed that switch_to() clobbers more registers than its
asm constraints specify.
We get away with this mostly due to luck: schedule(), by its nature,
only has 'local' state, which gets reloaded automatically.
Fix it nevertheless; we could hit this anytime.
It turns out that with the extra constraints gcc manages to make
schedule() even more compact:
text data bss dec hex filename
28626 684 2640 31950 7cce sched.o.before
28613 684 2640 31937 7cc1 sched.o.after
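For illustration, the kind of constraint fix involved (a generic example, not the actual switch_to() code): any register the asm body scribbles on must appear as an output or in the clobber list, otherwise gcc may keep a live value there across the statement:
    static inline int double_sketch(int x)
    {
        int ret;

        /* %ecx is used as scratch inside the asm, so it is listed as a
         * clobber; without it gcc could assume %ecx survives the asm */
        asm("movl %1, %%ecx\n\t"
            "addl %%ecx, %%ecx\n\t"     /* ecx = 2*x */
            "movl %%ecx, %0"
            : "=r" (ret)
            : "r" (x)
            : "ecx", "cc");
        return ret;
    }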
Reported-by: Liu Pingfan <kernelfans@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Make the code more readable and more hackable:
- use symbolic asm parameters
- use readable indentation
- add comments that explain the details
No code changed:
kernel/sched.o:
text data bss dec hex filename
28626 684 2640 31950 7cce sched.o.before
28626 684 2640 31950 7cce sched.o.after
md5:
2823d406c18b781975cdb2e7cfea0059 sched.o.before.asm
2823d406c18b781975cdb2e7cfea0059 sched.o.after.asm
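Symbolic asm parameters, as mentioned above, look like this in general (a generic example, not the switch_to() code itself):
    static inline void count_event(int *counter)
    {
        /* [cnt] names the operand, so the template can say %[cnt]
         * instead of the positional, harder-to-read %0 */
        asm volatile("lock; incl %[cnt]"
                     : [cnt] "+m" (*counter));
    }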
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
The comment says wmb is a nop, but it is implemented as a lock addl
below... Should it be compiled to a nop if we know we are running on
a "good" Intel CPU?
At least remove the confusing comment for now.
Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
redo commit cded932b75ab0a5f9181e.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
notrace signals that a function should not be traced. Most of the
time this is used by tracers to annotate code that cannot be
traced - it's in a volatile state (such as in user vdso context
or NMI context) or it's in the tracer internals.
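For reference, notrace boils down to a function attribute, approximately:
    /* kernel headers define it roughly like this */
    #define notrace __attribute__((no_instrument_function))

    /* usage: this function will not be instrumented by the tracer */
    static notrace void vdso_internal_helper(void)
    {
    }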
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
introduce test_cpu_cap() for raw access to the real CPU
capabilities as they are present in x86_capability.
(cpu_has() will shortcut certain tests during build-time)
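A sketch of the distinction (macro body approximate):
    /* raw test against the bits actually stored in x86_capability */
    #define test_cpu_cap(c, bit) \
        test_bit(bit, (unsigned long *)((c)->x86_capability))

    /* cpu_has() may short-circuit to 1 at build time for features the
     * kernel is configured to require; test_cpu_cap() never does that */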
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
A quad-core, 8-socket system will have APIC ID lifting: the APIC ID
range could be [4, 0x23], so apic_is_clustered_box() will count three
clusters, which is larger than 2, and the system is treated as a
clustered box. We then get:
Marking TSC unstable due to TSCs unsynchronized
even if the CPUs have X86_FEATURE_CONSTANT_TSC set.
This quick fix checks whether the CPU is from AMD. vSMP still needs
the clustered-box handling, so this patch makes sure that vSMP systems
are not bypassed.
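The shape of the check, as a hedged sketch (the wrapper function is illustrative; the vendor test and is_vsmp_box() are the pieces named in the changelog):
    static int clustered_box_sketch(int clusters)
    {
        /* APIC-ID lifting on multi-socket AMD boxes inflates the cluster
         * count, so do not treat them as clustered; vSMP boxes, however,
         * really do need the clustered handling */
        if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD && !is_vsmp_box())
            return 0;

        return clusters > 2;
    }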
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
e820_reserve_resources() could use insert_resource() instead of
request_resource(). Also move code_resource, data_resource, bss_resource,
and crashk_res out of e820_reserve_resources().
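The difference matters because insert_resource() can place a resource that overlaps already-registered ones (it becomes their parent), while request_resource() simply fails on conflict; a sketch of the loop, assuming the usual e820 map layout:
    void __init e820_reserve_resources_sketch(void)
    {
        int i;
        struct resource *res;

        res = alloc_bootmem_low(sizeof(struct resource) * e820.nr_map);
        for (i = 0; i < e820.nr_map; i++) {
            res->name = "E820 range";
            res->start = e820.map[i].addr;
            res->end = res->start + e820.map[i].size - 1;
            res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
            /* tolerant of overlaps that would make request_resource() fail */
            insert_resource(&iomem_resource, res);
            res++;
        }
    }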
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
basic style cleanup to flush out years of neglect:
- consistent indentation
- whitespace fixes
- consistent comments
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
x86: define outb_pic and inb_pic to stop using outb_p and inb_p
The delay between io port accesses to the PIC is now defined using outb_pic
and inb_pic. This fix provides the next step, using udelay(2) to define the
*PIC specific* timing requirements, rather than relying on bus-oriented
timing, which is not well calibrated.
Again, the primary reason for fixing this is to use a proper delay strategy,
and in particular to fix crashes that can result from using port 80 writes
on machines that have resources on port 80, such as the ENE chips used by
Quanta in laptops it designs and sells to, e.g. HP.
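The helpers are tiny wrappers; approximately:
    #include <linux/delay.h>

    static inline void outb_pic(unsigned char value, unsigned int port)
    {
        outb(value, port);
        /* the PIC needs time to settle between accesses; the 2us is
         * based on the device's timing needs, not on bus side effects */
        udelay(2);
    }

    static inline unsigned char inb_pic(unsigned int port)
    {
        unsigned char value = inb(port);

        udelay(2);
        return value;
    }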
Signed-off-by: David P. Reed <dpreed@reed.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
It is now too early for ioremap, so we use early_ioremap
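In practice the change is just a swap of mapping helpers at a point in boot where the normal ioremap machinery is not yet available; a small usage sketch (the function name and register read are hypothetical):
    static u32 __init read_probe_register(unsigned long phys_addr)
    {
        void __iomem *cfg;
        u32 val;

        /* too early for ioremap(): early_ioremap() works off fixmap slots */
        cfg = early_ioremap(phys_addr, 4);
        val = readl(cfg);
        early_iounmap(cfg, 4);      /* early mappings must be torn down */
        return val;
    }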
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ravikiran Thirumalai <kiran@scalemp.com>
Acked-by: Shai Fultheim <shai@scalemp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Roland McGrath <roland@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Roland McGrath <roland@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Change size to unsigned long, because callers and users all use unsigned long.
Also make bad_addr take an alignment parameter.
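A hypothetical sketch of what an alignment-aware helper of this kind looks like (not the kernel's exact code; the range check is invented for illustration):
    static unsigned long __init skip_bad_range(unsigned long addr,
                                               unsigned long size,
                                               unsigned long align)
    {
        /* if [addr, addr + size) overlaps a reserved range, bump past it */
        if (addr < 0x100000UL && addr + size > 0xa0000UL)
            addr = 0x100000UL;      /* e.g. skip the legacy hole */

        /* re-align the candidate after any adjustment */
        return (addr + align - 1) & ~(align - 1);
    }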
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
These new controls toggle experimental support for a new CPU feature,
the straightforward extension of largepages from the pmd level to the
pud level, which allows 1GB (kernel) TLBs instead of 2MB TLBs.
Turn it off by default, as this code has not been tested well enough yet.
Either the CONFIG_DIRECT_GBPAGES=y .config option or "gbpages" on the
boot line can be used to enable it. If it is enabled in the .config, the
"nogbpages" boot option disables it.
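The boot-line side is a pair of early parameters; roughly (handler names are an assumption):
    #ifdef CONFIG_DIRECT_GBPAGES
    int direct_gbpages = 1;     /* .config default: on */
    #else
    int direct_gbpages;         /* .config default: off */
    #endif

    static int __init parse_gbpages(char *arg)
    {
        direct_gbpages = 1;
        return 0;
    }
    early_param("gbpages", parse_gbpages);

    static int __init parse_nogbpages(char *arg)
    {
        direct_gbpages = 0;     /* overrides CONFIG_DIRECT_GBPAGES=y */
        return 0;
    }
    early_param("nogbpages", parse_nogbpages);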
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
people sometimes do crazy stuff like building really large static
arrays into their kernels or building allyesconfig kernels. Give
more space to the kernel and push modules up a bit: kernel has
512 MB and modules have 1.5 GB.
Should be enough for a few years ;-)
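In terms of layout constants, the split amounts to something like the following (numbers taken from the text above; the macro names and the 2 GB total are assumptions about how the region is carved up):
    /* the kernel image and modules share the top 2 GB of virtual space */
    #define KERNEL_IMAGE_SIZE   (512UL * 1024 * 1024)   /* 512 MB for the kernel */
    #define MODULES_LEN         (2UL * 1024 * 1024 * 1024 - KERNEL_IMAGE_SIZE)
                                                        /* 1.5 GB for modules */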
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
It's really a pretty ugly thing to need, and some day it will hopefully
be obviated by teaching gcc about the magic calling conventions for the
low-level system call code, but in the meantime we can at least add big
honking comments about why we need these insane and strange macros.
I took my comments from my version of the macro, but I ended up deciding
to just pick Roland's version of the actual code instead (with his
prettier syntax that uses vararg macros). Thus the previous two commits
that actually implement it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The prevent_tail_call() macro works around the problem of the compiler
clobbering argument words on the stack, which for asmlinkage functions
is the caller's (user's) struct pt_regs. The tail/sibling-call
optimization is not the only way that the compiler can decide to use
stack argument words as scratch space, which we have to prevent.
Other optimizations can do it too.
Until we have new compiler support to make "asmlinkage" binding on the
compiler's own use of the stack argument frame, we have to work around all
the manifestations of this issue that crop up.
More cases seem to be prevented by also keeping the incoming argument
variables live at the end of the function. This makes their original
stack slots attractive places to leave those variables, so the compiler
tends not to clobber them for something else. It's still no guarantee, but
it handles some observed cases that prevent_tail_call() did not.
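The mechanism boils down to an empty asm that ties the return value and the argument stack slots in as operands, so the compiler considers them live; a hedged sketch of the idea (not necessarily the exact kernel macros):
    /* feed the incoming argument's memory slot into a no-op asm, so gcc
     * treats that stack slot as still in use at the end of the function */
    #define keep_arg_live(ret, arg) \
        __asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), "m" (arg))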
Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The ASM_NOPs for the 64-bit kernel with CONFIG_GENERIC_CPU are broken
with the recent x86 nops merge. They were using GENERIC_NOPs,
which truncate the upper 32 bits of %rsi because of the missing
64-bit REX prefix.
For now, fall back to K8 NOPs for the generic-CPU ASM NOPs, similar
to the code before the wrong x86 nop merge.
This should resolve the crash seen by Ingo on a test-system:
BUG: unable to handle kernel paging request at 00000000d80d8ee8
IP: [<ffffffff802121af>] save_i387_ia32+0x61/0xd8
PGD b8e0067 PUD 51490067 PMD 0
Oops: 0000 [1] SMP
CPU 2
Modules linked in:
Pid: 3871, comm: distcc Not tainted 2.6.25-rc7-sched-devel.git-x86-latest.git #359
RIP: 0010:[<ffffffff802121af>] [<ffffffff802121af>] save_i387_ia32+0x61/0xd8
RSP: 0000:ffff81003abd3cb8 EFLAGS: 00010246
RAX: ffff810082e93400 RBX: 00000000ffc37f84 RCX: ffff8100d80d8ee0
RDX: 0000000000000000 RSI: 00000000d80d8ee0 RDI: ffff810082e93400
RBP: 00000000ffc37fdc R08: 00000000ffc37f88 R09: 0000000000000008
R10: ffff81003abd2000 R11: 0000000000000000 R12: ffff810082e93400
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff81011fb12dc0(0063) knlGS:00000000f7f1a6c0
CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
CR2: 00000000d80d8ee8 CR3: 0000000076922000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process distcc (pid: 3871, threadinfo ffff81003abd2000, task ffff8100d80d8ee0)
Stack: ffff8100bb670380 ffffffff8026de50 0000000000000118 0000000000000002
0000000000000002 ffff81003abd3e68 ffff81003abd3ed8 ffff81003abd3de8
ffff81003abd3d18 ffffffff80229785 ffff8100d80d8ee0 ffff810001041280
Call Trace:
[<ffffffff8026de50>] ? __generic_file_aio_write_nolock+0x343/0x377
[<ffffffff80229785>] ? update_curr+0x54/0x64
[<ffffffff80227cd3>] ? ia32_setup_sigcontext+0x125/0x1d2
[<ffffffff8022839f>] ? ia32_setup_frame+0x73/0x1a5
[<ffffffff8020b2a5>] ? do_notify_resume+0x1aa/0x7db
[<ffffffff8024ae8c>] ? getnstimeofday+0x31/0x85
[<ffffffff80249858>] ? ktime_get_ts+0x17/0x48
[<ffffffff80249933>] ? ktime_get+0xc/0x41
[<ffffffff8024973e>] ? hrtimer_nanosleep+0x75/0xd5
[<ffffffff80249261>] ? hrtimer_wakeup+0x0/0x21
[<ffffffff8020bfbc>] ? int_signal+0x12/0x17
[<ffffffff8030e6b3>] ? dummy_file_free_security+0x0/0x1
Code: a6 08 05 00 00 f6 40 14 01 74 34 4c 89 e7 48 0f ae 07 48 8b 86 08 05 00 00 80 78 02 00 79 02 db e2 90 8d b4 26 00 00 00 00 89 f6 <48> 8b 46 08 83 60 14 fe 0f 20 c0 48 83 c8 08 0f 22 c0 eb 07 c6
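For context, the problematic 2-byte "nop" visible in the Code: dump above (89 f6, i.e. movl %esi,%esi) zero-extends into %rsi when executed in 64-bit mode, whereas the K8-style equivalent is a true no-op; the two encodings look roughly like this:
    /* 32-bit-style 2-byte nop: movl %esi,%esi -- clears the upper half
     * of %rsi in 64-bit mode */
    #define GENERIC_NOP2    ".byte 0x89, 0xf6\n"

    /* K8-style 2-byte nop: osp nop -- harmless in 64-bit mode */
    #define K8_NOP2         ".byte 0x66, 0x90\n"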
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
25-rc* stopped working with CONFIG_X86_VSMP on vSMP machines.
Looks like the vsmp irq ops got accidentally removed during merge of x86_64
pvops in 2.6.25. -- commit 6abcd98ffafbff81f0bfd7ee1d129e634af13245 removed
vsmp irq ops.
Tested with both CONFIG_X86_VSMP and without CONFIG_X86_VSMP, on vSMP and non
vSMP x86_64 machines.
Please apply.
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Took some cycles to re-read the Lguest Journey end-to-end, fix some
rot and tighten some phrases.
Only comments change. No new jokes, but a couple of recycled old jokes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This patch fixes the use of the GPIO routines: the GPIO registers are in
the PCI configuration space of the RDC321x, so reading/writing this space
without spinlock protection can be problematic.
We also now request and free GPIOs and support the MGB100 board; the
previous code was very AR525W-centric.
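A hedged sketch of the locking pattern described (the helper name, register selection and offsets are made up for illustration):
    static DEFINE_SPINLOCK(rdc_gpio_lock);

    static void rdc_gpio_config_write(unsigned int reg, u32 value)
    {
        unsigned long flags;

        /* the 0xcf8/0xcfc config-space pair is shared state: serialize it */
        spin_lock_irqsave(&rdc_gpio_lock, flags);
        outl(0x80000000 | reg, 0xcf8);  /* select the config register */
        outl(value, 0xcfc);             /* write the GPIO data */
        spin_unlock_irqrestore(&rdc_gpio_lock, flags);
    }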
Signed-off-by: Volker Weiss <volker@tintuc.de>
Signed-off-by: Florian Fainelli <florian.fainelli@telecomint.eu>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|