Commit Graph

47 Commits

Michael Ellerman
7db57e7758 powerpc/spe: Mark expected switch fall-throughs
Mark switch cases where we are expecting to fall through.

Fixes errors such as below, seen with mpc85xx_defconfig:

  arch/powerpc/kernel/align.c: In function 'emulate_spe':
  arch/powerpc/kernel/align.c:178:8: error: this statement may fall through
    ret |= __get_user_inatomic(temp.v[3], p++);
        ^~

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190730141917.21817-1-mpe@ellerman.id.au
2019-07-31 00:19:34 +10:00
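
For illustration, the style of annotation this commit adds: a comment
that tells GCC's -Wimplicit-fallthrough that the fall through is
intentional. This is a sketch modelled on the emulate_spe() switch, not
the actual hunk:

  switch (nb) {
  case 8:
          ret |= __get_user_inatomic(temp.v[0], p++);
          ret |= __get_user_inatomic(temp.v[1], p++);
          ret |= __get_user_inatomic(temp.v[2], p++);
          ret |= __get_user_inatomic(temp.v[3], p++);
          /* fall through */
  case 4:
          ret |= __get_user_inatomic(temp.v[4], p++);
          ret |= __get_user_inatomic(temp.v[5], p++);
          /* fall through */
  case 2:
          ret |= __get_user_inatomic(temp.v[6], p++);
          ret |= __get_user_inatomic(temp.v[7], p++);
  }
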
Thomas Gleixner
2874c5fd28 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license as published by
  the free software foundation either version 2 of the license or at
  your option any later version

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 11:26:32 -07:00
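
Concretely, the paragraphs of license boilerplate at the top of each
affected file collapse to a single tag, e.g. (illustrative):

  // SPDX-License-Identifier: GPL-2.0-or-later
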
Linus Torvalds
96d4f267e4 Remove 'type' argument from access_ok() function
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
of the user address range verification function since we got rid of the
old racy i386-only code to walk page tables by hand.

It existed because the original 80386 would not honor the write protect
bit when in kernel mode, so you had to do COW by hand before doing any
user access.  But we haven't supported that in a long time, and these
days the 'type' argument is a purely historical artifact.

A discussion about extending 'user_access_begin()' to do the range
checking resulted in this patch, because there is no way we're going to
move the old VERIFY_xyz interface to that model.  And it's best done at
the end of the merge window when I've done most of my merges, so let's
just get this done once and for all.

This patch was mostly done with a sed-script, with manual fix-ups for
the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.

There were a couple of notable cases:

 - csky still had the old "verify_area()" name as an alias.

 - the iter_iov code had magical hardcoded knowledge of the actual
   values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
   really used it)

 - microblaze used the type argument for a debug printout

but other than those oddities this should be a total no-op patch.

I tried to fix up all architectures, did fairly extensive grepping for
access_ok() uses, and the changes are trivial, but I may have missed
something.  Any missed conversion should be trivially fixable, though.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-03 18:57:57 -08:00
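
The trivial form of the conversion, which the sed script handled, looks
like this (illustrative):

  /* before */
  if (!access_ok(VERIFY_WRITE, uaddr, size))
          return -EFAULT;

  /* after: the type argument is simply dropped */
  if (!access_ok(uaddr, size))
          return -EFAULT;
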
Ravi Bangoria
e6684d07e4 powerpc/sstep: Introduce GETTYPE macro
Replace 'op->type & INSTR_TYPE_MASK' expression with GETTYPE(op->type)
macro.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-06-03 21:19:40 +10:00
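
The macro wraps the existing mask, so the substitution is mechanical
(sketch):

  #define GETTYPE(flags)  ((flags) & INSTR_TYPE_MASK)

  /* before */
  type = op->type & INSTR_TYPE_MASK;
  /* after */
  type = GETTYPE(op->type);
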
Paul Mackerras
158f19698b powerpc: Fix check for copy/paste instructions in alignment handler
Commit 07d2a628bc ("powerpc/64s: Avoid cpabort in context switch
when possible", 2017-06-09) changed the definition of PPC_INST_COPY
and in so doing inadvertently broke the check for copy/paste
instructions in the alignment fault handler. The check currently
matches no instructions.

This fixes it by ANDing both sides of the comparison with the mask.

Fixes: 07d2a628bc ("powerpc/64s: Avoid cpabort in context switch when possible")
Cc: stable@vger.kernel.org # v4.13+
Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-10-25 12:42:35 +02:00
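
Schematically, the shape of the fix (a sketch; OPCODE_MASK stands in
for the actual mask constant):

  /* before: instr is masked but the constant is not, so with the new
   * definition of PPC_INST_COPY this comparison never matches */
  if ((instr & OPCODE_MASK) == PPC_INST_COPY)
          return -EIO;    /* copy/paste: send SIGBUS */

  /* after: AND both sides with the mask */
  if ((instr & OPCODE_MASK) == (PPC_INST_COPY & OPCODE_MASK))
          return -EIO;
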
Paul Mackerras
1bc944cee6 powerpc: Fix handling of alignment interrupt on dcbz instruction
This fixes the emulation of the dcbz instruction in the alignment
interrupt handler.  The error was that we were comparing just the
instruction type field of op.type rather than the whole thing,
and therefore the comparison "type != CACHEOP + DCBZ" was always
true.

Fixes: 31bfdb036f ("powerpc: Use instruction emulation infrastructure to handle alignment faults")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Tested-by: Michal Sojka <sojkam1@fel.cvut.cz>
Tested-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-09-15 08:41:18 +10:00
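
Schematically (a sketch): the masked value keeps only the instruction
class, while the constant encodes both the class and the sub-op, so the
masked comparison could never be equal:

  type = op.type & INSTR_TYPE_MASK;

  /* buggy: the DCBZ bits lie outside INSTR_TYPE_MASK, so this test
   * is always true and dcbz was never emulated */
  if (type != CACHEOP + DCBZ)
          return -EINVAL;

  /* fixed: compare the whole of op.type */
  if (op.type != CACHEOP + DCBZ)
          return -EINVAL;
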
Paul Mackerras
31bfdb036f powerpc: Use instruction emulation infrastructure to handle alignment faults
This replaces almost all of the instruction emulation code in
fix_alignment() with calls to analyse_instr(), emulate_loadstore()
and emulate_dcbz().  The only emulation code left is the SPE
emulation code; analyse_instr() etc. do not handle SPE instructions
at present.

One result of this is that we can now handle alignment faults on
all the new VSX load and store instructions that were added in POWER9.
VSX loads/stores will take alignment faults for unaligned accesses
to cache-inhibited memory.

Another effect is that we no longer rely on the DAR and DSISR values
set by the processor.

With this, we now need to include the instruction emulation code
unconditionally.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-09-01 16:42:43 +10:00
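
After this change the core of fix_alignment() reduces to roughly the
following shape (a heavily simplified sketch; return-value handling,
the dcbz path via emulate_dcbz() and the SPE special case are elided):

  struct instruction_op op;

  /* decode the faulting instruction into an instruction_op,
   * then let the common code perform the unaligned access */
  analyse_instr(&op, regs, instr);
  return emulate_loadstore(regs, &op);
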
Michael Ellerman
f9effe9250 powerpc: Fix DAR reporting when alignment handler faults
Anton noticed that if we fault part way through emulating an unaligned
instruction, we don't update the DAR to reflect that.

The DAR value is eventually reported back to userspace as the address
in the SEGV signal, and if userspace is using that value to demand
fault then it can be confused by us not setting the value correctly.

This patch is ugly as hell, but is intended to be the minimal fix and
backports easily.

Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
2017-08-31 22:06:57 +10:00
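
Conceptually, the fix records the failing user address before bailing
out, so the SEGV eventually delivered to userspace carries the right
DAR (an illustrative sketch, not the actual hunk):

  if (__get_user_inatomic(data, addr)) {
          regs->dar = (unsigned long)addr;  /* report the fault address */
          return -EFAULT;
  }
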
Paul Mackerras
48fe9e9488 powerpc: Don't try to fix up misaligned load-with-reservation instructions
In the past, there was only one load-with-reservation instruction,
lwarx, and if a program attempted a lwarx on a misaligned address, it
would take an alignment interrupt and the kernel handler would emulate
it as though it were lwzx. That was not really correct, but benign,
since it loads the right amount of data, and the lwarx should be paired
with a stwcx. to the same address, which would also cause an alignment
interrupt and result in a SIGBUS being delivered to the process.

We now have 5 different sizes of load-with-reservation instruction. Of
those, lharx and ldarx cause an immediate SIGBUS by luck since their
entries in aligninfo[] overlap instructions which were not fixed up, but
lqarx overlaps with lhz and will be emulated as such. lbarx can never
generate an alignment interrupt since it only operates on 1 byte.

To straighten this out and fix the lqarx case, this adds code to detect
the l[hwdq]arx instructions and return without fixing them up, resulting
in a SIGBUS being delivered to the process.

Cc: stable@vger.kernel.org
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-04-04 23:16:57 +10:00
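
The shape of the added check (a sketch with a hypothetical helper; the
real code tests the opcode fields directly):

  /* refuse to emulate any l[hwdq]arx: an emulated load establishes no
   * reservation, so decline the fixup and let the task get a SIGBUS */
  if (is_larx(instr))             /* hypothetical helper */
          return -EIO;
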
Benjamin Herrenschmidt
e2827fe5c1 powerpc/64: Clean up ppc64_caches using a struct per cache
We have two sets of identical struct members for the I and D sides,
and mostly identical bunches of code to parse the device-tree to
populate them. Instead, make a ppc_cache_info structure with one
copy for I and one for D.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-02-06 19:46:04 +11:00
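
The consolidated layout looks roughly like this (a sketch; the exact
field set is abridged):

  struct ppc_cache_info {
          u32 size;
          u32 line_size;          /* actual cache line size */
          u32 block_size;         /* unit for cache management insns */
          u32 log_block_size;
          u32 blocks_per_page;
  };

  struct ppc64_caches {
          struct ppc_cache_info l1d;      /* one copy for D ... */
          struct ppc_cache_info l1i;      /* ... and one for I */
  };
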
Benjamin Herrenschmidt
bd067f83b0 powerpc/64: Fix naming of cache block vs. cache line
In a number of places we called "cache line size" what is actually
the cache block size, which, in the powerpc architecture, means the
effective size to use with cache management instructions (it can
be different from the actual cache line size).

We fix the naming across the board and properly retrieve both
pieces of information when available in the device-tree.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-02-06 19:46:04 +11:00
Linus Torvalds
7c0f6ba682 Replace <asm/uaccess.h> with <linux/uaccess.h> globally
This was entirely automated, using the script by Al:

  PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
  sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-24 11:46:01 -08:00
Kevin Hao
b92a226e52 powerpc: Move cpu_has_feature() to a separate file
We plan to use a jump label for cpu_has_feature(). In order to implement
this we need to include linux/jump_label.h in asm/cputable.h.

Unfortunately if we do that it leads to an include loop. The root of the
problem seems to be that reg.h needs cputable.h (for CPU_FTRs), and then
cputable.h via jump_label.h eventually pulls in hw_irq.h which needs
reg.h (for MSR_EE).

So move cpu_has_feature() to a separate file on its own.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Rename to cpu_has_feature.h and flesh out change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-08-01 11:15:03 +10:00
Chris Smart
ae26b36f80 powerpc: Send SIGBUS on unaligned copy and paste
Calling the ISA 3.0 instructions copy, copy_first, paste and paste_last
generates an alignment fault when copying or pasting data that is not
128-byte aligned. We catch this and send SIGBUS to the userspace
process that caused it.

We do not emulate these because paste may contain additional metadata
when pasting to a co-processor and paste_last is the synchronisation
point for preceding copy/paste sequences.

Thanks to Michael Neuling <mikey@neuling.org> for his help.

Signed-off-by: Chris Smart <chris@distroguy.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-05 23:49:51 +10:00
Daniel Axtens
a9650e9bc5 powerpc/align: Use #ifdef __BIG_ENDIAN__ #else for REG_BYTE
Sparse complains that it doesn't know what REG_BYTE is:

  arch/powerpc/kernel/align.c:313:29: error: undefined identifier 'REG_BYTE'

REG_BYTE is defined differently based on whether we're compiling for
LE, BE32 or BE64. Sparse apparently doesn't provide __BIG_ENDIAN__ or
__LITTLE_ENDIAN__, which means we get no definition.

Rather than check for __BIG_ENDIAN__ and then separately for
__LITTLE_ENDIAN__, just switch the #ifdef to check for __BIG_ENDIAN__
and then #else we define the little endian version. Technically that's
dicey because PDP_ENDIAN is also a possibility, but we already do it in
a lot of places so one more hardly matters.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-06-16 22:40:19 +10:00
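
I.e. the guard changes from two positive endian tests to one test with
a default branch (sketch; the macro bodies are elided):

  #ifdef __BIG_ENDIAN__
  /* BE64 and BE32 definitions of REG_BYTE */
  #else
  /* LE definition, now also the default, so sparse (which defines
   * neither __BIG_ENDIAN__ nor __LITTLE_ENDIAN__) still sees one */
  #endif
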
Anton Blanchard
dc4fbba11e powerpc: Create disable_kernel_{fp,altivec,vsx,spe}()
The enable_kernel_*() functions leave the relevant MSR bits enabled
until we exit the kernel sometime later. Create disable versions
that wrap the kernel use of FP, Altivec, VSX or SPE.

While we don't want to disable them normally, for performance reasons
(MSR writes are slow), the disable calls will be used by a debug boot
option that does this and catches bad uses in other areas of the
kernel.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-12-01 13:52:25 +11:00
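
The intended usage pattern (illustrative):

  enable_kernel_fp();
  /* ... kernel code that uses the FP registers ... */
  disable_kernel_fp();    /* normally cheap; a debug boot option can
                             make it clear MSR_FP immediately so stray
                             FP use elsewhere faults visibly */
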
Anton Blanchard
6f791bef76 powerpc: Remove double braces in alignment code.
Looks like I introduced this when adding LE support.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-11-10 09:59:32 +11:00
Aneesh Kumar K.V
ddca156ae6 KVM: PPC: BOOK3S: Remove open coded make_dsisr in alignment handler
Use make_dsisr instead of open coding it. This also has
the added benefit of handling alignment interrupts on additional
instructions.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-05-30 14:26:25 +02:00
Anton Blanchard
f83319d710 powerpc: Add lq/stq emulation
Recent CPUs support quad word load and store instructions. Add
support to the alignment handler for them.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-04-09 12:53:28 +10:00
Tom Musta
630c8a5fc9 powerpc: Enable Little Endian Alignment Handler for Float Pair Instructions
This patch enables alignment handling for the load/store floating point
pair instructions (lfdp, lfdpx, stfdp, stfdpx).  The handler routine
is properly coded and only needs to be enabled.

Signed-off-by: Tom Musta <tmusta@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-30 16:01:23 +11:00
Tom Musta
075f6311af powerpc: Fix Handler of Unaligned Load/Store Strings
The alignment handler is incorrect for unaligned string instructions
in little endian mode.  These instructions access data as arrays of
bytes and thus are endian neutral.  However, the routine also handles
the load/store multiple instructions, which are NOT endian neutral.

This patch toggles the byte swapping flag for the string instructions
in little endian builds.  This effectively disables the byte swapping
logic.

Signed-off-by: Tom Musta <tmusta@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-30 16:01:17 +11:00
Benjamin Herrenschmidt
3ad26e5c44 Merge branch 'for-kvm' into next
Topic branch for commits that the KVM tree might want to pull
in separately.

Hand merged a few files due to conflicts with the LE stuff

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 18:23:53 +11:00
Paul Mackerras
de79f7b9f6 powerpc: Put FP/VSX and VR state into structures
This creates new 'thread_fp_state' and 'thread_vr_state' structures
to store FP/VSX state (including FPSCR) and Altivec/VSX state
(including VSCR), and uses them in the thread_struct.  In the
thread_fp_state, the FPRs and VSRs are represented as u64 rather
than double, since we rarely perform floating-point computations
on the values, and this will enable the structures to be used
in KVM code as well.  Similarly FPSCR is now a u64 rather than
a structure of two 32-bit values.

This takes the offsets out of the macros such as SAVE_32FPRS,
REST_32FPRS, etc.  This enables the same macros to be used for normal
and transactional state, enabling us to delete the transactional
versions of the macros.   This also removes the unused do_load_up_fpu
and do_load_up_altivec, which were in fact buggy since they didn't
create large enough stack frames to account for the fact that
load_up_fpu and load_up_altivec are not designed to be called from C
and assume that their caller's stack frame is an interrupt frame.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 17:26:49 +11:00
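
The new containers look roughly like this (a sketch; the alignment
attributes are omitted):

  struct thread_fp_state {
          u64     fpr[32][TS_FPRWIDTH];   /* FPRs/VSRs as u64, not double */
          u64     fpscr;                  /* FPSCR widened to u64 */
  };

  struct thread_vr_state {
          vector128       vr[32];         /* Altivec/VSX state */
          vector128       vscr;
  };
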
Anton Blanchard
52055d07ce powerpc: Handle VSX alignment faults in little endian mode
Things are complicated by the fact that VSX elements are big
endian ordered even in little endian mode. 8 byte loads and
stores also write to the top 8 bytes of the register.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 16:48:38 +11:00
Anton Blanchard
835e206a67 powerpc: Add little endian support to alignment handler
Handle most unaligned load and store faults in little
endian mode. Strings, multiples and VSX are not supported.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 16:48:37 +11:00
Anton Blanchard
a5841a4602 powerpc: Alignment handler shouldn't access VSX registers with TS_FPR
The TS_FPR macro selects the FPR component of a VSX register (the
high doubleword). emulate_vsx is using this macro to get the
address of the associated VSX register. This happens to work on big
endian, but fails on little endian.

Replace it with an explicit array access.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 16:48:36 +11:00
Anton Blanchard
c324496456 powerpc: Remove hard coded FP offsets in alignment handler
The alignment handler assumes big endian ordering when selecting
the low word of a 64bit floating point value. Use the existing
union which works in both little and big endian.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 16:48:35 +11:00
Anton Blanchard
f626190d27 powerpc: Remove open coded byte swap macro in alignment handler
Use swab64/32/16 instead of open coding it.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11 16:48:35 +11:00
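
I.e. (illustrative):

  /* open coded 16-bit swap */
  val = ((val >> 8) & 0xff) | ((val << 8) & 0xff00);
  /* becomes */
  val = swab16(val);
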
Anton Blanchard
5c2e08231b powerpc: Never handle VSX alignment exceptions from kernel
The VSX alignment handler needs to write out the existing VSX
state to memory before operating on it (flush_vsx_to_thread()).
If we take a VSX alignment exception in the kernel bad things
will happen. It looks like we could write the kernel state out
to the user process, or we could handle the kernel exception
using data from the user process (depending if MSR_VSX is set
or not).

Worse still, if the code to read or write the VSX state causes an
alignment exception, we will recurse forever. I ended up with
hundreds of megabytes of kernel stack to look through as a result.

Floating point and SPE code have similar issues but already include
a user check. Add the same check to emulate_vsx().

With this patch any unaligned VSX loads and stores in the kernel
will show up as a clear oops rather than silent corruption of
kernel or userspace VSX state, or worse, corruption of a potentially
unlimited amount of kernel memory.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-27 14:44:26 +10:00
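
The check mirrors the existing ones in the FP and SPE paths (sketch):

  /* at the top of emulate_vsx(): never fix up kernel-mode faults */
  if (!user_mode(regs))
          return 0;
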
Anton Blanchard
230aef7a6a powerpc: Handle unaligned ldbrx/stdbrx
Normally when we haven't implemented an alignment handler for
a load or store instruction the process will be terminated.

The alignment handler uses the DSISR (or a pseudo one) to locate
the right handler. Unfortunately ldbrx and stdbrx overlap lfs and
stfs so we incorrectly think ldbrx is an lfs and stdbrx is an
stfs.

This bug is particularly nasty - instead of terminating the
process we apply an incorrect fixup and continue on.

With more and more overlapping instructions we should stop
creating a pseudo DSISR and index using the instruction directly,
but for now add a special case to catch ldbrx/stdbrx.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-14 11:50:20 +10:00
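
The stop-gap is an explicit test on the instruction image before
consulting the pseudo DSISR (a sketch with hypothetical helpers):

  /* ldbrx/stdbrx alias lfs/stfs in the pseudo DSISR, so catch them
   * by opcode first and emulate the byte-reversed access directly */
  if (is_ldbrx_or_stdbrx(instr))          /* hypothetical helper */
          return emulate_byterev(regs, instr);    /* hypothetical */
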
David Howells
ae3a197e3d Disintegrate asm/system.h for PowerPC
Disintegrate asm/system.h for PowerPC.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
cc: linuxppc-dev@lists.ozlabs.org
2012-03-28 18:30:02 +01:00
Andreas Schwab
05d77ac90c powerpc: Remove fpscr use from [kvm_]cvt_{fd,df}
Neither lfs nor stfs touch the fpscr, so remove the restore/save of it
around them.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-09-02 14:07:32 +10:00
Neil Campbell
bb7f20b1c6 powerpc: Handle VSX alignment faults correctly in little-endian mode
This patch fixes the handling of VSX alignment faults in little-endian
mode (the current code assumes the processor is in big-endian mode).

The patch also makes the handlers clear the top 8 bytes of the register
when handling an 8 byte VSX load.

This is based on 2.6.32.

Signed-off-by: Neil Campbell <neilc@linux.vnet.ibm.com>
Cc: <stable@kernel.org>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-12-18 14:55:43 +11:00
Anton Blanchard
eecff81d1f powerpc: Create PPC_WARN_ALIGNMENT to match PPC_WARN_EMULATED
perf_event wants a separate event for alignment and emulation faults,
so create another emulation event.  This will make it easy to hook in
perf_event at one spot.

We pass in regs which will be required for these events.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2009-10-28 16:13:03 +11:00
Geert Uytterhoeven
80947e7c99 powerpc: Keep track of emulated instructions
If CONFIG_PPC_EMULATED_STATS is enabled, make available counters for the
various classes of emulated instructions under
/sys/kernel/debug/powerpc/emulated_instructions/ (assuming debugfs is mounted on
/sys/kernel/debug).  Optionally (controlled by
/sys/kernel/debug/powerpc/emulated_instructions/do_warn), rate-limited warnings
can be printed to the console when instructions are emulated.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-05-21 15:44:26 +10:00
Michael Neuling
553631e25f powerpc: Fix load/store float double alignment handler
When we introduced VSX, we changed the way FPRs are stored in the
thread_struct.  Unfortunately we missed the load/store float double
alignment handler code when updating how we access FPRs in the
thread_struct.

This fixes that and merges the little and big endian cases.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-23 15:53:05 +11:00
Michael Neuling
545bba1824 powerpc: Add alignment handler for new lfiwzx instruction
lfiwzx is a new floating point load instruction in Power ISA 2.06 that
needs an
alignment handler for Linux.

Turns out to be the world's easiest handler to add.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-23 15:53:04 +11:00
Michael Neuling
26456dcfb8 powerpc/vsx: Fix VSX alignment handler for regs 32-63
Fix the VSX alignment handler for VSX registers 32-63, which are stored
in the VMX part of the thread_struct, not the FPR part.

Signed-off-by: Michael Neuling <mikey@neuling.org>
CC: stable@kernel.org (2.6.27 & .28 please)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-13 16:37:45 +11:00
Michael Neuling
78fbc824ed powerpc: Fix uninitialised variable in VSX alignment code
This fixes an uninitialised variable in the VSX alignment code.  It can
cause warnings from GCC (noticed with gcc-4.1.1).  GCC is actually
correct in this instance, and this bug could cause the alignment
interrupt handler to send a SIGSEGV to the process on a legitimate
access.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:14 +10:00
Michael Neuling
cd6f37be7f powerpc: Add VSX load/store alignment exception handler
VSX loads and stores will take an alignment exception when the address
is not on a 4 byte boundary.

This adds support for these alignment exceptions and will emulate the
requested load or store.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:29:25 +10:00
Michael Neuling
b887ec620a powerpc: remove unused variable in emulate_fp_pair
regs is not used in emulate_fp_pair so remove it.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:47 +10:00
Michael Neuling
9c75a31c35 powerpc: Add macros to access floating point registers in thread_struct.
We are going to change where the floating point registers are stored
in the thread_struct, so in preparation add some macros to access the
floating point registers.  Update all code to use these new macros.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:43 +10:00
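
The idea, schematically (a sketch; the real macros grew an extra
dimension when VSX support landed):

  /* accessor instead of touching thread.fpr[] directly */
  #define TS_FPR(i)  fpr[i]

  /* call sites become independent of the underlying layout */
  current->thread.TS_FPR(reg) = data.dd;
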
Kumar Gala
26caeb2ee1 [POWERPC] Handle alignment faults on SPE load/store instructions
This adds code to handle alignment traps generated by the following
SPE (signal processing engine) load/store instructions, by emulating
the instruction in the kernel (as is done for other instructions that
generate alignment traps):

evldd[x]         Vector Load Double Word into Double Word [Indexed]
evldw[x]         Vector Load Double into Two Words [Indexed]
evldh[x]         Vector Load Double into Four Half Words [Indexed]
evlhhesplat[x]   Vector Load Half Word into Half Words Even and Splat [Indexed]
evlhhousplat[x]  Vector Load Half Word into Half Word Odd Unsigned and Splat [Indexed]
evlhhossplat[x]  Vector Load Half Word into Half Word Odd Signed and Splat [Indexed]
evlwhe[x]        Vector Load Word into Two Half Words Even [Indexed]
evlwhou[x]       Vector Load Word into Two Half Words Odd Unsigned (zero-extended) [Indexed]
evlwhos[x]       Vector Load Word into Two Half Words Odd Signed (with sign extension) [Indexed]
evlwwsplat[x]    Vector Load Word into Word and Splat [Indexed]
evlwhsplat[x]    Vector Load Word into Two Half Words and Splat [Indexed]
evstdd[x]        Vector Store Double of Double [Indexed]
evstdw[x]        Vector Store Double of Two Words [Indexed]
evstdh[x]        Vector Store Double of Four Half Words [Indexed]
evstwhe[x]       Vector Store Word of Two Half Words from Even [Indexed]
evstwho[x]       Vector Store Word of Two Half Words from Odd [Indexed]
evstwwe[x]       Vector Store Word of Word from Even [Indexed]
evstwwo[x]       Vector Store Word of Word from Odd [Indexed]

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-09-14 08:51:48 -05:00
Paul Mackerras
c6d4267ece [POWERPC] Handle alignment faults on new FP load/store instructions
This adds code to handle alignment traps generated by the following
new floating-point load/store instructions, by emulating the
instruction in the kernel (as is done for other instructions that
generate alignment traps):

lfiwax	load floating-point as integer word algebraic indexed
stfiwx	store floating-point as integer word indexed
lfdp	load floating-point double pair
lfdpx	load floating-point double pair indexed
stfdp	store floating-point double pair
stfdpx	store floating-point double pair indexed

All these except stfiwx are new in POWER6.

lfdp/lfdpx/stfdp/stfdpx load and store 16 bytes of memory into an
even/odd FP register pair.  In little-endian mode each 8-byte value is
byte-reversed separately (i.e. not as a 16-byte unit).  lfiwax/stfiwx
load or store the lower 4 bytes of a floating-point register from/to
memory; lfiwax sets the upper 4 bytes of the FP register to the sign
extension of the value loaded.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-17 11:01:55 +10:00
Benjamin Herrenschmidt
e4ee3891db [POWERPC] Alignment exception uses __get/put_user_inatomic
Make the alignment exception handler use the new _inatomic variants
of __get/put_user. This fixes erroneous warnings in the very rare
cases where we manage to have copy_tofrom_user_inatomic() trigger
an alignment exception.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 04:09:38 +10:00
Paul Mackerras
fab5db97e4 [PATCH] powerpc: Implement support for setting little-endian mode via prctl
This adds the PowerPC part of the code to allow processes to change
their endian mode via prctl.

This also extends the alignment exception handler to be able to fix up
alignment exceptions that occur in little-endian mode, both for
"PowerPC" little-endian and true little-endian.

We always enter signal handlers in big-endian mode -- the support for
little-endian mode does not amount to the creation of a little-endian
user/kernel ABI.  If the signal handler returns, the endian mode is
restored to what it was when the signal was delivered.

We have two new kernel CPU feature bits, one for PPC little-endian and
one for true little-endian.  Most of the classic 32-bit processors
support PPC little-endian, and this is reflected in the CPU feature
table.  There are two corresponding feature bits reported to userland
in the AT_HWCAP aux vector entry.

This is based on an earlier patch by Anton Blanchard.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-09 21:24:15 +10:00
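
From userspace the mode switch looks like this (illustrative):

  #include <sys/prctl.h>

  /* ask the kernel to run this process in little-endian mode */
  prctl(PR_SET_ENDIAN, PR_ENDIAN_LITTLE);
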
Benjamin Herrenschmidt
5daf9071b5 [PATCH] powerpc: merge align.c
This patch merges align.c; the result isn't quite what was in ppc64 nor
what was in ppc32 :) It should implement all the functionality of both,
though. Kumar, since you played with that in the past, I suppose you
have some test cases for verifying that it works properly before I dig
out the 601 machine? :)

Since it's likely that I won't be able to test every scenario, code
inspection is most welcome.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-11-18 14:39:23 +11:00