The previous hack for calculating the return address for the first frame
we unwind (dwarf_unwinder_dump) didn't always work. The problem was that
it assumed once it read the rule for calculating the return address,
there would be no new rules for calculating it. This isn't true because
the way in which the CFA is calculated can change as you progress
through a function and the return address is figured out using the
CFA. Therefore, the way to calculate the return address can change.
So, instead of using some offset from the beginning of
dwarf_unwind_stack(), which is just a flaky approach, and instead of
executing instructions from the FDE until the return address is set up,
we now figure out the pc in dwarf_unwind_stack() just before we call
dwarf_cfa_execute_insns().
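A minimal sketch of the new flow (only dwarf_unwind_stack() and
dwarf_cfa_execute_insns() come from this patch; the argument names and
the prev frame layout are illustrative, not the actual arch/sh code):

    /*
     * Illustrative only: pick the pc to unwind from right before
     * running the CFA program, instead of assuming a fixed offset
     * into dwarf_unwind_stack().
     */
    if (prev)
            pc = prev->return_addr; /* unwind from the caller's return address */
    else
            pc = _THIS_IP_;         /* first frame: the current instruction pointer */

    dwarf_cfa_execute_insns(insn_start, insn_end, cie, fde, frame, pc);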
Signed-off-by: Matt Fleming <matt@console-pimps.org>
commit 111b9dc5 ("e1000e: add aer support") introduced PCIe AER
support for e1000e, but it is not reasonable to disable it in
e1000_remove() while only enabling it in e1000_resume(). This patch
enables AER support in e1000_probe() instead.
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With manageability (Intel AMT) enabled via the BIOS, PHY wakeup does not
get configured on newer parts which use PHY wakeup rather than MAC wakeup,
causing WoL to not work. The driver should configure PHY wakeup whether or not
manageability is enabled.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The slow path ulp_init and ulp_exit calls to the bnx2i driver
are sleepable calls and therefore should not be protected using
rcu_read_lock(). Fix this by using a mutex and a refcount during these
calls. cnic_unregister_driver() will now wait for the refcount
to go to zero before completing the call.
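A simplified sketch of the pattern (the names and layout here are
illustrative, not the actual cnic.c code):

    static DEFINE_MUTEX(cnic_lock);
    static atomic_t ulp_ref = ATOMIC_INIT(0);

    static void cnic_ulp_init(struct cnic_dev *dev, struct cnic_ulp_ops *ops)
    {
            mutex_lock(&cnic_lock);
            atomic_inc(&ulp_ref);           /* pin the ops across the sleepable callback */
            mutex_unlock(&cnic_lock);

            ops->cnic_init(dev);            /* may sleep, so no rcu_read_lock() here */

            atomic_dec(&ulp_ref);
    }

    int cnic_unregister_driver(int ulp_type)
    {
            /* ... unhook the ops under cnic_lock ... */

            /* wait for any in-flight ulp_init/ulp_exit callback to finish */
            while (atomic_read(&ulp_ref) != 0)
                    msleep(100);

            return 0;
    }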
Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The slow path ulp_start and ulp_stop calls to the bnx2i driver
are sleepable calls and therefore should not be protected using
rcu_read_lock(). Fix this by using a mutex and setting a bit during
these calls. cnic_unregister_device() will now wait for the bit
to clear before completing the call.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The slow path calls to the cnic driver are sleepable calls so we
cannot use rcu_read_lock(). Use a mutex for these slow path calls
instead.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Register and unregister with bnx2 during NETDEV_UP and NETDEV_DOWN
events. This simplifies the sequence of events and allows locking
fixes in the next patch.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the cnic driver tries to grab a symbol from bnx2 while bnx2 is
running init code, symbol_get() will succeed but symbol_put_addr()
will hit BUG() a moment later. module_text_address() fails because
bnx2 is still in init code.
This is fixed by using symbol_put() instead, which does the exact
opposite of symbol_get().
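Roughly the shape of the fix on the cnic side (a simplified fragment;
error handling omitted, netdev is illustrative):

    struct cnic_eth_dev *(*probe)(struct net_device *);
    struct cnic_eth_dev *ethdev = NULL;

    probe = symbol_get(bnx2_cnic_probe);
    if (probe) {
            ethdev = probe(netdev);
            /*
             * symbol_put() drops the reference by symbol name, which works
             * even while bnx2 is still running its init code.
             * symbol_put_addr() would try to resolve the address back to a
             * module via module_text_address(), which fails at this point
             * and hits BUG().
             */
            symbol_put(bnx2_cnic_probe);
    }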
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The triggered field of struct poll_wqueues was introduced in commit
5f820f648c ("poll: allow f_op->poll to
sleep").
It was first set to 1 in pollwake() (now __pollwake()), then tested and
set back to 0 in poll_schedule_timeout(), but it was never initialized.
As a result, when the process needed to sleep, triggered was likely to be
non-zero even if pollwake() had not been called before the first
poll_schedule_timeout(), meaning schedule_hrtimeout_range() would not be
called and an extra loop calling all the ->poll() handlers would be done.
This patch initializes triggered to 0 in poll_initwait() so the ->poll()
handlers are not called twice before the process goes to sleep when it
needs to.
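The fix amounts to a single extra assignment in poll_initwait(); a sketch
of roughly how the helper ends up (field names as in struct poll_wqueues):

    void poll_initwait(struct poll_wqueues *pwq)
    {
            init_poll_funcptr(&pwq->pt, __pollwait);
            pwq->polling_task = current;
            pwq->triggered = 0;     /* the previously missing initialization */
            pwq->error = 0;
            pwq->table = NULL;
            pwq->inline_index = 0;
    }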
Signed-off-by: Guillaume Knispel <gknispel@proformatique.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We really don't want to be doing all these indirect calls; updating
the GPU GART table is something we do often, so the less overhead the
better.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Fixes 3D apps timing out in the WAIT_VBLANK ioctl.
AVIVO bits compile-tested only.
Signed-off-by: Michel Dänzer <daenzer@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These will be handled through the shared cache interface instead, and
they are presently undefined anyways.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The caches-enabled case needs more work, but it is presently broken
regardless, so this can be done incrementally.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The u300_init_check_chip() function was not properly tagged with
the __init macro and produced an init section mismatch warning at
compile time.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, highmem is selectable, and you can request an increased
vmalloc area. However, none of this has any effect on the memory
layout since a patch in the highmem series was accidentally dropped.
Moreover, even if you did want highmem, all memory would still be
registered as lowmem, possibly resulting in overflow of the available
virtual mapping space.
The highmem boundary is determined by the highest allowed beginning
of the vmalloc area, which depends on its configurable minimum size
(see commit 60296c71f6 for details on
this).
We should create mappings and initialize bootmem only for low memory,
while the zone allocator must still be told about highmem.
Currently, memory nodes which are completely located in high memory
are not supported. This is not a huge limitation since systems
relying on highmem support are unlikely to have discontiguous memory
with large holes.
[ A similar patch was meant to be merged before commit 5f0fbf9eca
and be available in Linux v2.6.30, however some git rebase screw-up
of mine dropped the first commit of the series, and that goofage
escaped testing somehow as well. -- Nico ]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
We were using 'fd' locally, but there was a global 'fd' too, so
when converting from open() to fopen() the test made against 'fd'
should have been made against 'fp'; because of that global the
mistake didn't get discovered ...
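A hypothetical illustration of the bug class (not the actual perf code);
with a global 'fd' in scope the stale check still compiles, so nothing
flags it:

    #include <stdio.h>

    int fd;                 /* unrelated global that keeps the old test compiling */

    static int parse_file(const char *path)
    {
            FILE *fp = fopen(path, "r");

            /*
             * The buggy version kept the descriptor-style test from the
             * open() days, e.g. "if (fd < 0) return -1;".  After the switch
             * to fopen() the check has to be on fp:
             */
            if (fp == NULL)
                    return -1;

            fclose(fp);
            return 0;
    }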
Reported-by: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20090814182632.GF3490@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The SGI UV Broadcast Assist Unit is used to send TLB shootdown
messages to remote nodes of the system. The header of the
message must contain the subnode id of the block in the
receiving hub that handles such messages. It should always be
0x10, the id of the "LB" block.
It had previously been documented as a "must be zero" field.
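A sketch of the descriptor setup this implies (the dest_subnodeid field
name is assumed from <asm/uv/uv_bau.h>, and both 'bd' and the
UV_LB_SUBNODEID constant are hypothetical names used here for clarity):

    #define UV_LB_SUBNODEID 0x10    /* subnode id of the "LB" block */

            /* previously left at 0 per the old "must be zero" documentation */
            bd->header.dest_subnodeid = UV_LB_SUBNODEID;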
Signed-off-by: Cliff Wickman <cpw@sgi.com>
Acked-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <E1Mc1x7-0005Ce-6t@eag09.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This is superfluous, as the default CPU type and family are already
established by the initial cpuinfo definition. Given that we are still
able to probe for the CPU family even if we are not able to detect the
subtype, it's preferable to let the probing code fill out what it can and
leave the rest.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch simply adds LCDC hwblk_id data for the kfr2r09 board.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the SuperH Mobile sleep assembly code with
support for the DBSC memory controller found in the sh7724 processor.
Without this fix the memory hooked up to the sh7724 processor
will never enter self-refresh mode before suspending to RAM. The
effect of this is that the memory contents will most likely be
lost upon resume, which may or may not be what you want.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the Solution Engine 7724 board code to use
in-SoC KEYSC resources for the keyboard platform device. Using
the in-SoC key scan controller fixes a crash-during-resume issue.
Without this patch the KEYSC hardware block located in the board
specific FPGA is used together with an external IRQ which is
routed through the FPGA and handled by some board specific demux
code. This board specific FPGA interrupt code does not implement
desc->set_wake(), so the enable_irq_wake() call in the sh_keysc
driver will fail at suspend-to-RAM time and the disable_irq_wake()
call will bomb out when resuming.
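A hedged illustration of the driver-side sequence that trips over this
(simplified, not the actual sh_keysc code; keysc_irq stands for the
KEYSC interrupt):

    static unsigned int keysc_irq; /* the KEYSC interrupt, set up at probe time */

    static int keysc_suspend(struct device *dev)
    {
            /* fails (e.g. -ENXIO) when the irq_chip behind the IRQ has no set_wake */
            return enable_irq_wake(keysc_irq);
    }

    static int keysc_resume(struct device *dev)
    {
            /* warns about an unbalanced wake disable, since the enable never stuck */
            disable_irq_wake(keysc_irq);
            return 0;
    }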
Changing the platform data to use the in-SoC KEYSC hardware makes
the se7724 board support code less special, which is a good thing.
Also, the board specific KEYSC pin setup code already selects in-SoC
pin functions, which makes the current FPGA platform device data
look like a typo.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the SuperH CMT driver with suspend and resume
callbacks for the suspend-to-ram case. This patch stops the CMT
channel at suspend time to avoid unwanted wake up events.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the SuperH Mobile LCDC driver to skip
over disabled channels. Without this patch suspend-to-RAM
operation will crash if deferred I/O is enabled.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
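The rough shape of the scheme (hook names and the void *args convention
are illustrative of the approach, not an exact copy of the arch/sh code):

    static void cache_noop(void *args)
    {
    }

    void (*local_flush_cache_all)(void *args) = cache_noop;
    void (*local_flush_dcache_page)(void *args) = cache_noop;

    /* a CPU family that cares simply overrides the hooks at init time */
    void __init sh4_cache_init(void)
    {
            local_flush_cache_all = sh4_flush_cache_all;
            local_flush_dcache_page = sh4_flush_dcache_page;
    }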
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All of the flush_dcache_mmap_lock()/flush_dcache_mmap_unlock()
definitions are identical across all CPUs, so just provide them
generically in asm/cacheflush.h.
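Roughly what the shared definitions amount to (there is no per-mapping
dcache locking on sh, so these are no-ops):

    #define flush_dcache_mmap_lock(mapping)         do { } while (0)
    #define flush_dcache_mmap_unlock(mapping)       do { } while (0)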
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This syncs the versioning check with the code the X server uses.
Reported-by: Anssi Hannula <anssi.hannula@iki.fi>
Signed-off-by: Dave Airlie <airlied@redhat.com>
flush_dcache_all() is used internally by the SH-4 cache code; it is not
part of the exported cache API, so make it static and don't export it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
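For reference, one way such an alias computation can look (a sketch over
a per-cache info struct; field names are illustrative): the alias mask
covers the cache index bits above the page offset, and the number of
aliases is the number of page colours in one cache way.

    static void compute_alias(struct cache_info *c)
    {
            /* index bits above the page offset cause virtual aliasing */
            c->alias_mask = ((c->sets - 1) << c->entry_shift) & ~(PAGE_SIZE - 1);
            c->n_aliases = c->alias_mask ? (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
    }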
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
rs690 is an r3xx 3D engine with AVIVO modesetting, so we need to allow
AVIVO registers for vline synchronization. This adds a specific table
for rs690 to handle that. Thanks to Marc (marvin24) for debugging
this and kudos to Andre (taiu1) for spotting the origin of the bugs.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a family member to struct sh_cpuinfo, which allows us to fall
back more on the probe routines to work out what sort of subtype we are
running on. This will be used by the CPU cache initialization code in
order to first do family-level initialization, followed by subtype-level
optimizations.
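A sketch of the addition (the exact enumerators and struct layout in
<asm/processor.h> may differ):

    enum cpu_family {
            CPU_FAMILY_SH2,
            CPU_FAMILY_SH3,
            CPU_FAMILY_SH4,
            CPU_FAMILY_SH5,
            CPU_FAMILY_UNKNOWN,
    };

    struct sh_cpuinfo {
            enum cpu_family family; /* new: coarse family, set by the probe code */
            unsigned int type;      /* existing fine-grained subtype */
            /* ... */
    };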
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c, no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>