Commit 916f676f8d started reserving boot service code since some systems
require you to keep that code around until SetVirtualAddressMap is called.
However, in some cases those areas will overlap with reserved regions.
The proper medium-term fix is to change the bootloader so that it prevents
the conflicts from occurring, by moving the kernel to a better position,
but in the meantime the kernel should check for this possibility and only
reserve regions which can actually be reserved.
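A sketch of what such a check can look like, assuming the memblock helpers of
that era; the loop over the EFI memory map and the exact reservation call are
illustrative and may differ from the real efi_reserve_boot_services():

/*
 * Hedged sketch: only reserve a boot services region if it does not
 * already overlap something reserved (e.g. the kernel image placed
 * there by the bootloader).
 */
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
	efi_memory_desc_t *md = p;
	u64 start = md->phys_addr;
	u64 size = md->num_pages << EFI_PAGE_SHIFT;

	if (md->type != EFI_BOOT_SERVICES_CODE &&
	    md->type != EFI_BOOT_SERVICES_DATA)
		continue;

	if (memblock_is_region_reserved(start, size))
		continue;	/* already overlaps a reservation: skip */

	memblock_x86_reserve_range(start, start + size, "EFI Boot");
}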
Signed-off-by: Maarten Lankhorst <m.b.lankhorst@gmail.com>
Link: http://lkml.kernel.org/r/4DF7A005.1050407@gmail.com
Acked-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Knut Tidemann found that the first packet of a multicast flow was not
correctly received, and bisected the regression to commit b23dd4fe42
(Make output route lookup return rtable directly.)
Special thanks to Knut, who provided a very nice bug report, including
sample programs to demonstrate the bug.
Reported-and-bisected-by: Knut Tidemann <knut.andre.tidemann@jotron.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add power_level module parameter to set the default power save level.
The power save level ranges from 1 to 5; the default power save level is 1.
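Roughly how such a parameter is declared; the variable name and description
string are illustrative rather than the exact iwlwifi ones:

/* Hedged sketch: default power save level is 1, valid range 1 - 5. */
static int power_level = 1;
module_param(power_level, int, S_IRUGO);
MODULE_PARM_DESC(power_level, "default power save level (range 1 - 5, default: 1)");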
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Add a power_save module parameter to enable power management if needed.
Power management is disabled by default.
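A similar hedged sketch for the on/off switch; again the names are
illustrative:

/* Hedged sketch: power management stays off unless explicitly enabled. */
static bool power_save;
module_param(power_save, bool, S_IRUGO);
MODULE_PARM_DESC(power_save, "enable WiFi power management (default: disable)");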
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Since the irq number is just an unsigned int, store it inside iwl_bus
instead of calling the get_irq ops every time it is needed.
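A sketch of the caching, with illustrative field and function names:

/*
 * Hedged sketch: fetch the IRQ number through the ops table exactly
 * once, at bus setup time, and use the cached copy afterwards.
 */
struct iwl_bus {
	struct device *dev;
	const struct iwl_bus_ops *ops;
	unsigned int irq;		/* cached here */
};

static void iwl_bus_setup_irq(struct iwl_bus *bus)
{
	bus->irq = bus->ops->get_irq(bus);	/* the only vtable call */
}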
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Since there is no protection around the SYNC host command mechanism, at least
WARN when a collision happens between two SYNC host commands. I am not sure
there is a real issue (beyond the HCMD_ACTIVE flag maintenance) with having
two SYNC host commands at the same time, but at least now we will know about it.
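A minimal sketch of the warning, assuming the usual iwlwifi command flags;
the status bit name is illustrative:

/*
 * Hedged sketch: shout if a second synchronous command enters while one
 * is already outstanding. The bit must be cleared again when the
 * command completes.
 */
if (!(cmd->flags & CMD_ASYNC))		/* i.e. a SYNC command */
	WARN(test_and_set_bit(STATUS_HCMD_SYNC_ACTIVE, &priv->status),
	     "two SYNC host commands in flight at the same time\n");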
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
In order to remove a few more dereferences of priv->pdev, which will be killed
soon, there is now a method to get the IRQ number.
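A sketch of the PCI flavour of that method; names are illustrative:

/* Hedged sketch: the generic code asks the bus, the bus asks the pci_dev. */
static unsigned int iwl_pci_get_irq(const struct iwl_bus *bus)
{
	struct pci_dev *pdev = to_pci_dev(bus->dev);

	return pdev->irq;
}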
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Continue to populate the PCI layer and the iwl_bus_ops with the power-related
code.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Get a pointer to the struct device during probe and get rid of all the
PCI-specific DMA wrappers.
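Roughly what the conversion looks like, assuming the device pointer saved at
probe time is reachable as priv->bus.dev (an illustrative name):

/* before: PCI-specific wrapper, tied to priv->pdev */
ptr = pci_alloc_consistent(priv->pdev, size, &dma_addr);

/* after: generic DMA API on the struct device saved at probe time */
ptr = dma_alloc_coherent(priv->bus.dev, size, &dma_addr, GFP_KERNEL);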
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Since we now have a PCI layer, all the PCI-related init and deinit code should
move there.
Also move the IO functions: read8/read32/write32. They need hw_base, which
is removed from priv.
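A sketch of the moved accessors, assuming hw_base now lives in a PCI-specific
bus structure; the names are illustrative:

static u32 iwl_pci_read32(struct iwl_bus *bus, u32 ofs)
{
	struct iwl_pci_bus *pci_bus = bus->bus_specific;	/* illustrative */

	return ioread32(pci_bus->hw_base + ofs);
}

static void iwl_pci_write32(struct iwl_bus *bus, u32 ofs, u32 val)
{
	struct iwl_pci_bus *pci_bus = bus->bus_specific;	/* illustrative */

	iowrite32(val, pci_bus->hw_base + ofs);
}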
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
The bus-specific layer must know how to return the struct device * of the device.
Implement that as a callback of iwl_bus_ops and use that callback instead of
the priv->pdev pointer, which is meant to disappear soon.
Since the struct device * is needed in the hot path, iwl_bus holds a pointer to
it instead of calling get_dev all the time.
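A sketch of the callback and the cached pointer, with illustrative names for
the PCI-specific part of the bus:

/* Hedged sketch: the PCI implementation of get_dev... */
static struct device *iwl_pci_get_dev(const struct iwl_bus *bus)
{
	struct iwl_pci_bus *pci_bus = bus->bus_specific;	/* illustrative */

	return &pci_bus->pci_dev->dev;
}

/* ...and the one-time caching at bus registration. */
bus->dev = bus->ops->get_dev(bus);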
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
iwl_bus will represent a bus, and iwl_bus_ops all the operations that can be
done on this bus.
For the moment only set_prv_data is implemented. More to come...
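A minimal sketch of the two structures at this point; apart from set_prv_data,
the fields are illustrative placeholders:

struct iwl_bus;

struct iwl_bus_ops {
	void (*set_prv_data)(struct iwl_bus *bus, void *prv_data);
};

struct iwl_bus {
	void *prv_data;			/* the driver's priv, set via the op */
	const struct iwl_bus_ops *ops;
	void *bus_specific;		/* e.g. the PCI part of the bus */
};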
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Move some PCI functionality to the new iwl_pci.[ch] files:
* the PCI_DEVICE_TABLE
* the pci_driver struct definition
* the PCI probe / remove functions
* the PCI suspend / resume functions
All these functions are now split: the trigger comes from the PCI layer, which
calls into the bus-generic code located in the other files.
This is only the beginning; there is still a lot of PCI-related code that needs
to be gathered here.
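A hedged skeleton of what now lives in the PCI file; handler names are
illustrative, and the real probe/remove/suspend/resume bodies just call into
the bus-generic code:

static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
	/* the moved PCI_DEVICE_TABLE entries */
	{0}
};
MODULE_DEVICE_TABLE(pci, iwl_hw_card_ids);

static struct pci_driver iwl_pci_driver = {
	.name		= DRV_NAME,
	.id_table	= iwl_hw_card_ids,
	.probe		= iwl_pci_probe,		/* trigger from the PCI core */
	.remove		= __devexit_p(iwl_pci_remove),
	.driver.pm	= IWL_PM_OPS,			/* suspend/resume glue */
};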
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
It is superfluous to disable the interrupts after we reset the NIC. The only
entity that could enable the interrupts after the NIC is reset is the driver.
So remove this pointless action.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
According to the data sheets for G4, AP4 and AG5, KEYSC MODE_6 is 8x8 keys.
Bump up MAXKEYS to 64 too.
Signed-off-by: Magnus Damm <damm@opensource.se>
Reviewed-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Otherwise the updated evdev driver (commit cdda911c34,
"Input: evdev - only signal polls on full packets") no longer works on
top of omap-keypad.
Tested on Amstrad Delta.
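A minimal sketch of the kind of change implied here, assuming the scan routine
reports keys via input_report_key(); the exact loop in omap-keypad differs:

/* Hedged sketch: close each batch of key events with a full packet. */
input_report_key(input_dev, keycode, pressed);
input_sync(input_dev);		/* EV_SYN/SYN_REPORT, what evdev now waits for */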
Signed-off-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Reviewed-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
We should only wake waiters on the event device when we actually post
an EV_SYN/SYN_REPORT to the queue. Otherwise we end up making waiting
threads runnable only to go right back to sleep because the device
still isn't readable.
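A sketch of the condition, with the surrounding evdev_event() plumbing
simplified and variable names illustrative:

static void evdev_event(struct input_handle *handle,
			unsigned int type, unsigned int code, int value)
{
	struct evdev *evdev = handle->private;

	/* ... queue the event to each connected client ... */

	/* Only make poll()/read() waiters runnable on a complete packet. */
	if (type == EV_SYN && code == SYN_REPORT)
		wake_up_interruptible(&evdev->wait);
}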
Reported-by: Jeffrey Brown <jeffbrown@android.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Firmware checking is only performed when the firmware is loaded
instead of each time the driver inits the phy.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
The new firmware format adds versioning, as the firmware for a specific
chipset appears to be subject to change. The current "legacy" format is
still supported.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Call sysfs_attr_init() from atk_init_attribute() to handle sysfs attribute
initialization in a single function.
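A sketch of what the single-function setup can look like; the mode and the
callback wiring are illustrative, not necessarily what asus_atk0110 sets:

static void atk_init_attribute(struct device_attribute *attr, char *name,
			       ssize_t (*show)(struct device *,
					       struct device_attribute *,
					       char *))
{
	sysfs_attr_init(&attr->attr);	/* lockdep key for dynamic attrs */
	attr->attr.name = name;
	attr->attr.mode = 0444;
	attr->show = show;
	attr->store = NULL;
}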
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Cc: Luca Tettamanti <kronos.it@gmail.com>
Acked-by: Jean Delvare <khali@linux-fr.org>
The DDX modifies DMA_SEMAPHORE on nv50 in order to implement sync-to-vblank;
things will go very wrong for cross-channel sync after this.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
While parsing the perf table, there is no check that the number of entries
read from the vbios is within the currently allocated number.
In case of a buggy vbios this will cause overwriting of kernel memory,
causing additional problems.
Add a simple check in order to prevent this case.
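A minimal sketch of the guard, assuming the table is parsed into a fixed-size
perflvl array; identifiers are illustrative rather than the exact nouveau_pm
names:

/* Hedged sketch: clamp the vbios-reported count to the allocated size. */
if (entries > ARRAY_SIZE(pm->perflvl)) {
	NV_WARN(dev, "perf table has %d entries, expected at most %zu\n",
		entries, ARRAY_SIZE(pm->perflvl));
	entries = ARRAY_SIZE(pm->perflvl);
}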
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreq:
[CPUFREQ] powernow-k8: Don't try to transition if the pstate is incorrect
[CPUFREQ] powernow-k8: Don't notify of successful transition if we failed (vid case).
[CPUFREQ] Don't set stat->last_index to -1 if the pol->cur has incorrect value.
Hugh Dickins points out that lockdep (correctly) spots a potential
deadlock on the anon_vma lock, because we now do a GFP_KERNEL allocation
of anon_vma_chain while doing anon_vma_clone(). The problem is that
page reclaim will want to take the anon_vma lock of any anonymous pages
that it will try to reclaim.
So re-organize the code in anon_vma_clone() slightly: first do just a
GFP_NOWAIT allocation, which will usually work fine. But if that fails,
let's just drop the lock and re-do the allocation, now with GFP_KERNEL.
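A sketch of the allocation pattern just described; the helper names follow the
surrounding anon_vma code, but the details of the real anon_vma_clone() may
differ:

/* Hedged sketch: opportunistic atomic allocation, GFP_KERNEL fallback. */
avc = anon_vma_chain_alloc(GFP_NOWAIT | __GFP_NOWARN);
if (unlikely(!avc)) {
	unlock_anon_vma_root(root);	/* drop the anon_vma lock... */
	root = NULL;
	avc = anon_vma_chain_alloc(GFP_KERNEL);	/* ...and block if needed */
	if (!avc)
		goto enomem_failure;
}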
End result: not only do we avoid the locking problem, this also ends up
getting better concurrency in case the allocation does need to block.
Tim Chen reports that with all these anon_vma locking tweaks, we're now
almost back up to the spinlock performance.
Reported-and-tested-by: Hugh Dickins <hughd@google.com>
Tested-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This matches the anon_vma_clone() case, and uses the same lock helper
functions. Because of the need to potentially release the anon_vma's,
it's a bit more complex, though.
We traverse the 'vma->anon_vma_chain' in two phases: the first loop gets
the anon_vma lock (with the helper function that only takes the lock
once for the whole loop), and removes any entries that don't need any
more processing.
The second phase just traverses the remaining list entries (without
holding the anon_vma lock), and does any actual freeing of the
anon_vma's that is required.
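A sketch of the two-phase structure, assuming the lock_anon_vma_root() and
unlock_anon_vma_root() helpers from the clone path; the real unlink_anon_vmas()
carries more detail:

/* Phase 1: unlink everything under the (once-taken) root lock. */
list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
	struct anon_vma *anon_vma = avc->anon_vma;

	root = lock_anon_vma_root(root, anon_vma);
	list_del(&avc->same_anon_vma);

	/* Anon_vmas that became empty must be freed outside the lock. */
	if (list_empty(&anon_vma->head))
		continue;

	list_del(&avc->same_vma);
	anon_vma_chain_free(avc);
}
unlock_anon_vma_root(root);

/* Phase 2: free the remaining entries without holding the lock. */
list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
	struct anon_vma *anon_vma = avc->anon_vma;

	put_anon_vma(anon_vma);
	list_del(&avc->same_vma);
	anon_vma_chain_free(avc);
}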
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Hugh Dickins <hughd@google.com>
Tested-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In anon_vma_clone() we traverse the vma->anon_vma_chain of the source
vma, locking the anon_vma for each entry.
But they are all going to have the same root entry, which means that
we're locking and unlocking the same lock over and over again. Which is
expensive in locked operations, but can get _really_ expensive when that
root entry sees any kind of lock contention.
In fact, Tim Chen reports a big performance regression due to this: when
we switched to use a mutex instead of a spinlock, the contention case
gets much worse.
So to alleviate this all, this commit creates a small helper function
(lock_anon_vma_root()) that can be used to take the lock just once
rather than taking and releasing it over and over again.
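A sketch of what such a helper can look like, assuming the root anon_vma is
protected by the mutex mentioned above; the exact kernel helper may differ:

static inline struct anon_vma *lock_anon_vma_root(struct anon_vma *root,
						  struct anon_vma *anon_vma)
{
	struct anon_vma *new_root = anon_vma->root;

	if (new_root != root) {
		if (WARN_ON_ONCE(root))
			mutex_unlock(&root->mutex);
		root = new_root;
		mutex_lock(&root->mutex);
	}
	return root;
}

static inline void unlock_anon_vma_root(struct anon_vma *root)
{
	if (root)
		mutex_unlock(&root->mutex);
}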
We still have the same "take the lock and release it" behavior in the
exit path (in unlink_anon_vmas()), but that one is a bit harder to fix
since we're actually freeing the anon_vma entries as we go, and that
will touch the lock too.
Reported-and-tested-by: Tim Chen <tim.c.chen@linux.intel.com>
Tested-by: Hugh Dickins <hughd@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>