Each time I exit Firefox, /proc/meminfo's Committed_AS goes down almost
400 kB: OVERCOMMIT_NEVER would be allowing overcommits it should
prohibit.
Commit fc8744adc8 "Stop playing silly
games with the VM_ACCOUNT flag" changed shmem_file_setup() to set the
shmem file's VM_ACCOUNT flag according to whether VM_NORESERVE is clear
in the vma flags; but it did so only _after_ the shmem_acct_size(flags, size)
call, which is expected to pre-account a shared anonymous object.
It's all clearer if we switch shmem.c over to use VM_NORESERVE
throughout in place of !VM_ACCOUNT.
But I very nearly sent in a patch which mistakenly removed the
accounting from tmpfs files: shmem_get_inode()'s memset was good for not
setting VM_ACCOUNT, but now it needs to set VM_NORESERVE.
Rather than setting that by default, then perhaps clearing it again in
shmem_file_setup(), let's pass it as a flag to shmem_get_inode(): that
allows us to remove the #ifdef CONFIG_SHMEM from shmem_file_setup().
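The accounting helpers end up keyed on VM_NORESERVE rather than !VM_ACCOUNT;
a rough sketch of the intended shape (not the exact patch, assuming shmem.c's
existing VM_ACCT() helper and security_vm_enough_memory_kern()):

static inline int shmem_acct_size(unsigned long flags, loff_t size)
{
	/* a VM_NORESERVE object is not pre-accounted */
	return (flags & VM_NORESERVE) ?
		0 : security_vm_enough_memory_kern(VM_ACCT(size));
}

with shmem_file_setup() passing VM_NORESERVE (or 0) straight through to
shmem_get_inode(), so the decision is made in one place.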
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
vi arch/ia64/kernel/iosapic.c +142
static struct iosapic_intr_info {
...
} iosapic_intr_info[NR_IRQS];
But at line 510 we have:
for (i = 0; i <= NR_IRQS; i++) {
which indexes one element past the end of iosapic_intr_info[], so:
s/<=/</
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
static struct {
... :114
unsigned short hash[UNW_HASH_SIZE];
... :2152
for (index = 0; index <= UNW_HASH_SIZE; ++index) {
This runs one element past the end of hash[]. This is a bug, isn't it?
s/<=/</
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
The previous commit which introduced the DMAR_DEFAULT_ON setting in
drivers/pci/dmar.c neglected to add the ability for ia64 to enable
the IOMMU by default. Rectify that mistake, doh!
Signed-off-by: Kyle McMartin <kyle@redhat.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
During host driver module removal, del_gendisk() results in a final
put on drive->gendev and the drive being freed by drive_release_dev().
Convert device drivers from using struct kref to using struct device,
so that the device driver's object holds a reference on ->gendev and
prevents the drive from going away prematurely.
Also fix ->remove methods to not erroneously drop a reference on a
host driver, by using only put_device() instead of ide*_put().
Reported-by: Stanislaw Gruszka <stf_xl@wp.pl>
Tested-by: Stanislaw Gruszka <stf_xl@wp.pl>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Just copy the comment from drivers/scsi/sr.c::sr_done()
(from which the capacity hack originated).
Cc: Borislav Petkov <petkovbb@gmail.com>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Fix missing parentheses so the PIO/DMA timings for the master device on the
second channel are programmed correctly (IOW the "8 0 24 16" offset values
should be used instead of the current "8 0 16 16").
[ The bug went unnoticed because, after the PIO/DMA timings are programmed
incorrectly for the third device, they are overwritten with the timings for
the fourth device; and since the BIOS should also program timings for the
third device, everything should work fine until a suspend/resume cycle or a
user-requested transfer mode change. ]
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
[bart: update patch description]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Documentation/kernel-parameters.txt
- ide=nodma is no longer valid.
drivers/ide/Kconfig
- The module is ide-core.ko, not ide.
drivers/ide/ide.c
- It took me a while to figure out what the %d.%d:%d arguments to the nodma
module parameter meant, so I added a comment to each.
- Added a comment to each of the sscanf lines.
- There is a bug: if j is 0 it would previously clear all the other bits
except the current device's; mask &= (1 << i) should be mask &= ~(1 << i).
Changed in three different places (see the sketch below).
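A minimal standalone illustration of the bit-clearing bug (hypothetical
values, not the driver code):

#include <stdio.h>

int main(void)
{
	unsigned int mask = 0xff;	/* DMA currently allowed on devices 0..7 */
	int i = 3;			/* we want to disable DMA for device 3 only */

	printf("%#x\n", mask & (1 << i));	/* 0x8: clears every bit EXCEPT bit 3 */
	printf("%#x\n", mask & ~(1 << i));	/* 0xf7: clears only bit 3, as intended */
	return 0;
}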
Signed-off-by: David Fries <david@fries.net>
[bart: s/disk/device/ in ide.c, beautify patch description]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* 'for-linus' of git://neil.brown.name/md:
md: avoid races when stopping resync.
md/raid10: Don't call bitmap_cond_end_sync when we are doing recovery.
md/raid10: Don't skip more than 1 bitmap-chunk at a time during recovery.
When iwlan runs on an IOMMU, the IOMMU generates a lot of PTE write faults
because the PTE write bit is not set on some of the PTEs. This is because
the iwlan driver calls DMA mapping with PCI_DMA_TODEVICE, which is read-only
in the mapping PTE, but the iwlan device actually writes to the mapped page
to update its contents. This issue is not exposed with swiotlb, but VT-d
hardware can capture this fault and stop the faulting transaction.
The following patch fixes the issue.
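The shape of the fix is presumably to map such buffers bidirectionally; a
hedged sketch (buffer and length names are placeholders, not the actual
iwlwifi code):

/* before: the device is only allowed to read the mapping, so its
 * writes to the page fault under VT-d */
dma_addr = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);

/* after: the device may both read and update the mapped page */
dma_addr = pci_map_single(pdev, buf, len, PCI_DMA_BIDIRECTIONAL);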
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Bhavesh Davda <bhavesh@vmware.com>
Tested-by: Chris Wright <chrisw@sous-sol.org>
Acked-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
io_mapping_create_wc should take a resource_size_t parameter in place of
unsigned long. With unsigned long, there is no way to map addresses above
4GB on i386/32-bit.
On x86, addresses above 4GB cannot be mapped on i386 without PAE. Return
an error in that case.
The patch also adds a structure for io_mapping that saves the base, size and
type on HAVE_ATOMIC_IOMAP archs, which can be used to verify the offset on
io_mapping_map calls.
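Roughly, the new prototype and the bookkeeping structure on
HAVE_ATOMIC_IOMAP archs look like this (a sketch, not the exact header):

struct io_mapping {
	resource_size_t base;
	unsigned long size;
	pgprot_t prot;
};

struct io_mapping *
io_mapping_create_wc(resource_size_t base, unsigned long size);

The saved base and size can then be used to sanity-check the offset passed
to the io_mapping_map calls.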
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Eric Anholt <eric@anholt.net>
Cc: Keith Packard <keithp@keithp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch adds two new device ids to the asix driver.
One comes directly from the asix driver on their web site; the other was
reported by Armani Liao as needed to get the driver to work properly
with the MSI X320.
Reported-by: Armani Liao <aliao@novell.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently there are two paths for filling rx buffer queues. One is
used during initialization and the other during runtime. This patch
removes ql_alloc_sbq_buffers() and ql_alloc_lbq_buffers() and replaces
them with a call to the runtime functions ql_update_lbq() and
ql_update_sbq().
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
RX Buffers are refilled in chunks of 16 at a time before notifying the
hardware with a register write. This can cause several writes to take
place in a given napi poll call. This change causes the write to take place
only once at the end of the call.
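A hedged sketch of the idea (the helper names here are hypothetical, not
the actual qlge functions): keep refilling in chunks inside the loop, but
touch the doorbell register only once per poll.

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	while (work_done < budget && example_rx_ring_has_work()) {
		work_done += example_process_rx();
		example_refill_rx_buffers();	/* chunks of 16, no register write */
	}

	example_notify_hw_of_new_buffers();	/* single register write, at the end */

	if (work_done < budget)
		napi_complete(napi);
	return work_done;
}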
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of taking/giving the hw semaphore repeatedly when iterating over
several frame-to-queue route settings, we have the caller hold it until
all are done.
This reduces PCI bus chatter and possible waits.
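The before/after pattern, sketched with hypothetical helper names (the real
driver's semaphore API may differ):

/* before: each iteration grabs and releases the hardware semaphore */
for (i = 0; i < n_routes; i++) {
	example_sem_lock(qdev);
	example_set_route(qdev, i);
	example_sem_unlock(qdev);
}

/* after: the caller holds the semaphore across the whole loop */
example_sem_lock(qdev);
for (i = 0; i < n_routes; i++)
	example_set_route(qdev, i);
example_sem_unlock(qdev);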
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of taking/giving the semaphore repeatedly when iterating over
several addresses, we have the caller hold it until all are done. This
reduces PCI bus chatter and possible waits.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Setting MAC addresses and routing frames to various queues will need to
be done in response to firmware events as well as during initialization.
This change encapsulates the facilities into a single call that can
later be made from other places.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function time_before() is more robust for comparing
jiffies against other values.
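The point is jiffies wraparound: a direct '<' comparison goes wrong when the
counter wraps, while time_before() does the comparison in signed arithmetic.
A standalone illustration, using a stand-in for the kernel macro:

#include <stdio.h>

/* same idea as include/linux/jiffies.h: wraparound-safe comparison */
#define time_before(a, b)	((long)((a) - (b)) < 0)

int main(void)
{
	unsigned long now = 10UL;	/* jiffies counter just wrapped past zero */
	unsigned long timeout = -5UL;	/* deadline set shortly before the wrap */

	printf("naive:       %d\n", now < timeout);		/* 1: wrongly "not yet expired" */
	printf("time_before: %d\n", time_before(now, timeout));	/* 0: correctly "expired" */
	return 0;
}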
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function time_before() is more robust for comparing
jiffies against other values.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function time_before() is more robust for comparing
jiffies against other values.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove some pointless conditionals before kfree_skb(); kfree_skb() already
handles a NULL argument, so the checks are redundant.
The semantic match that finds the problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression E;
@@
- if (E)
- kfree_skb(E);
+ kfree_skb(E);
// </smpl>
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch changes the return value of nlmsg_notify() as follows:
If NETLINK_BROADCAST_ERROR is set by any of the listeners and
an error in the delivery happened, return the broadcast error;
else if there are no listeners apart from the socket that
requested a change with the echo flag, return the result of the
unicast notification. Thus, with this patch, the unicast
notification is handled in the same way as a broadcast listener
that has set the NETLINK_BROADCAST_ERROR socket flag.
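The return-value logic described above amounts to something like this (a
sketch of the intent, not the literal patch):

/* broadcast first: err only reflects listeners that set
 * NETLINK_BROADCAST_ERROR */
err = nlmsg_multicast(sk, skb, exclude_pid, group, flags);

if (report) {
	int err2 = nlmsg_unicast(sk, skb, pid);

	/* no broadcast error (or no broadcast listeners at all, -ESRCH):
	 * report the unicast result instead */
	if (!err || err == -ESRCH)
		err = err2;
}

return err;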
This patch is useful when the caller of nlmsg_notify()
wants to know the result of the delivery of a netlink notification
(including the broadcast delivery) and take action if the
delivery failed. For example, ctnetlink can drop packets
if the event delivery failed to provide reliable logging and
state-synchronization at the cost of dropping packets.
This patch also modifies the rtnetlink code to ignore the return
value of rtnl_notify() in all callers. The function rtnl_notify()
(before this patch) returned the error of the unicast notification
which made rtnl_set_sk_err() report errors to all listeners. This
is not of any help since the origin of the change (the socket that
requested the echoing) notices the ENOBUFS error if the notification
fails and should resync itself.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
A receive coalescing timeout of 250 usec appears to strike a good
balance between allowing enough received frames to be aggregated for
LRO to do its job and not allowing the connection to stall due to
delaying ACKs to the remote end for too long.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the netif_carrier_off() call in ->open() to port probe, so that
ethtool doesn't report the link as being up before we have up'd the
interface.
Move initialisation of the rx/tx coalescing timers from ->open() to
port probe, so that we don't reset the coalescing timers every time
the interface is up'd.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
However, we still have another issue with ioremap_wc not falling back
properly, or somehow doing something else stupid; this probably needs
to be tracked down.
Signed-off-by: Dave Airlie <airlied@redhat.com>
edid->revision == 0 should be valid (at least, so the error message
indicates :) and Wikipedia seems to indicate that EDID 1.0 existed.
We can dump the entire check, since edid->revision is a u8, so
it can't ever be less than 0.
Marko reports in RH bz#476735 that his monitor claims to be
EDID 1.0, and therefore hits the check and is stuck at 800x600 because
of it.
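To make the reasoning concrete (a hypothetical form of the check, for
illustration only, not the exact drm_edid.c code):

u8 revision = edid->revision;

if (revision <= 0)	/* "< 0" can never be true for a u8... */
	return 0;	/* ...so this only rejects the valid revision 0 */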
Reported-by: Marko Ristola <marko.ristola@kolumbus.fi>
Signed-off-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
The first time we install a mode, the vblank will be disabled for a pipe
and so drm_vblank_get() in drm_vblank_pre_modeset() will fail. As we
unconditionally call drm_vblank_put() afterwards, the vblank reference
counter becomes unbalanced.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
In some cases we may receive a mode config that has a different
CRTC<->encoder map than the current configuration. In that case, we
need to disable any re-routed encoders before setting the mode,
otherwise they may not pick up the new CRTC (if the output types are
incompatible for example).
Tested-by: Kristian Høgsberg <krh@bitplanet.net>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
We've seen cases in the wild where the VBT sync data is wrong, so add
some code to fix it up in that case, taking care to make sure that the
total is greater than the sync end.
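One plausible form of such a fixup (field names are placeholders, not the
actual i915 code):

if (hsync_end >= htotal)	/* bogus VBT sync data */
	htotal = hsync_end + 1;	/* keep the total strictly greater than sync end */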
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Dave Airlie <airlied@linux.ie>
These are normal; we walk through different values looking for the right
one, so why flood the screen with messages?
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
There has been a race in raid10 and raid1 for a long time
which has only recently started showing up due to a scheduler change.
When a sync_read request finishes, as soon as reschedule_retry
is called, another thread can mark the resync request as having
completed, so md_do_sync can finish, ->stop can be called, and
->conf can be freed. So using conf after reschedule_retry is not
safe.
Similarly, when finishing a sync_write, calling md_done_sync must be
the last thing we do, as it allows a chain of events which will free
conf and other data structures.
The first of these requires action in raid10.c
The second requires action in raid1.c and raid10.c
Cc: stable@kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
For raid1/4/5/6, resync (fixing inconsistencies between devices) is
very similar to recovery (rebuilding a failed device onto a spare).
They both walk through the device addresses in order.
For raid10 it can be quite different. resync follows the 'array'
address, and makes sure all copies are the same. Recovery walks
through 'device' addresses and recreates each missing block.
The 'bitmap_cond_end_sync' function allows the write-intent-bitmap
(when present) to be updated to reflect a partially completed resync.
It makes assumptions which mean that it does not work correctly for
raid10 recovery at all.
In particular, it can cause bitmap-directed recovery of a raid10 to
not recover some of the blocks that need to be recovered.
So move the call to bitmap_cond_end_sync into the resync path, rather
than leaving it in the common "resync or recovery" path.
Cc: stable@kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
When doing recovery on a raid10 with a write-intent bitmap, we only
need to recover chunks that are flagged in the bitmap.
However, if we choose to skip a chunk because it isn't flagged, the code
currently skips the whole raid10 chunk; thus it might not recover
some blocks that need recovering.
This patch fixes it.
In case that is confusing, it might help to understand that there
is a 'raid10 chunk size' which guides how data is distributed across
the devices, and a 'bitmap chunk size' which says how much data
corresponds to a single bit in the bitmap.
This bug only affects cases where the bitmap chunk size is smaller
than the raid10 chunk size.
Cc: stable@kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>