Deactivating inbound checksumming on HiperSockets is a valid but
dangerous optimization when a frame is routed from an OSA network
to a HiperSockets network. To be safe, we change the default to
software checksumming.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Depending on their definition in z/VM, virtual devices for z/VM
VSWITCH or GuestLAN must be configured either in layer2 or in
layer3 mode. If qeth detects a layer mismatch, device activation
fails. Trying to recover from this error cannot help; thus
scheduling a recovery should be avoided.
In addition, since recovery is forbidden while a qeth device is being
set online, the existence of its network device is guaranteed for all
dev_close() calls in qeth. The corresponding checks can be removed.
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the HiperSockets Network Traffic Analyzer is switched off but trace
disabling somehow fails, the qeth driver does not switch off its
promiscuous mode status. A subsequent sniffer reactivation then fails,
since qeth sees no need to re-enable tracing.
At the same time, the code analyzing the results of trace commands is
restructured.
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
DHCP frames are valid IPv4 packets, so there is no need to send them
in pass-through mode. This allows DHCP packets to pass over HiperSockets.
The DHCP release frame is also sent out correctly with this patch.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Provide qeth kmsg definitions to enable hash string generation for
kernel messages created with dev_err().
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reinstates the lost release of BTN_TOUCH. To prevent undesirable
touch toggles, this also deals with tip switch events.
Added a trap to prevent going out of bounds for hidinputs with empty reports.
Clear bits of unused buttons which result in misidentification.
Signed-off-by: Rafi Rubin <rafi@seas.upenn.edu>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
We are taking a wrong regs snapshot when a trace event triggers.
Either we use get_irq_regs(), which gives us the interrupted
registers if we are in an interrupt, or we use task_pt_regs(),
which gives us the state before we entered the kernel, assuming
we are lucky enough not to be a kernel thread; otherwise
task_pt_regs() returns the initial set of regs from when the
kernel thread was started.
What we want is different. We need a hot snapshot of the regs,
so that we can get the instruction pointer to record in the
sample, the frame pointer for the callchain, and some other
things.
Let's use the new perf_fetch_caller_regs() for that.
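A minimal sketch of the intended call-site pattern (the surrounding
shape is illustrative; only perf_fetch_caller_regs() is the new API,
and its exact signature at the time of this patch may differ):

	struct pt_regs regs;

	/* take the hot snapshot right where the tracepoint fired, instead
	   of falling back to get_irq_regs()/task_pt_regs() */
	perf_fetch_caller_regs(&regs);

	/* &regs is then handed down the perf tracepoint path, so the sample
	   records the caller's instruction and frame pointers */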
Comparison with perf record -e lock: -R -a -f -g
Before:
perf [kernel] [k] __do_softirq
|
--- __do_softirq
|
|--55.16%-- __open
|
--44.84%-- __write_nocancel
After:
perf [kernel] [k] perf_tp_event
|
--- perf_tp_event
|
|--41.07%-- lock_acquire
| |
| |--39.36%-- _raw_spin_lock
| | |
| | |--7.81%-- hrtimer_interrupt
| | | smp_apic_timer_interrupt
| | | apic_timer_interrupt
The old case was producing unreliable callchains. Now, with the
right frame and instruction pointers, we have the trace we want.
Also, syscall and kprobe events already have the right regs;
let's use them instead of wasting a retrieval.
v2: Follow the rename perf_save_regs() -> perf_fetch_caller_regs()
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Archs <linux-arch@vger.kernel.org>
Events that trigger overflows by interrupting a context can
use get_irq_regs() or task_pt_regs() to retrieve the state
when the event triggered. But this is not the case for some
other classes of events, such as trace events, as tracepoints
are executed in the same context as the code that triggered
the event.
This means we need a different API to capture the regs there;
namely, we need a hot snapshot to get the most important
information for perf: the instruction pointer to get the
event origin, the frame pointer for the callchain, the code
segment for user_mode() tests (we always use __KERNEL_CS as
trace events always occur from the kernel) and the eflags
for further purposes.
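Roughly, the per-arch hook behind this only has to fill in those few
fields. A hedged, hypothetical illustration (not the actual
implementation) of what an x86-64 variant captures:

	/* hypothetical sketch of what the hot snapshot has to capture */
	static inline void fetch_caller_regs_sketch(struct pt_regs *regs,
						    unsigned long ip)
	{
		regs->ip = ip;			/* event origin */
		/* (approximately) the caller's frame, for the callchain */
		regs->bp = (unsigned long)__builtin_frame_address(0);
		regs->cs = __KERNEL_CS;		/* trace events fire in the kernel */
		local_save_flags(regs->flags);	/* eflags, for further purposes */
	}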
v2: rename perf_save_regs to perf_fetch_caller_regs as per
Masami's suggestion.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Archs <linux-arch@vger.kernel.org>
We were using the frame-pointer-based stack walker in every
context on x86-32, but not on x86-64, where we only used the
seven-league boots on the exception stacks.
Use it also on the irq and process stacks. This greatly
accelerates the captures.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
There are RCU-locked read-side sections in the path where we submit
a trace event, and these rcu_read_(un)lock() calls trigger lock
events, which create recursive events.
One pair in do_perf_sw_event:
__lock_acquire
|
|--96.11%-- lock_acquire
| |
| |--27.21%-- do_perf_sw_event
| | perf_tp_event
| | |
| | |--49.62%-- ftrace_profile_lock_release
| | | lock_release
| | | |
| | | |--33.85%-- _raw_spin_unlock
Another pair in perf_output_begin/end:
__lock_acquire
|--23.40%-- perf_output_begin
| | __perf_event_overflow
| | perf_swevent_overflow
| | perf_swevent_add
| | perf_swevent_ctx_event
| | do_perf_sw_event
| | perf_tp_event
| | |
| | |--55.37%-- ftrace_profile_lock_acquire
| | | lock_acquire
| | | |
| | | |--37.31%-- _raw_spin_lock
The problem is not so much the trace recursion itself, as we already
have recursion protection (though it is always wasteful to recurse).
But the trace events are outside the lockdep recursion protection, so
each lockdep event triggers a lock trace, which in turn triggers two
other lockdep events. Here the recursive lock trace event won't
be taken because of the trace recursion, so the recursion stops there,
but lockdep will still analyse these new events.
To sum up, for each lockdep event we have:
lock_*()
|
trace lock_acquire
|
----- rcu_read_lock()
| |
| lock_acquire()
| |
| trace_lock_acquire() (stopped)
| |
| lockdep analyze
|
----- rcu_read_unlock()
|
lock_release
|
trace_lock_release() (stopped)
|
lockdep analyze
And the above repeats twice, as we have two RCU read-side
sections when we submit an event.
This is fixed in this patch by moving the lock trace event under
the lockdep recursion protection.
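The resulting shape of lock_acquire() looks roughly like this
(simplified sketch, not the exact diff; lockdep's own processing is
elided to a comment):

	void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
			  int trylock, int read, int check,
			  struct lockdep_map *nest_lock, unsigned long ip)
	{
		unsigned long flags;

		raw_local_irq_save(flags);

		current->lockdep_recursion = 1;
		/* the tracepoint, and the rcu_read_lock()/unlock() it takes,
		   now runs while lockdep ignores this context, so it cannot
		   feed new lock events back into lockdep */
		trace_lock_acquire(lock, subclass, trylock, read, check,
				   nest_lock, ip);
		/* ... lockdep's usual __lock_acquire() processing ... */
		current->lockdep_recursion = 0;

		raw_local_irq_restore(flags);
	}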
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
If -vv is used, just the map table will be printed; -vvv will
print the symbol table too. With it we can see that we have a
bug where some samples are not resolved to a map when we get
them in the perf.data stream, yet after we have processed it
all, we can find the right map; some reordering is probably
happening.
Upcoming patches will provide ways to ask for most PERF_SAMPLE_
conditional samples to be taken for !PERF_RECORD_SAMPLE events
too; then we'll be able to ask for PERF_SAMPLE_TIME and
PERF_SAMPLE_CPU to help diagnose this.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1268161097-17761-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Perf report does not handle multiple events being reported, even
though perf record stores them properly on disk. This patch
addresses that issue by adding the logic to perf report to use
the event stream id that is saved by record and the new data
structures to separate the event streams and report them
individually.
Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1267804269-22660-6-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Now that report can store histograms for multiple events, we
need to be able to do the post-processing work for each
histogram. This patch changes the post-processing functions so
that they can be called individually for each event's histogram.
Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
[ Guarantee bisectabilty by fixing up builtin-report.c ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1267804269-22660-5-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently, perf record does not write the ID to disk for events.
This doesn't allow report to tell whether an event stream
contains one or more types of events. This patch adds this
entry to the list of data that record will write to disk if more
than one event was requested.
Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1267804269-22660-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
hw_perf_enable() would enable already enabled events.
This causes problems with code that assumes that ->enable/->disable calls
are balanced (like the LBR code does).
What happens is that events that were already running and left in place
would get enabled again.
Avoid this by only enabling new events that match their previous
assignment.
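As a hedged sketch of the second (enable) pass, simplified from the
x86 code and illustrative rather than the exact diff:

	for (i = 0; i < cpuc->n_events; i++) {
		struct perf_event *event = cpuc->event_list[i];

		/* an event whose counter assignment is unchanged was left
		   running and never stopped, so don't ->enable() it again */
		if (match_prev_assignment(&event->hw, cpuc, i))
			continue;

		x86_assign_hw_event(event, cpuc, i);
		x86_pmu_start(event);
	}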
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: paulus@samba.org
Cc: eranian@google.com
Cc: robert.richter@amd.com
Cc: fweisbec@gmail.com
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
hw_perf_enable() would disable events that were not yet enabled.
This causes problems with code that assumes that ->enable/->disable calls
are balanced (like the LBR code does).
What happens is that we disable newly added counters that match their
previous assignment, even though they are not yet programmed on the
hardware.
Avoid this by only doing the first pass over the existing events.
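A hedged sketch of the corrected first (disable) pass, again
simplified from the x86 code:

	/* only the events that were programmed before this call; the
	   cpuc->n_added new ones at the tail have nothing to disable yet */
	int n_running = cpuc->n_events - cpuc->n_added;

	for (i = 0; i < n_running; i++) {
		struct perf_event *event = cpuc->event_list[i];

		/* keeps its counter across the reschedule: leave it running */
		if (match_prev_assignment(&event->hw, cpuc, i))
			continue;

		x86_pmu_stop(event);
	}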
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: paulus@samba.org
Cc: eranian@google.com
Cc: robert.richter@amd.com
Cc: fweisbec@gmail.com
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
In commit 2b0d8c251b, __early_param was replaced with the generic
early_param. This patch fixes the parameter passing for vram.
Signed-off-by: Thomas Weber <weber@corscience.de>
[tomi.valkeinen@nokia.com: changed the commit prefix]
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
pmb_bolt_mapping() is undefined on 29-bit builds, so provide a stub.
This fixes up the NUMA build on platforms lacking PMB support.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The time interval between consecutive interrupts depends on a number of
tunables: first_conversion_delay, acquisition_time, averaging and foremost
the pen_down_acc_interval.
Since the mod_timer() action for the PEN UP event happens in the
spi_async() callback function, latencies incurred by the spi bus drivers
also need to be taken into account.
So, all in all, give the PEN UP event a bit more wiggle room and
increase the timeout to 100 ms.
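In driver terms, this is just the re-arm value used when the pen-up
timer is kicked from the SPI completion path; a hedged sketch, with the
"ts" handle and the macro name being illustrative rather than the
driver's own:

	#define TS_PEN_UP_TIMEOUT_MS	100

	/* re-armed from the spi_async() completion callback, so the value
	   has to absorb spi bus latencies on top of the conversion and
	   averaging delays */
	mod_timer(&ts->timer, jiffies + msecs_to_jiffies(TS_PEN_UP_TIMEOUT_MS));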
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
The AD7873 is almost identical to the ADS7846; the only difference is
related to the Power Management bits PD0 and PD1. This results in a
slightly different PENIRQ enable behavior. For the AD7873, VREF should
be turned off during differential measurements.
So, add the AD7873/43 to the list of devices supported by the driver,
and prevent VREF usage during differential/ratiometric conversion modes.
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
We need to use the nosync version of disable_irq so that we don't hang in
the IRQ handler as we don't ACK the interrupt until later. This used to
work regardless, but at some point, the IRQ behavior changed. Not sure
when exactly.
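A hedged, generic sketch of the pattern (the handler name and deferral
details are illustrative; the point is only the nosync call):

	static irqreturn_t pen_irq_sketch(int irq, void *dev_id)
	{
		/*
		 * We are still inside this handler, so the synchronous
		 * disable_irq() (which waits for running handlers to finish)
		 * would hang right here.  The nosync variant only masks the
		 * line; the interrupt is ACKed later, from the deferred
		 * spi_async() completion path.
		 */
		disable_irq_nosync(irq);

		/* kick off the deferred sample read here */

		return IRQ_HANDLED;
	}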
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Holding the BKL in input_open_file seems pointless because it does not
protect against updates of input_table, and all open functions from the
underlying drivers have proper mutex locking.
This makes input_open_file take the input_mutex when accessing
the table and no lock when calling into the lower function.
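A simplified, hedged sketch of the resulting locking in
input_open_file() (error handling trimmed; the lookup shape follows
drivers/input/input.c conventions):

	static int input_open_file(struct inode *inode, struct file *file)
	{
		struct input_handler *handler;
		const struct file_operations *new_fops = NULL;
		int err;

		/* input_mutex only guards the input_table lookup ... */
		err = mutex_lock_interruptible(&input_mutex);
		if (err)
			return err;

		handler = input_table[iminor(inode) >> 5];
		if (handler)
			new_fops = fops_get(handler->fops);

		mutex_unlock(&input_mutex);

		if (!new_fops || !new_fops->open) {
			fops_put(new_fops);
			return -ENODEV;
		}

		/* ... and is not held while calling into the lower open(),
		   which takes its own locks */
		file->f_op = new_fops;
		return new_fops->open(inode, file);
	}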
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
The serio_raw open function already uses a mutex. Also change the
formatting a bit.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
There's no need for the BKL in mousedev; the relevant protection is
provided by a private mutex.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Commit 2b0d8c251b changed ARM to use
the common early_param code. Fix compile for Touch Book accordingly.
Signed-off-by: Tony Lindgren <tony@atomide.com>
This reverts commit 21b2d8bd2f.
As explained by Johannes:
When we
build a probe request frame in the buffer with the SSID, we could
arrive over the limit of 200 bytes. When we build it in the buffer
without the SSID (wildcard) we don't arrive over 200 bytes, but the
ucode still allows direct probe in addition because it has an internal
buffer that is larger when it inserts the SSID...
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Recent patch "iwlwifi: move 3945 clip groups to 3945 data" exposed a memory
corruption problem. When initializing the clip groups the code was
mistakenly using the iwlagn rate count, not the 3945 rate count. This
resulted in more memory being written than was allocated.
"iwlwifi: move 3945 clip groups to 3945 data" moved the location where the
clip groups are stored and the impact is now severe in that the number of
configured TX queues is modified. Previously the
"temperature" field was overwritten, which did not seem to affect the
operation.
Fix this one instance where the wrong rate count was used. I also noticed one
more location where the iwlagn rate count was used to index an iwl3945
array; fix this as well. I also modified one location that adjusted the
iwlagn rate count to obtain the iwl3945 rate count ... just use the
iwl3945 rate count directly.
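The bug class, as a hedged illustration (only the two rate-count
constants are the driver's; the array and helper are made up):

	s8 power[IWL_RATE_COUNT_3945];		/* sized for the 3945 rate table */
	int i;

	/* BUG: IWL_RATE_COUNT is the (larger) iwlagn count, so this loop
	   writes past the end of the allocation */
	for (i = 0; i < IWL_RATE_COUNT; i++)
		power[i] = compute_clip_power(i);

	/* fixed: the loop bound matches the iwl3945 array it fills */
	for (i = 0; i < IWL_RATE_COUNT_3945; i++)
		power[i] = compute_clip_power(i);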
This fixes http://bugzilla.intellinuxwireless.org/show_bug.cgi?id=2165 and
http://bugzilla.intellinuxwireless.org/show_bug.cgi?id=2168
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>