Commit Graph

288 Commits

Dan Williams
56adf7e812 shdma: fix initialization error handling
1/ Error handling code following a kzalloc should free the allocated data.
2/ Report an error when no platform data is detected.

Both problems are fixed by moving the platform data check before the
allocation, which also allows a goto to be killed.

Reported-by: Julia Lawall <julia@diku.dk>
Acked-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-22 12:10:10 -07:00
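
A minimal sketch of the probe flow the fix describes, assuming
driver-local type names like sh_dmae_pdata / sh_dmae_device
(illustrative, not necessarily the exact merged code):

    static int sh_dmae_probe(struct platform_device *pdev)
    {
            struct sh_dmae_pdata *pdata = pdev->dev.platform_data;
            struct sh_dmae_device *shdev;

            /* 2/ report missing platform data, before any allocation... */
            if (!pdata) {
                    dev_err(&pdev->dev, "no platform data\n");
                    return -ENODEV;
            }

            /* ...so 1/ a failed kzalloc has nothing to free, and the
             * old error goto can be dropped */
            shdev = kzalloc(sizeof(*shdev), GFP_KERNEL);
            if (!shdev)
                    return -ENOMEM;

            platform_set_drvdata(pdev, shdev);
            return 0;
    }
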
Dan Williams
49954c1567 ioat3: fix pq completion versus channel deallocation race
The completion of a pq operation is notified with a null descriptor
appended to the end of the chain.  This descriptor needs to be visible
to dma clients; otherwise the client is precluded from ensuring all
operations are quiesced before freeing channel resources, i.e. due to
descriptor polling it may receive the completion notification ahead of
the interrupt delivered by the null descriptor.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 23:21:03 -07:00
Dan Williams
7b3cc2b1fc async_tx: build-time toggling of async_{syndrome,xor}_val dma support
ioat3.2 does not support asynchronous error notifications, which makes
the driver experience latencies when non-zero pq validate results are
expected.  Provide a mechanism for turning off async_xor_val and
async_syndrome_val via Kconfig.  This approach is generally useful for
any driver that specifies ASYNC_TX_DISABLE_CHANNEL_SWITCH and would like
to force the async_tx api to fall back to the synchronous path for
certain operations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 23:21:03 -07:00
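
The mechanism boils down to compiling the channel lookup away; a sketch,
assuming a Kconfig symbol named along these lines (the exact symbol name
is an assumption here):

    #ifdef CONFIG_ASYNC_TX_DISABLE_XOR_VAL_DMA  /* assumed name */
    static inline struct dma_chan *
    xor_val_chan(struct async_submit_ctl *submit)
    {
            return NULL;    /* no channel: async_xor_val runs synchronously */
    }
    #else
    static inline struct dma_chan *
    xor_val_chan(struct async_submit_ctl *submit)
    {
            return async_tx_find_channel(submit, DMA_XOR_VAL,
                                         NULL, 0, NULL, 0, 0);
    }
    #endif
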
Dan Williams
4499a24dec dmaengine: include xor/pq validate in device_has_all_tx_types()
A channel must include these capabilities to satisfy
ASYNC_TX_DISABLE_CHANNEL_SWITCH.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 23:21:03 -07:00
Dan Williams
b57014def9 ioat2,3: report all uncorrectable errors
Modify is_ioat_bug() to catch all errors that are uncorrectable or not
currently handled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 23:21:03 -07:00
Dan Williams
de581b65f6 ioat3: specify valid address for disabled-Q or disabled-P
Although disabled, the hardware still checks address validity, so
duplicate the known address.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 17:08:45 -07:00
Dan Williams
6f82b83b7a ioat2,3: disable asynchronous error notifications
Error interrupts and error completions may cause channel hangs, so
poll the channel status register after a timeout.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 17:07:57 -07:00
Dan Williams
228c4f5cfb ioat3: dca and raid operations are incompatible
RAID operations cause a system hang on platforms with DCA
(Direct-Cache-Access) enabled.  So turn off RAID capabilities in this
case.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-19 17:07:10 -07:00
Dan Williams
e22dde9904 ioat: silence "dca disabled" messages
Turning off dca is not an "error", and the dca-enabled state can be
viewed from sysfs.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-11-17 11:34:31 -07:00
NeilBrown
4b3df5668c Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx into for-linus 2009-09-23 18:31:11 +10:00
Dan Williams
cdef57dbb6 ioat3: fix uninitialized var warnings
drivers/dma/ioat/dma_v3.c: In function 'ioat3_prep_memset_lock':
drivers/dma/ioat/dma_v3.c:439: warning: 'fill' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c:437: warning: 'desc' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c: In function '__ioat3_prep_xor_lock':
drivers/dma/ioat/dma_v3.c:489: warning: 'xor' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c:486: warning: 'desc' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c: In function '__ioat3_prep_pq_lock':
drivers/dma/ioat/dma_v3.c:631: warning: 'pq' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c:628: warning: 'desc' may be used uninitialized in this function

gcc-4.0, unlike gcc-4.3, does not see that these variables are
initialized before use.  Convert the descriptor loops to do-while to
make this initialization apparent.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-21 09:22:29 -07:00
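
The conversion is mechanical; a sketch of the idea, with the loop body
reduced to a hypothetical next_desc() helper:

    /* before: gcc-4.0 cannot rule out zero iterations, so 'desc'
     * looks potentially uninitialized after the loop */
    for (i = 0; i < num_descs; i++)
            desc = next_desc(ioat, i);

    /* after: the do-while executes the first assignment
     * unconditionally, making the initialization apparent */
    i = 0;
    do {
            desc = next_desc(ioat, i);
    } while (++i < num_descs);
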
Andrew Morton
f477f5b331 drivers/dma/ioat/dma_v2.c: fix warnings
drivers/dma/ioat/dma_v2.c: In function 'ioat2_dma_prep_memcpy_lock':
drivers/dma/ioat/dma_v2.c:680: warning: 'hw' may be used uninitialized in this function
drivers/dma/ioat/dma_v2.c:681: warning: 'desc' may be used uninitialized in this function

Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-21 09:17:58 -07:00
Dan Williams
376ec37667 ioat2: clarify ring size limits
With the addition of ioat_max_alloc_order it is not clear what the
maximum allocation order is, so document that in the modinfo.  Also take
the opportunity to kill a stray semicolon.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-16 15:16:50 -07:00
Dan Williams
33f82d141c at_hdmac: Rework suspend_late()/resume_early()
This patch reworks platform driver power management code
for at_hdmac from legacy late/early callbacks to dev_pm_ops.

The callbacks are converted for CONFIG_SUSPEND like this:
  suspend_late() -> suspend_noirq()
  resume_early() -> resume_noirq()

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2009-09-14 20:27:00 +02:00
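
A sketch of the shape of the conversion (handler names illustrative):

    static int at_dma_suspend_noirq(struct device *dev)
    {
            /* body of the old suspend_late() callback */
            return 0;
    }

    static int at_dma_resume_noirq(struct device *dev)
    {
            /* body of the old resume_early() callback */
            return 0;
    }

    static const struct dev_pm_ops at_dma_dev_pm_ops = {
            .suspend_noirq = at_dma_suspend_noirq,
            .resume_noirq  = at_dma_resume_noirq,
    };

    static struct platform_driver at_dma_driver = {
            .driver = {
                    .name = "at_hdmac",
                    .pm   = &at_dma_dev_pm_ops,
            },
    };
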
Dan Williams
3208ca52f3 ioat: driver version 4.0
A new ring implementation and the addition of raid functionality
constitutes a bump in the driver major version number.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-10 11:27:36 -07:00
Maciej Sosnowski
1a5aeeecd5 dca: registering requesters in multiple dca domains
This patch enables DCA support on multiple-IOH/multiple-IIO architectures.
It modifies the dca module by replacing the single dca_providers list
with a dca_domains list, each domain containing a separate list of
providers.  This approach lets the dca driver manage multiple domains,
i.e. sets of providers and requesters mapped back to the same PCI root
complex device.  The driver takes care to register each requester with
a provider from the same domain.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
2009-09-10 10:00:05 -07:00
Dan Williams
9a8de639f3 async_tx: remove HIGHMEM64G restriction
This restriction prevented ASYNC_TX_DMA from being enabled on platform
configurations where DMA address conversion could not be performed in
place on the stack.  Since commit 04ce9ab3 ("async_xor: permit callers
to pass in a 'dma/page scribble' region") the async_tx api now either
uses a caller provided 'scribble' buffer, or performs the conversion in
place when sizeof(dma_addr_t) <= sizeof(struct page *).

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:56:37 -07:00
Nobuhiro Iwamatsu
d8902adcc1 dmaengine: sh: Add Support SuperH DMA Engine driver
This supports all DMA channels, and it has been tested on SH7722,
SH7780, SH7785 and SH7763.  It cannot be used together with the SH
DMA API.

Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Reviewed-by: Matt Fleming <matt@console-pimps.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:56:02 -07:00
Dan Williams
bbb20089a3 Merge branch 'dmaengine' into async-tx-next
Conflicts:
	crypto/async_tx/async_xor.c
	drivers/dma/ioat/dma_v2.h
	drivers/dma/ioat/pci.c
	drivers/md/raid5.c
2009-09-08 17:55:21 -07:00
Dan Williams
3e48e65690 Merge branch 'iop-raid6' into async-tx-next 2009-09-08 17:53:57 -07:00
Atsushi Nemoto
657a77fa72 dmaengine: Move all map_sg/unmap_sg for slave channel to its client
Dan Williams wrote:
... DMA-slave clients request specific channels and know the hardware
details at a low level, so it should not be too high an expectation to
push dma mapping responsibility to the client.

Also this patch includes DMA_COMPL_{SRC,DEST}_UNMAP_SINGLE support for
dw_dmac driver.

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:05 -07:00
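
Roughly what this asks of a slave client, sketched with placeholder
names (direction and flags depend on the transfer):

    static struct dma_async_tx_descriptor *
    client_prep_tx(struct dma_chan *chan, struct scatterlist *sgl,
                   unsigned int sg_len)
    {
            int nents;

            /* the client knows the slave hardware, so it now maps the
             * scatterlist itself rather than relying on the dma driver */
            nents = dma_map_sg(chan->device->dev, sgl, sg_len,
                               DMA_TO_DEVICE);
            if (!nents)
                    return NULL;

            return chan->device->device_prep_slave_sg(chan, sgl, nents,
                                                      DMA_TO_DEVICE, 0);
    }
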
Ira Snyder
bbea0b6e0d fsldma: Add DMA_SLAVE support
Use the DMA_SLAVE capability of the DMAEngine API to copy to/from a
scatterlist into an arbitrary list of hardware address/length pairs.

This allows a single DMA transaction to copy data from several different
devices into a scatterlist at the same time.

This also adds support to enable some controller-specific features such as
external start and external pause for a DMA transaction.

[dan.j.williams@intel.com: rebased on tx_list movement]
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Acked-by: Li Yang <leoli@freescale.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:04 -07:00
Ira Snyder
e6c7ecb64e fsldma: split apart external pause and request count features
When using the Freescale DMA controller in external control mode, both the
request count and external pause bits need to be set up correctly. This was
being done with the same function.

The 83xx controller lacks the external pause feature, but has a similar
feature called external start. This feature requires that the request count
bits be set up correctly.

Split the function into two parts, to make it possible to use the external
start feature on the 83xx controller.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:04 -07:00
Dan Williams
162b96e63e ioat2,3: cacheline align software descriptor allocations
All the necessary fields for handling an ioat2,3 ring entry can fit into
one cacheline.  Move ->len prior to ->txd in struct ioat_ring_ent, and
move allocation of these entries to a hw-cache-aligned kmem cache to
reduce the number of cachelines dirtied for descriptor management.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:04 -07:00
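
A sketch of the allocation side (cache name assumed):

    static struct kmem_cache *ioat2_cache;

    static int __init ioat_init(void)
    {
            /* hw-cache-aligned slab: a ring entry no longer straddles
             * or shares cachelines */
            ioat2_cache = kmem_cache_create("ioat2",
                                            sizeof(struct ioat_ring_ent),
                                            0, SLAB_HWCACHE_ALIGN, NULL);
            return ioat2_cache ? 0 : -ENOMEM;
    }

Ring entries then come from kmem_cache_zalloc(ioat2_cache, GFP_KERNEL)
rather than a plain kzalloc().
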
Dan Williams
0803172778 dmaengine: kill tx_list
The tx_list attribute of struct dma_async_tx_descriptor is common to
most, but not all dma driver implementations.  None of the upper level
code (dmaengine/async_tx) uses it, so allow drivers to implement it
locally if they need it.  This saves sizeof(struct list_head) bytes for
drivers that do not manage descriptors with a linked list (e.g.: ioatdma
v2,3).

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:04 -07:00
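
The follow-on commits below all apply the same pattern, sketched here
with a hypothetical driver 'foo':

    /* the chain list moves out of the generic descriptor and into the
     * driver's own wrapper, invisible to the dmaengine core */
    struct foo_desc {
            struct dma_async_tx_descriptor txd;
            struct list_head tx_list;       /* now driver-private */
    };

    static inline struct foo_desc *
    txd_to_foo_desc(struct dma_async_tx_descriptor *txd)
    {
            return container_of(txd, struct foo_desc, txd);
    }
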
Dan Williams
1979b186b8 txx9dmac: implement a private tx_list
Drop txx9dmac's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Cc: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:03 -07:00
Dan Williams
285a3c7164 at_hdmac: implement a private tx_list
Drop at_hdmac's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:03 -07:00
Dan Williams
64203b6727 mv_xor: implement a private tx_list
Drop mv_xor's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Cc: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:03 -07:00
Dan Williams
ea25968a32 ioat: implement a private tx_list
Drop ioatdma's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:02 -07:00
Dan Williams
308136d1ab iop-adma: implement a private tx_list
Drop iop-adma's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:02 -07:00
Dan Williams
eda3423457 fsldma: implement a private tx_list
Drop fsldma's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Cc: Li Yang <leoli@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:02 -07:00
Dan Williams
e0bd0f8cb0 dw_dmac: implement a private tx_list
Drop dw_dmac's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.

Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:02 -07:00
Dan Williams
e12c4fa377 Merge branch 'ioat' into dmaengine 2009-09-08 17:52:57 -07:00
Roland Dreier
a6417dd58d I/OAT: Convert to PCI_VDEVICE()
Trivial cleanup to make the PCI ID table easier to read.

[dan.j.williams@intel.com: extended to v3.2 devices]
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:43:03 -07:00
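
The cleanup is a one-for-one macro substitution; sketched (table name
illustrative):

    static struct pci_device_id ioat_pci_tbl[] = {
            /* was: { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT,
             *        PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, */
            { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
            { 0, }
    };
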
Roland Dreier
6506cbca6b Add MODULE_DEVICE_TABLE() so ioatdma module is autoloaded
The ioatdma module is missing aliases for the PCI devices it supports,
so it is not autoloaded on boot.  Add a MODULE_DEVICE_TABLE() to get
these aliases.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:43:03 -07:00
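
The fix itself is one line against the driver's existing id table
(ioat_pci_tbl as sketched above):

    /* emits the pci:v00008086d... aliases that modprobe matches at boot */
    MODULE_DEVICE_TABLE(pci, ioat_pci_tbl);
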
Dan Williams
e3232714d4 ioat3: segregate raid engines
The cleanup routine for the raid cases imposes extra checks for handling
raid descriptors and extended descriptors.  If the channel does not
support raid it can avoid this extra overhead by using the ioat2 cleanup
path.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:43:02 -07:00
Tom Picard
b265b11fc1 ioat3: ioat3.2 pci ids for Jasper Forest
Jasper Forest introduces raid offload support via ioat3.2.  When raid
offload is enabled, two (out of 8) channels will report raid5/raid6
offload capabilities.  The remaining channels will only report ioat3.0
capabilities (memcpy).

Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:43:01 -07:00
Dan Williams
58c8649e0e ioat3: interrupt descriptor support
The async_tx api uses the DMA_INTERRUPT operation type to terminate a
chain of issued operations with a callback routine.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:43:00 -07:00
Dan Williams
ae786624c2 ioat3: support xor via pq descriptors
If a platform advertises pq capabilities, but not xor, then use
ioat3_prep_pqxor and ioat3_prep_pqxor_val to simulate xor support.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:43:00 -07:00
Dan Williams
d69d235b7d ioat3: pq support
ioat3.2 adds support for raid6 syndrome generation (xor sum of galois
field multiplication products) using up to 8 sources.  It can also
perform a pq-zero-sum operation to validate whether the syndrome for a
given set of sources matches a previously computed syndrome.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:59 -07:00
Dan Williams
9de6fc717b ioat3: xor self test
This adds a hardware specific self test to be called from ioat_probe.
In the ioat3 case we will have tests for all the different raid
operations, while ioat1 and ioat2 will continue to just test memcpy.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:58 -07:00
Dan Williams
b094ad3be5 ioat3: xor support
ioat3.2 adds xor offload support for up to 8 sources.  It can also
perform an xor-zero-sum operation to validate whether all given sources
sum to zero, without writing to a destination.  Xor descriptors differ
from memcpy in that one operation may require multiple descriptors
depending on the number of sources.  When the number of sources exceeds
5 an extended descriptor is needed.  These descriptors need to be
accounted for when updating the DMA_COUNT register.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:57 -07:00
Dan Williams
e61dacaeb3 ioat3: enable dca for completion writes
Tag completion writes for direct cache access to reduce the latency of
checking for descriptor completions.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:57 -07:00
Dan Williams
5669e31c5a ioat: add 'ioat' sysfs attributes
Export driver attributes for diagnostic purposes:
'ring_size': total number of descriptors available to the engine
'ring_active': number of descriptors in-flight
'capabilities': supported operation types for this channel
'version': Intel(R) QuickData specification revision

This also allows some chattiness to be removed from the driver startup
as this information is now available via sysfs.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:56 -07:00
Dan Williams
bf40a6869c ioat3: split ioat3 support to its own file, add memset
Up until this point the driver support for Intel(R) QuickData Technology
engines, specification versions 2 and 3, was mostly identical save for
a few quirks.  Version 3.2 hardware adds many new capabilities (like
raid offload support) requiring some infrastructure that is not relevant
for v2.  For better code organization of the new functionality move v3
and v3.2 support to its own file, dma_v3.c, and export some routines from
the base files (dma.c and dma_v2.c) that can be reused directly.

The first new capability included in this code reorganization is support
for v3.2 memset operations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:55 -07:00
Dan Williams
2aec048cdc ioat3: hardware version 3.2 register / descriptor definitions
ioat3.2 adds raid5 and raid6 offload capabilities.

Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:54 -07:00
Dan Williams
128f2d567f ioat2+: add fence support
In preparation for adding more operation types to the ioat3 path the
driver needs to honor the DMA_PREP_FENCE flag.  For example, the
async_tx api will hand xor->memcpy->xor chains to the driver with the
'fence' flag set on the first xor and the memcpy operation.  This flag
in turn sets the 'fence' flag in the descriptor control field, telling
the hardware that future descriptors in the chain depend on the result
of the current descriptor, so it must wait for all writes to complete
before starting the next operation.

Note that ioat1 does not prefetch the descriptor chain, so it does not
require/support fenced operations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:53 -07:00
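
A sketch of honoring the flag while filling a descriptor (the 'fence'
control-field name comes from the changelog; the exact struct layout is
assumed):

    static void ioat_set_fence(struct ioat_dma_descriptor *hw,
                               unsigned long flags)
    {
            /* hardware drains this descriptor's writes before it
             * starts the next descriptor in the chain */
            hw->ctl_f.fence = !!(flags & DMA_PREP_FENCE);
    }
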
Dan Williams
83544ae9f3 dmaengine, async_tx: support alignment checks
Some engines have transfer size and address alignment restrictions.  Add
a per-operation alignment property to struct dma_device that the async
routines and dmatest can use to check alignment capabilities.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:53 -07:00
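
The property is stored as a log2 shift; a sketch of the kind of check
this enables (helper name illustrative, copy_align being the
per-operation alignment field):

    static bool dma_copy_is_aligned(struct dma_device *dev, size_t off1,
                                    size_t off2, size_t len)
    {
            size_t mask = (1 << dev->copy_align) - 1;

            /* both offsets and the length must sit on the boundary */
            return ((off1 | off2 | len) & mask) == 0;
    }
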
Dan Williams
9308add6ea dmaengine: cleanup unused transaction types
No drivers currently implement these operation types, so they can be
deleted.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:52 -07:00
Dan Williams
138f4c359d dmaengine, async_tx: add a "no channel switch" allocator
Channel switching is problematic for some dmaengine drivers as the
architecture precludes separating the ->prep from ->submit.  In these
cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
the async_tx allocator to only return channels that support all of the
required asynchronous operations.

For example MD_RAID456=y selects support for asynchronous xor, xor
validate, pq, pq validate, and memcpy.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=y, any channel with all these
capabilities is marked DMA_ASYNC_TX, allowing async_tx_find_channel()
to quickly locate compatible channels with the guarantee that dependency
chains will remain on one channel.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=n, async_tx_find_channel() may select
channels that lead to operation chains that need to cross channel
boundaries using the async_tx channel switch capability.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:51 -07:00
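
A sketch of the capability test behind the DMA_ASYNC_TX marking,
mirroring the device_has_all_tx_types() idea a few commits up
(capability list per the MD_RAID456 example):

    /* mark the channel only if every required op is present */
    if (dma_has_cap(DMA_MEMCPY, device->cap_mask) &&
        dma_has_cap(DMA_XOR, device->cap_mask) &&
        dma_has_cap(DMA_XOR_VAL, device->cap_mask) &&
        dma_has_cap(DMA_PQ, device->cap_mask) &&
        dma_has_cap(DMA_PQ_VAL, device->cap_mask))
            dma_cap_set(DMA_ASYNC_TX, device->cap_mask);
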