ARM: SoC-related driver updates

Various driver updates for platforms and a couple of the small driver
 subsystems we merge through our tree:
 
  - A driver for SCU (system control) on NXP i.MX8QXP
  - Qualcomm Always-on Subsystem messaging driver (AOSS QMP)
  - Qualcomm PM support for MSM8998
  - Support for a newer version of DRAM PHY driver for Broadcom (DPFE)
  - Reset controller support for Bitmain BM1880
  - TI SCI (System Control Interface) support for CPU control on AM654
    processors
  - More TI sysc refactoring and rework
 -----BEGIN PGP SIGNATURE-----
 
 iQJDBAABCAAtFiEElf+HevZ4QCAJmMQ+jBrnPN6EHHcFAl0yK3YPHG9sb2ZAbGl4
 b20ubmV0AAoJEIwa5zzehBx3WdUQAJEFRzY4+8VfsUspKmGwzHsrk7t1038JUEDE
 VL3yYlvSGeHg5a58AI5PCR5ZCsyPK7Yw9cAcYexd0frFR7BCwKWrjqem0Lb5ovdK
 CYM517DRtYPSBMF08Xw4pbZlT0yg65F1e9cf6BlUpkUZ6lJn4gUy8Y4BE6Aw/zuF
 QKtQNs6Q8BUZqS3uoOpJ/PY4JiUmLPQPO4Lry7Lud8Z7qgArCC326paC3wwqjLoC
 TpoMqb6izt7Vzo4BtTo5TUCyiEFZDlb/thhDySVlYRE7DQJusHBvRO9qgjI2ahOo
 1/935q1fJO7S6+Yvc8DIzrD/DrIUOvOshi31F/J6iWKkQkTUxtQwsVReZKaiOfSD
 fYxNVCgTcMS6ailKQSMQ0SYgXDa2gWdV3tS9XU8qML3tnDthi1nDmZks0QAAnFPS
 bXRcWGtgqeQJ+QJ7yyKrsD9POeaq3Hc5/f1DN34H//Cyn0ip/fD6fkLCMIfUDwmu
 TmO2Mnj6/fG/iBK+ToF+DaJ0/u3RiV2MC2vCE+0m3cVI9jtq9iA1y3UlmoaKUhhC
 t9znA+u8/Jc5S2zNQriI2Ja5q8nKfihL7Jf68ENvGzLc7YuAqP6yx1LMg1g6Wshc
 nLT+kHOF6DCUC3W7a8VuNyaxCwVtTbNTti+nvQVOmV6eaGiD5vzpXkHBWMbOJ7Lh
 YOBwGyb4
 =ek+j
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC-related driver updates from Olof Johansson:
 "Various driver updates for platforms and a couple of the small driver
  subsystems we merge through our tree:

   - A driver for SCU (system control) on NXP i.MX8QXP

   - Qualcomm Always-on Subsystem messaging driver (AOSS QMP)

   - Qualcomm PM support for MSM8998

   - Support for a newer version of DRAM PHY driver for Broadcom (DPFE)

   - Reset controller support for Bitmain BM1880

   - TI SCI (System Control Interface) support for CPU control on AM654
     processors

   - More TI sysc refactoring and rework"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (84 commits)
  reset: remove redundant null check on pointer dev
  soc: rockchip: work around clang warning
  dt-bindings: reset: imx7: Fix the spelling of 'indices'
  soc: imx: Add i.MX8MN SoC driver support
  soc: aspeed: lpc-ctrl: Fix probe error handling
  soc: qcom: geni: Add support for ACPI
  firmware: ti_sci: Fix gcc unused-but-set-variable warning
  firmware: ti_sci: Use the correct style for SPDX License Identifier
  soc: imx8: Use existing of_root directly
  soc: imx8: Fix potential kernel dump in error path
  firmware/psci: psci_checker: Park kthreads before stopping them
  memory: move jedec_ddr.h from include/memory to drivers/memory/
  memory: move jedec_ddr_data.c from lib/ to drivers/memory/
  MAINTAINERS: Remove myself as qcom maintainer
  soc: aspeed: lpc-ctrl: make parameter optional
  soc: qcom: apr: Don't use reg for domain id
  soc: qcom: fix QCOM_AOSS_QMP dependency and build errors
  memory: tegra: Fix -Wunused-const-variable
  firmware: tegra: Early resume BPMP
  soc/tegra: Select pinctrl for Tegra194
  ...
Linus Torvalds 2019-07-19 17:13:56 -07:00
commit 8362fd64f0
70 changed files with 4622 additions and 497 deletions

@ -6,7 +6,7 @@ that are provided by the hardware platform it is running on, including power
and performance functions.
This binding is intended to define the interface the firmware implementing
the SCMI as described in ARM document number ARM DUI 0922B ("ARM System Control
the SCMI as described in ARM document number ARM DEN 0056A ("ARM System Control
and Management Interface Platform Design Document")[0] provide for OSPM in
the device tree.

@ -0,0 +1,11 @@
DPAA2 console support
Required properties:
- compatible
Value type: <string>
Definition: Must be "fsl,dpaa2-console".
- reg
Value type: <prop-encoded-array>
Definition: A standard property. Specifies the region where the MCFBA
(MC firmware base address) register can be found.
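
The new binding file stops short of an example; a minimal sketch of a node,
with a hypothetical MCFBA register location, might look like:

	console@1e00000 {
		compatible = "fsl,dpaa2-console";
		reg = <0x01e00000 0x1000>;	/* hypothetical MCFBA address */
	};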

@ -6,6 +6,8 @@ which then translates it into a corresponding voltage on a rail
Required Properties:
- compatible: Should be one of the following
* qcom,msm8996-rpmpd: RPM Power domain for the msm8996 family of SoC
* qcom,msm8998-rpmpd: RPM Power domain for the msm8998 family of SoC
* qcom,qcs404-rpmpd: RPM Power domain for the qcs404 family of SoC
* qcom,sdm845-rpmhpd: RPMh Power domain for the sdm845 family of SoC
- #power-domain-cells: number of cells in Power domain specifier
must be 1.
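
The hunk does not include the binding's example node; a minimal provider
sketch for the newly added msm8998 compatible (node name hypothetical) would
be:

	rpmpd: power-controller {
		compatible = "qcom,msm8998-rpmpd";
		#power-domain-cells = <1>;
	};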

@ -0,0 +1,18 @@
Bitmain BM1880 SoC Reset Controller
===================================
Please also refer to reset.txt in this directory for common reset
controller binding usage.
Required properties:
- compatible: Should be "bitmain,bm1880-reset"
- reg: Offset and length of reset controller space in SCTRL.
- #reset-cells: Must be 1.
Example:
rst: reset-controller@c00 {
compatible = "bitmain,bm1880-reset";
reg = <0xc00 0x8>;
#reset-cells = <1>;
};
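
With #reset-cells = <1>, a consumer names a reset line by a single index; a
hedged sketch (peripheral node and index are hypothetical):

	uart0: serial@58018000 {
		/* ... */
		resets = <&rst 1>;	/* hypothetical BM1880 reset line index */
	};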

@ -45,6 +45,6 @@ Example:
};
For list of all valid reset indicies see
For list of all valid reset indices see
<dt-bindings/reset/imx7-reset.h> for i.MX7 and
<dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ

@ -2,8 +2,8 @@ Amlogic Canvas
================================
A canvas is a collection of metadata that describes a pixel buffer.
Those metadata include: width, height, phyaddr, wrapping, block mode
and endianness.
Those metadata include: width, height, phyaddr, wrapping and block mode.
Starting with GXBB the endianness can also be described.
Many IPs within Amlogic SoCs rely on canvas indexes to read/write pixel data
rather than use the phy addresses directly. For instance, this is the case for
@ -18,7 +18,11 @@ Video Lookup Table
--------------------------
Required properties:
- compatible: "amlogic,canvas"
- compatible: has to be one of:
- "amlogic,meson8-canvas", "amlogic,canvas" on Meson8
- "amlogic,meson8b-canvas", "amlogic,canvas" on Meson8b
- "amlogic,meson8m2-canvas", "amlogic,canvas" on Meson8m2
- "amlogic,canvas" on GXBB and newer
- reg: Base physical address and size of the canvas registers.
Example:

@ -0,0 +1,81 @@
Qualcomm Always-On Subsystem side channel binding
This binding describes the hardware component responsible for side channel
requests to the always-on subsystem (AOSS), used for certain power management
requests that are not handled by the standard RPMh interface. Each client in the
SoC has its own block of message RAM and IRQ for communication with the AOSS.
The protocol used to communicate in the message RAM is known as the Qualcomm
Messaging Protocol (QMP).
The AOSS side channel exposes control over a set of resources, used to control
a set of debug related clocks and to affect the low power state of resources
related to the secondary subsystems. These resources are exposed as a set of
power-domains.
- compatible:
Usage: required
Value type: <string>
Definition: must be "qcom,sdm845-aoss-qmp"
- reg:
Usage: required
Value type: <prop-encoded-array>
Definition: the base address and size of the message RAM for this
client's communication with the AOSS
- interrupts:
Usage: required
Value type: <prop-encoded-array>
Definition: should specify the AOSS message IRQ for this client
- mboxes:
Usage: required
Value type: <prop-encoded-array>
Definition: reference to the mailbox representing the outgoing doorbell
in APCS for this client, as described in mailbox/mailbox.txt
- #clock-cells:
Usage: optional
Value type: <u32>
Definition: must be 0
The single clock represents the QDSS clock.
- #power-domain-cells:
Usage: optional
Value type: <u32>
Definition: must be 1
The provided power-domains are:
CDSP state (0), LPASS state (1), modem state (2), SLPI
state (3), SPSS state (4) and Venus state (5).
= SUBNODES
The AOSS side channel also provides the controls for three cooling devices;
these are expressed as subnodes of the QMP node. The name of the node is used
to identify the resource and must therefore be "cx", "mx" or "ebi".
- #cooling-cells:
Usage: optional
Value type: <u32>
Definition: must be 2
= EXAMPLE
The following example represents the AOSS side-channel message RAM and the
mechanism exposing the power-domains, as found in SDM845.
aoss_qmp: qmp@c300000 {
compatible = "qcom,sdm845-aoss-qmp";
reg = <0x0c300000 0x100000>;
interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>;
mboxes = <&apss_shared 0>;
#power-domain-cells = <1>;
cx_cdev: cx {
#cooling-cells = <2>;
};
mx_cdev: mx {
#cooling-cells = <2>;
};
};
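
A consumer would then reference one of the six power-domains listed above by
index; a hedged sketch with a hypothetical modem remoteproc node:

	remoteproc@4080000 {
		/* ... */
		power-domains = <&aoss_qmp 2>;	/* modem state, per the list above */
	};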

@ -9,7 +9,7 @@ used for audio/voice services on the QDSP.
Value type: <stringlist>
Definition: must be "qcom,apr-v<VERSION-NUMBER>", example "qcom,apr-v2"
- reg
- qcom,apr-domain
Usage: required
Value type: <u32>
Definition: Destination processor ID.
@ -49,9 +49,9 @@ by the individual bindings for the specific service
The following example represents a QDSP based sound card on a MSM8996 device
which uses apr as communication between Apps and QDSP.
apr@4 {
apr {
compatible = "qcom,apr-v2";
reg = <APR_DOMAIN_ADSP>;
qcom,apr-domain = <APR_DOMAIN_ADSP>;
q6core@3 {
compatible = "qcom,q6core";

@ -2091,7 +2091,6 @@ S: Maintained
ARM/QUALCOMM SUPPORT
M: Andy Gross <agross@kernel.org>
M: David Brown <david.brown@linaro.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/soc/qcom/
@ -2113,7 +2112,7 @@ F: drivers/i2c/busses/i2c-qup.c
F: drivers/i2c/busses/i2c-qcom-geni.c
F: drivers/mfd/ssbi.c
F: drivers/mmc/host/mmci_qcom*
F: drivers/mmc/host/sdhci_msm.c
F: drivers/mmc/host/sdhci-msm.c
F: drivers/pci/controller/dwc/pcie-qcom.c
F: drivers/phy/qualcomm/
F: drivers/power/*/msm*
@ -6527,6 +6526,7 @@ M: Li Yang <leoyang.li@nxp.com>
L: linuxppc-dev@lists.ozlabs.org
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/misc/fsl,dpaa2-console.txt
F: Documentation/devicetree/bindings/soc/fsl/
F: drivers/soc/fsl/
F: include/linux/fsl/
@ -11907,11 +11907,13 @@ F: include/linux/mtd/onenand*.h
OP-TEE DRIVER
M: Jens Wiklander <jens.wiklander@linaro.org>
L: tee-dev@lists.linaro.org
S: Maintained
F: drivers/tee/optee/
OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER
M: Sumit Garg <sumit.garg@linaro.org>
L: tee-dev@lists.linaro.org
S: Maintained
F: drivers/char/hw_random/optee-rng.c
@ -13295,7 +13297,7 @@ M: Niklas Cassel <niklas.cassel@linaro.org>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
F: Documentation/devicetree/bindings/net/qcom,dwmac.txt
F: Documentation/devicetree/bindings/net/qcom,ethqos.txt
QUALCOMM GENERIC INTERFACE I2C DRIVER
M: Alok Chauhan <alokc@codeaurora.org>
@ -15745,6 +15747,7 @@ F: include/media/i2c/tw9910.h
TEE SUBSYSTEM
M: Jens Wiklander <jens.wiklander@linaro.org>
L: tee-dev@lists.linaro.org
S: Maintained
F: include/linux/tee_drv.h
F: include/uapi/linux/tee.h

@ -3442,6 +3442,7 @@ static int omap_hwmod_check_module(struct device *dev,
* @dev: struct device
* @oh: module
* @sysc_fields: sysc register bits
* @clockdomain: clockdomain
* @rev_offs: revision register offset
* @sysc_offs: sysconfig register offset
* @syss_offs: sysstatus register offset
@ -3453,6 +3454,7 @@ static int omap_hwmod_check_module(struct device *dev,
static int omap_hwmod_allocate_module(struct device *dev, struct omap_hwmod *oh,
const struct ti_sysc_module_data *data,
struct sysc_regbits *sysc_fields,
struct clockdomain *clkdm,
s32 rev_offs, s32 sysc_offs,
s32 syss_offs, u32 sysc_flags,
u32 idlemodes)
@ -3460,8 +3462,6 @@ static int omap_hwmod_allocate_module(struct device *dev, struct omap_hwmod *oh,
struct omap_hwmod_class_sysconfig *sysc;
struct omap_hwmod_class *class = NULL;
struct omap_hwmod_ocp_if *oi = NULL;
struct clockdomain *clkdm = NULL;
struct clk *clk = NULL;
void __iomem *regs = NULL;
unsigned long flags;
@ -3508,36 +3508,6 @@ static int omap_hwmod_allocate_module(struct device *dev, struct omap_hwmod *oh,
oi->user = OCP_USER_MPU | OCP_USER_SDMA;
}
if (!oh->_clk) {
struct clk_hw_omap *hwclk;
clk = of_clk_get_by_name(dev->of_node, "fck");
if (!IS_ERR(clk))
clk_prepare(clk);
else
clk = NULL;
/*
* Populate clockdomain based on dts clock. It is needed for
* clkdm_deny_idle() and clkdm_allow_idle() until we have an
* interconnect driver and reset driver capable of blocking
* clockdomain idle during reset, enable and idle.
*/
if (clk) {
hwclk = to_clk_hw_omap(__clk_get_hw(clk));
if (hwclk && hwclk->clkdm_name)
clkdm = clkdm_lookup(hwclk->clkdm_name);
}
/*
* Note that we assume interconnect driver manages the clocks
* and do not need to populate oh->_clk for dynamically
* allocated modules.
*/
clk_unprepare(clk);
clk_put(clk);
}
spin_lock_irqsave(&oh->_lock, flags);
if (regs)
oh->_mpu_rt_va = regs;
@ -3623,7 +3593,7 @@ int omap_hwmod_init_module(struct device *dev,
u32 sysc_flags, idlemodes;
int error;
if (!dev || !data)
if (!dev || !data || !data->name || !cookie)
return -EINVAL;
oh = _lookup(data->name);
@ -3694,7 +3664,8 @@ int omap_hwmod_init_module(struct device *dev,
return error;
return omap_hwmod_allocate_module(dev, oh, data, sysc_fields,
rev_offs, sysc_offs, syss_offs,
cookie->clkdm, rev_offs,
sysc_offs, syss_offs,
sysc_flags, idlemodes);
}

@ -26,6 +26,7 @@
#include <linux/platform_data/wkup_m3.h>
#include <linux/platform_data/asoc-ti-mcbsp.h>
#include "clockdomain.h"
#include "common.h"
#include "common-board-devices.h"
#include "control.h"
@ -460,6 +461,62 @@ static void __init dra7x_evm_mmc_quirk(void)
}
#endif
static struct clockdomain *ti_sysc_find_one_clockdomain(struct clk *clk)
{
struct clockdomain *clkdm = NULL;
struct clk_hw_omap *hwclk;
hwclk = to_clk_hw_omap(__clk_get_hw(clk));
if (hwclk && hwclk->clkdm_name)
clkdm = clkdm_lookup(hwclk->clkdm_name);
return clkdm;
}
/**
* ti_sysc_clkdm_init - find clockdomain based on clock
* @fck: device functional clock
* @ick: device interface clock
* @dev: struct device
*
* Populate clockdomain based on clock. It is needed for
* clkdm_deny_idle() and clkdm_allow_idle() for blocking
* clockdomain idle during reset, enable and idle.
*
* Note that we assume interconnect driver manages the clocks
* and do not need to populate oh->_clk for dynamically
* allocated modules.
*/
static int ti_sysc_clkdm_init(struct device *dev,
struct clk *fck, struct clk *ick,
struct ti_sysc_cookie *cookie)
{
if (fck)
cookie->clkdm = ti_sysc_find_one_clockdomain(fck);
if (cookie->clkdm)
return 0;
if (ick)
cookie->clkdm = ti_sysc_find_one_clockdomain(ick);
if (cookie->clkdm)
return 0;
return -ENODEV;
}
static void ti_sysc_clkdm_deny_idle(struct device *dev,
const struct ti_sysc_cookie *cookie)
{
if (cookie->clkdm)
clkdm_deny_idle(cookie->clkdm);
}
static void ti_sysc_clkdm_allow_idle(struct device *dev,
const struct ti_sysc_cookie *cookie)
{
if (cookie->clkdm)
clkdm_allow_idle(cookie->clkdm);
}
static int ti_sysc_enable_module(struct device *dev,
const struct ti_sysc_cookie *cookie)
{
@ -491,6 +548,9 @@ static struct of_dev_auxdata omap_auxdata_lookup[];
static struct ti_sysc_platform_data ti_sysc_pdata = {
.auxdata = omap_auxdata_lookup,
.init_clockdomain = ti_sysc_clkdm_init,
.clkdm_deny_idle = ti_sysc_clkdm_deny_idle,
.clkdm_allow_idle = ti_sysc_clkdm_allow_idle,
.init_module = omap_hwmod_init_module,
.enable_module = ti_sysc_enable_module,
.idle_module = ti_sysc_idle_module,

@ -399,8 +399,8 @@ static int __init brcmstb_gisb_arb_probe(struct platform_device *pdev)
&gisb_panic_notifier);
}
dev_info(&pdev->dev, "registered mem: %p, irqs: %d, %d\n",
gdev->base, timeout_irq, tea_irq);
dev_info(&pdev->dev, "registered irqs: %d, %d\n",
timeout_irq, tea_irq);
return 0;
}

@ -443,11 +443,31 @@ int dprc_get_obj_region(struct fsl_mc_io *mc_io,
struct fsl_mc_command cmd = { 0 };
struct dprc_cmd_get_obj_region *cmd_params;
struct dprc_rsp_get_obj_region *rsp_params;
u16 major_ver, minor_ver;
int err;
/* prepare command */
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG,
cmd_flags, token);
err = dprc_get_api_version(mc_io, 0,
&major_ver,
&minor_ver);
if (err)
return err;
/**
* MC API version 6.3 introduced a new field to the region
* descriptor: base_address. If the older API is in use then the base
* address is set to zero to indicate it needs to be obtained elsewhere
* (typically the device tree).
*/
if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3))
cmd.header =
mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V2,
cmd_flags, token);
else
cmd.header =
mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG,
cmd_flags, token);
cmd_params = (struct dprc_cmd_get_obj_region *)cmd.params;
cmd_params->obj_id = cpu_to_le32(obj_id);
cmd_params->region_index = region_index;
@ -461,8 +481,12 @@ int dprc_get_obj_region(struct fsl_mc_io *mc_io,
/* retrieve response parameters */
rsp_params = (struct dprc_rsp_get_obj_region *)cmd.params;
region_desc->base_offset = le64_to_cpu(rsp_params->base_addr);
region_desc->base_offset = le64_to_cpu(rsp_params->base_offset);
region_desc->size = le32_to_cpu(rsp_params->size);
if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3))
region_desc->base_address = le64_to_cpu(rsp_params->base_addr);
else
region_desc->base_address = 0;
return 0;
}

@ -487,10 +487,19 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
"dprc_get_obj_region() failed: %d\n", error);
goto error_cleanup_regions;
}
error = translate_mc_addr(mc_dev, mc_region_type,
/*
* Older MC firmware only returned the region offset and no base address.
* If a base address is present in region_desc, use it; otherwise
* revert to the old mechanism.
*/
if (region_desc.base_address)
regions[i].start = region_desc.base_address +
region_desc.base_offset;
else
error = translate_mc_addr(mc_dev, mc_region_type,
region_desc.base_offset,
&regions[i].start);
if (error < 0) {
dev_err(parent_dev,
"Invalid MC offset: %#x (for %s.%d\'s region %d)\n",
@ -504,6 +513,8 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
regions[i].flags = IORESOURCE_IO;
if (region_desc.flags & DPRC_REGION_CACHEABLE)
regions[i].flags |= IORESOURCE_CACHEABLE;
if (region_desc.flags & DPRC_REGION_SHAREABLE)
regions[i].flags |= IORESOURCE_MEM;
}
mc_dev->regions = regions;

@ -79,9 +79,11 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
/* DPRC command versioning */
#define DPRC_CMD_BASE_VERSION 1
#define DPRC_CMD_2ND_VERSION 2
#define DPRC_CMD_ID_OFFSET 4
#define DPRC_CMD(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
#define DPRC_CMD_V2(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_2ND_VERSION)
/* DPRC command IDs */
#define DPRC_CMDID_CLOSE DPRC_CMD(0x800)
@ -100,6 +102,7 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
#define DPRC_CMDID_GET_OBJ_COUNT DPRC_CMD(0x159)
#define DPRC_CMDID_GET_OBJ DPRC_CMD(0x15A)
#define DPRC_CMDID_GET_OBJ_REG DPRC_CMD(0x15E)
#define DPRC_CMDID_GET_OBJ_REG_V2 DPRC_CMD_V2(0x15E)
#define DPRC_CMDID_SET_OBJ_IRQ DPRC_CMD(0x15F)
struct dprc_cmd_open {
@ -199,9 +202,16 @@ struct dprc_rsp_get_obj_region {
/* response word 0 */
__le64 pad;
/* response word 1 */
__le64 base_addr;
__le64 base_offset;
/* response word 2 */
__le32 size;
__le32 pad2;
/* response word 3 */
__le32 flags;
__le32 pad3;
/* response word 4 */
/* base_addr may be zero if older MC firmware is used */
__le64 base_addr;
};
struct dprc_cmd_set_obj_irq {
@ -334,6 +344,7 @@ int dprc_set_obj_irq(struct fsl_mc_io *mc_io,
/* Region flags */
/* Cacheable - Indicates that region should be mapped as cacheable */
#define DPRC_REGION_CACHEABLE 0x00000001
#define DPRC_REGION_SHAREABLE 0x00000002
/**
* enum dprc_region_type - Region type
@ -342,7 +353,8 @@ int dprc_set_obj_irq(struct fsl_mc_io *mc_io,
*/
enum dprc_region_type {
DPRC_REGION_TYPE_MC_PORTAL,
DPRC_REGION_TYPE_QBMAN_PORTAL
DPRC_REGION_TYPE_QBMAN_PORTAL,
DPRC_REGION_TYPE_QBMAN_MEM_BACKED_PORTAL
};
/**
@ -360,6 +372,7 @@ struct dprc_region_desc {
u32 size;
u32 flags;
enum dprc_region_type type;
u64 base_address;
};
int dprc_get_obj_region(struct fsl_mc_io *mc_io,

@ -71,6 +71,9 @@ static const char * const clock_names[SYSC_MAX_CLOCKS] = {
* @name: name if available
* @revision: interconnect target module revision
* @needs_resume: runtime resume needed on resume from suspend
* @clk_enable_quirk: module specific clock enable quirk
* @clk_disable_quirk: module specific clock disable quirk
* @reset_done_quirk: module specific reset done quirk
*/
struct sysc {
struct device *dev;
@ -89,10 +92,14 @@ struct sysc {
struct ti_sysc_cookie cookie;
const char *name;
u32 revision;
bool enabled;
bool needs_resume;
bool child_needs_resume;
unsigned int enabled:1;
unsigned int needs_resume:1;
unsigned int child_needs_resume:1;
unsigned int disable_on_idle:1;
struct delayed_work idle_work;
void (*clk_enable_quirk)(struct sysc *sysc);
void (*clk_disable_quirk)(struct sysc *sysc);
void (*reset_done_quirk)(struct sysc *sysc);
};
static void sysc_parse_dts_quirks(struct sysc *ddata, struct device_node *np,
@ -100,6 +107,20 @@ static void sysc_parse_dts_quirks(struct sysc *ddata, struct device_node *np,
static void sysc_write(struct sysc *ddata, int offset, u32 value)
{
if (ddata->cfg.quirks & SYSC_QUIRK_16BIT) {
writew_relaxed(value & 0xffff, ddata->module_va + offset);
/* Only the i2c revision register has LO and HI halves with a stride of 4 */
if (ddata->offsets[SYSC_REVISION] >= 0 &&
offset == ddata->offsets[SYSC_REVISION]) {
u16 hi = value >> 16;
writew_relaxed(hi, ddata->module_va + offset + 4);
}
return;
}
writel_relaxed(value, ddata->module_va + offset);
}
@ -109,7 +130,14 @@ static u32 sysc_read(struct sysc *ddata, int offset)
u32 val;
val = readw_relaxed(ddata->module_va + offset);
val |= (readw_relaxed(ddata->module_va + offset + 4) << 16);
/* Only the i2c revision register has LO and HI halves with a stride of 4 */
if (ddata->offsets[SYSC_REVISION] >= 0 &&
offset == ddata->offsets[SYSC_REVISION]) {
u16 tmp = readw_relaxed(ddata->module_va + offset + 4);
val |= tmp << 16;
}
return val;
}
@ -132,6 +160,26 @@ static u32 sysc_read_revision(struct sysc *ddata)
return sysc_read(ddata, offset);
}
static u32 sysc_read_sysconfig(struct sysc *ddata)
{
int offset = ddata->offsets[SYSC_SYSCONFIG];
if (offset < 0)
return 0;
return sysc_read(ddata, offset);
}
static u32 sysc_read_sysstatus(struct sysc *ddata)
{
int offset = ddata->offsets[SYSC_SYSSTATUS];
if (offset < 0)
return 0;
return sysc_read(ddata, offset);
}
static int sysc_add_named_clock_from_child(struct sysc *ddata,
const char *name,
const char *optfck_name)
@ -422,6 +470,30 @@ static void sysc_disable_opt_clocks(struct sysc *ddata)
}
}
static void sysc_clkdm_deny_idle(struct sysc *ddata)
{
struct ti_sysc_platform_data *pdata;
if (ddata->legacy_mode)
return;
pdata = dev_get_platdata(ddata->dev);
if (pdata && pdata->clkdm_deny_idle)
pdata->clkdm_deny_idle(ddata->dev, &ddata->cookie);
}
static void sysc_clkdm_allow_idle(struct sysc *ddata)
{
struct ti_sysc_platform_data *pdata;
if (ddata->legacy_mode)
return;
pdata = dev_get_platdata(ddata->dev);
if (pdata && pdata->clkdm_allow_idle)
pdata->clkdm_allow_idle(ddata->dev, &ddata->cookie);
}
/**
* sysc_init_resets - init rstctrl reset line if configured
* @ddata: device driver data
@ -431,7 +503,7 @@ static void sysc_disable_opt_clocks(struct sysc *ddata)
static int sysc_init_resets(struct sysc *ddata)
{
ddata->rsts =
devm_reset_control_array_get_optional_exclusive(ddata->dev);
devm_reset_control_get_optional(ddata->dev, "rstctrl");
if (IS_ERR(ddata->rsts))
return PTR_ERR(ddata->rsts);
@ -694,8 +766,11 @@ static int sysc_ioremap(struct sysc *ddata)
ddata->offsets[SYSC_SYSCONFIG],
ddata->offsets[SYSC_SYSSTATUS]);
if (size < SZ_1K)
size = SZ_1K;
if ((size + sizeof(u32)) > ddata->module_size)
return -EINVAL;
size = ddata->module_size;
}
ddata->module_va = devm_ioremap(ddata->dev,
@ -794,7 +869,9 @@ static void sysc_show_registers(struct sysc *ddata)
}
#define SYSC_IDLE_MASK (SYSC_NR_IDLEMODES - 1)
#define SYSC_CLOCACT_ICK 2
/* Caller needs to manage sysc_clkdm_deny_idle() and sysc_clkdm_allow_idle() */
static int sysc_enable_module(struct device *dev)
{
struct sysc *ddata;
@ -805,23 +882,34 @@ static int sysc_enable_module(struct device *dev)
if (ddata->offsets[SYSC_SYSCONFIG] == -ENODEV)
return 0;
/*
* TODO: Need to prevent clockdomain autoidle?
* See clkdm_deny_idle() in arch/mach-omap2/omap_hwmod.c
*/
regbits = ddata->cap->regbits;
reg = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]);
/* Set CLOCKACTIVITY, we only use it for ick */
if (regbits->clkact_shift >= 0 &&
(ddata->cfg.quirks & SYSC_QUIRK_USE_CLOCKACT ||
ddata->cfg.sysc_val & BIT(regbits->clkact_shift)))
reg |= SYSC_CLOCACT_ICK << regbits->clkact_shift;
/* Set SIDLE mode */
idlemodes = ddata->cfg.sidlemodes;
if (!idlemodes || regbits->sidle_shift < 0)
goto set_midle;
best_mode = fls(ddata->cfg.sidlemodes) - 1;
if (best_mode > SYSC_IDLE_MASK) {
dev_err(dev, "%s: invalid sidlemode\n", __func__);
return -EINVAL;
if (ddata->cfg.quirks & (SYSC_QUIRK_SWSUP_SIDLE |
SYSC_QUIRK_SWSUP_SIDLE_ACT)) {
best_mode = SYSC_IDLE_NO;
} else {
best_mode = fls(ddata->cfg.sidlemodes) - 1;
if (best_mode > SYSC_IDLE_MASK) {
dev_err(dev, "%s: invalid sidlemode\n", __func__);
return -EINVAL;
}
/* Set WAKEUP */
if (regbits->enwkup_shift >= 0 &&
ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
reg |= BIT(regbits->enwkup_shift);
}
reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift);
@ -832,7 +920,7 @@ static int sysc_enable_module(struct device *dev)
/* Set MIDLE mode */
idlemodes = ddata->cfg.midlemodes;
if (!idlemodes || regbits->midle_shift < 0)
return 0;
goto set_autoidle;
best_mode = fls(ddata->cfg.midlemodes) - 1;
if (best_mode > SYSC_IDLE_MASK) {
@ -844,6 +932,14 @@ static int sysc_enable_module(struct device *dev)
reg |= best_mode << regbits->midle_shift;
sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg);
set_autoidle:
/* Autoidle bit must be enabled separately if available */
if (regbits->autoidle_shift >= 0 &&
ddata->cfg.sysc_val & BIT(regbits->autoidle_shift)) {
reg |= 1 << regbits->autoidle_shift;
sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg);
}
return 0;
}
@ -861,6 +957,7 @@ static int sysc_best_idle_mode(u32 idlemodes, u32 *best_mode)
return 0;
}
/* Caller needs to manage sysc_clkdm_deny_idle() and sysc_clkdm_allow_idle() */
static int sysc_disable_module(struct device *dev)
{
struct sysc *ddata;
@ -872,11 +969,6 @@ static int sysc_disable_module(struct device *dev)
if (ddata->offsets[SYSC_SYSCONFIG] == -ENODEV)
return 0;
/*
* TODO: Need to prevent clockdomain autoidle?
* See clkdm_deny_idle() in arch/mach-omap2/omap_hwmod.c
*/
regbits = ddata->cap->regbits;
reg = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]);
@ -901,14 +993,21 @@ static int sysc_disable_module(struct device *dev)
if (!idlemodes || regbits->sidle_shift < 0)
return 0;
ret = sysc_best_idle_mode(idlemodes, &best_mode);
if (ret) {
dev_err(dev, "%s: invalid sidlemode\n", __func__);
return ret;
if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_SIDLE) {
best_mode = SYSC_IDLE_FORCE;
} else {
ret = sysc_best_idle_mode(idlemodes, &best_mode);
if (ret) {
dev_err(dev, "%s: invalid sidlemode\n", __func__);
return ret;
}
}
reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift);
reg |= best_mode << regbits->sidle_shift;
if (regbits->autoidle_shift >= 0 &&
ddata->cfg.sysc_val & BIT(regbits->autoidle_shift))
reg |= 1 << regbits->autoidle_shift;
sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg);
return 0;
@ -932,6 +1031,9 @@ static int __maybe_unused sysc_runtime_suspend_legacy(struct device *dev,
dev_err(dev, "%s: could not idle: %i\n",
__func__, error);
if (ddata->disable_on_idle)
reset_control_assert(ddata->rsts);
return 0;
}
@ -941,6 +1043,9 @@ static int __maybe_unused sysc_runtime_resume_legacy(struct device *dev,
struct ti_sysc_platform_data *pdata;
int error;
if (ddata->disable_on_idle)
reset_control_deassert(ddata->rsts);
pdata = dev_get_platdata(ddata->dev);
if (!pdata)
return 0;
@ -966,14 +1071,16 @@ static int __maybe_unused sysc_runtime_suspend(struct device *dev)
if (!ddata->enabled)
return 0;
sysc_clkdm_deny_idle(ddata);
if (ddata->legacy_mode) {
error = sysc_runtime_suspend_legacy(dev, ddata);
if (error)
return error;
goto err_allow_idle;
} else {
error = sysc_disable_module(dev);
if (error)
return error;
goto err_allow_idle;
}
sysc_disable_main_clocks(ddata);
@ -983,6 +1090,12 @@ static int __maybe_unused sysc_runtime_suspend(struct device *dev)
ddata->enabled = false;
err_allow_idle:
sysc_clkdm_allow_idle(ddata);
if (ddata->disable_on_idle)
reset_control_assert(ddata->rsts);
return error;
}
@ -996,10 +1109,15 @@ static int __maybe_unused sysc_runtime_resume(struct device *dev)
if (ddata->enabled)
return 0;
if (ddata->disable_on_idle)
reset_control_deassert(ddata->rsts);
sysc_clkdm_deny_idle(ddata);
if (sysc_opt_clks_needed(ddata)) {
error = sysc_enable_opt_clocks(ddata);
if (error)
return error;
goto err_allow_idle;
}
error = sysc_enable_main_clocks(ddata);
@ -1018,6 +1136,8 @@ static int __maybe_unused sysc_runtime_resume(struct device *dev)
ddata->enabled = true;
sysc_clkdm_allow_idle(ddata);
return 0;
err_main_clocks:
@ -1025,6 +1145,8 @@ static int __maybe_unused sysc_runtime_resume(struct device *dev)
err_opt_clocks:
if (sysc_opt_clks_needed(ddata))
sysc_disable_opt_clocks(ddata);
err_allow_idle:
sysc_clkdm_allow_idle(ddata);
return error;
}
@ -1106,8 +1228,10 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
0),
SYSC_QUIRK("timer", 0, 0, 0x10, -1, 0x4fff1301, 0xffff00ff,
0),
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
/* Uarts on omap4 and later */
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff,
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
@ -1119,6 +1243,22 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
SYSC_QUIRK_EXT_OPT_CLOCK | SYSC_QUIRK_NO_RESET_ON_INIT |
SYSC_QUIRK_SWSUP_SIDLE),
/* Quirks that need to be set based on detected module */
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff,
SYSC_MODULE_QUIRK_HDQ1W),
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff,
SYSC_MODULE_QUIRK_HDQ1W),
SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x00000036, 0x000000ff,
SYSC_MODULE_QUIRK_I2C),
SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x0000003c, 0x000000ff,
SYSC_MODULE_QUIRK_I2C),
SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x00000040, 0x000000ff,
SYSC_MODULE_QUIRK_I2C),
SYSC_QUIRK("i2c", 0, 0, 0x10, 0x90, 0x5040000a, 0xfffff0f0,
SYSC_MODULE_QUIRK_I2C),
SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0,
SYSC_MODULE_QUIRK_WDT),
#ifdef DEBUG
SYSC_QUIRK("adc", 0, 0, 0x10, -1, 0x47300001, 0xffffffff, 0),
SYSC_QUIRK("atl", 0, 0, -1, -1, 0x0a070100, 0xffffffff, 0),
@ -1132,11 +1272,8 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
SYSC_QUIRK("dwc3", 0, 0, 0x10, -1, 0x500a0200, 0xffffffff, 0),
SYSC_QUIRK("epwmss", 0, 0, 0x4, -1, 0x47400001, 0xffffffff, 0),
SYSC_QUIRK("gpu", 0, 0x1fc00, 0x1fc10, -1, 0, 0, 0),
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff, 0),
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff, 0),
SYSC_QUIRK("hsi", 0, 0, 0x10, 0x14, 0x50043101, 0xffffffff, 0),
SYSC_QUIRK("iss", 0, 0, 0x10, -1, 0x40000101, 0xffffffff, 0),
SYSC_QUIRK("i2c", 0, 0, 0x10, 0x90, 0x5040000a, 0xfffff0f0, 0),
SYSC_QUIRK("lcdc", 0, 0, 0x54, -1, 0x4f201000, 0xffffffff, 0),
SYSC_QUIRK("mcasp", 0, 0, 0x4, -1, 0x44306302, 0xffffffff, 0),
SYSC_QUIRK("mcasp", 0, 0, 0x4, -1, 0x44307b02, 0xffffffff, 0),
@ -1172,7 +1309,6 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -1, 0x50700101, 0xffffffff, 0),
SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
0xffffffff, 0),
SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0, 0),
SYSC_QUIRK("vfpe", 0, 0, 0x104, -1, 0x4d001200, 0xffffffff, 0),
#endif
};
@ -1245,6 +1381,121 @@ static void sysc_init_revision_quirks(struct sysc *ddata)
}
}
/* 1-wire needs module's internal clocks enabled for reset */
static void sysc_clk_enable_quirk_hdq1w(struct sysc *ddata)
{
int offset = 0x0c; /* HDQ_CTRL_STATUS */
u16 val;
val = sysc_read(ddata, offset);
val |= BIT(5);
sysc_write(ddata, offset, val);
}
/* I2C needs extra enable bit toggling for reset */
static void sysc_clk_quirk_i2c(struct sysc *ddata, bool enable)
{
int offset;
u16 val;
/* I2C_CON, omap2/3 is different from omap4 and later */
if ((ddata->revision & 0xffffff00) == 0x001f0000)
offset = 0x24;
else
offset = 0xa4;
/* I2C_EN */
val = sysc_read(ddata, offset);
if (enable)
val |= BIT(15);
else
val &= ~BIT(15);
sysc_write(ddata, offset, val);
}
static void sysc_clk_enable_quirk_i2c(struct sysc *ddata)
{
sysc_clk_quirk_i2c(ddata, true);
}
static void sysc_clk_disable_quirk_i2c(struct sysc *ddata)
{
sysc_clk_quirk_i2c(ddata, false);
}
/* Watchdog timer needs a disable sequence after reset */
static void sysc_reset_done_quirk_wdt(struct sysc *ddata)
{
int wps, spr, error;
u32 val;
wps = 0x34;
spr = 0x48;
sysc_write(ddata, spr, 0xaaaa);
error = readl_poll_timeout(ddata->module_va + wps, val,
!(val & 0x10), 100,
MAX_MODULE_SOFTRESET_WAIT);
if (error)
dev_warn(ddata->dev, "wdt disable spr failed\n");
sysc_write(ddata, wps, 0x5555);
error = readl_poll_timeout(ddata->module_va + wps, val,
!(val & 0x10), 100,
MAX_MODULE_SOFTRESET_WAIT);
if (error)
dev_warn(ddata->dev, "wdt disable wps failed\n");
}
static void sysc_init_module_quirks(struct sysc *ddata)
{
if (ddata->legacy_mode || !ddata->name)
return;
if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_HDQ1W) {
ddata->clk_enable_quirk = sysc_clk_enable_quirk_hdq1w;
return;
}
if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_I2C) {
ddata->clk_enable_quirk = sysc_clk_enable_quirk_i2c;
ddata->clk_disable_quirk = sysc_clk_disable_quirk_i2c;
return;
}
if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_WDT)
ddata->reset_done_quirk = sysc_reset_done_quirk_wdt;
}
static int sysc_clockdomain_init(struct sysc *ddata)
{
struct ti_sysc_platform_data *pdata = dev_get_platdata(ddata->dev);
struct clk *fck = NULL, *ick = NULL;
int error;
if (!pdata || !pdata->init_clockdomain)
return 0;
switch (ddata->nr_clocks) {
case 2:
ick = ddata->clocks[SYSC_ICK];
/* fallthrough */
case 1:
fck = ddata->clocks[SYSC_FCK];
break;
case 0:
return 0;
}
error = pdata->init_clockdomain(ddata->dev, fck, ick, &ddata->cookie);
if (!error || error == -ENODEV)
return 0;
return error;
}
/*
* Note that pdata->init_module() typically does a reset first. After
* pdata->init_module() is done, PM runtime can be used for the interconnect
@ -1255,7 +1506,7 @@ static int sysc_legacy_init(struct sysc *ddata)
struct ti_sysc_platform_data *pdata = dev_get_platdata(ddata->dev);
int error;
if (!ddata->legacy_mode || !pdata || !pdata->init_module)
if (!pdata || !pdata->init_module)
return 0;
error = pdata->init_module(ddata->dev, ddata->mdata, &ddata->cookie);
@ -1280,7 +1531,7 @@ static int sysc_legacy_init(struct sysc *ddata)
*/
static int sysc_rstctrl_reset_deassert(struct sysc *ddata, bool reset)
{
int error;
int error, val;
if (!ddata->rsts)
return 0;
@ -1291,37 +1542,68 @@ static int sysc_rstctrl_reset_deassert(struct sysc *ddata, bool reset)
return error;
}
return reset_control_deassert(ddata->rsts);
error = reset_control_deassert(ddata->rsts);
if (error == -EEXIST)
return 0;
error = readx_poll_timeout(reset_control_status, ddata->rsts, val,
val == 0, 100, MAX_MODULE_SOFTRESET_WAIT);
return error;
}
/*
* Note that the caller must ensure the interconnect target module is enabled
* before calling reset. Otherwise reset will not complete.
*/
static int sysc_reset(struct sysc *ddata)
{
int offset = ddata->offsets[SYSC_SYSCONFIG];
int val;
int sysc_offset, syss_offset, sysc_val, rstval, quirks, error = 0;
u32 sysc_mask, syss_done;
if (ddata->legacy_mode || offset < 0 ||
sysc_offset = ddata->offsets[SYSC_SYSCONFIG];
syss_offset = ddata->offsets[SYSC_SYSSTATUS];
quirks = ddata->cfg.quirks;
if (ddata->legacy_mode || sysc_offset < 0 ||
ddata->cap->regbits->srst_shift < 0 ||
ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT)
return 0;
/*
* Currently only support reset status in sysstatus.
* Warn and return error in all other cases
*/
if (!ddata->cfg.syss_mask) {
dev_err(ddata->dev, "No ti,syss-mask. Reset failed\n");
return -EINVAL;
}
sysc_mask = BIT(ddata->cap->regbits->srst_shift);
val = sysc_read(ddata, offset);
val |= (0x1 << ddata->cap->regbits->srst_shift);
sysc_write(ddata, offset, val);
if (ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED)
syss_done = 0;
else
syss_done = ddata->cfg.syss_mask;
if (ddata->clk_disable_quirk)
ddata->clk_disable_quirk(ddata);
sysc_val = sysc_read_sysconfig(ddata);
sysc_val |= sysc_mask;
sysc_write(ddata, sysc_offset, sysc_val);
if (ddata->clk_enable_quirk)
ddata->clk_enable_quirk(ddata);
/* Poll on reset status */
offset = ddata->offsets[SYSC_SYSSTATUS];
if (syss_offset >= 0) {
error = readx_poll_timeout(sysc_read_sysstatus, ddata, rstval,
(rstval & ddata->cfg.syss_mask) ==
syss_done,
100, MAX_MODULE_SOFTRESET_WAIT);
return readl_poll_timeout(ddata->module_va + offset, val,
(val & ddata->cfg.syss_mask) == 0x0,
100, MAX_MODULE_SOFTRESET_WAIT);
} else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
error = readx_poll_timeout(sysc_read_sysconfig, ddata, rstval,
!(rstval & sysc_mask),
100, MAX_MODULE_SOFTRESET_WAIT);
}
if (ddata->reset_done_quirk)
ddata->reset_done_quirk(ddata);
return error;
}
/*
@ -1334,12 +1616,8 @@ static int sysc_init_module(struct sysc *ddata)
{
int error = 0;
bool manage_clocks = true;
bool reset = true;
if (ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT)
reset = false;
error = sysc_rstctrl_reset_deassert(ddata, reset);
error = sysc_rstctrl_reset_deassert(ddata, false);
if (error)
return error;
@ -1347,7 +1625,13 @@ static int sysc_init_module(struct sysc *ddata)
(SYSC_QUIRK_NO_IDLE | SYSC_QUIRK_NO_IDLE_ON_INIT))
manage_clocks = false;
error = sysc_clockdomain_init(ddata);
if (error)
return error;
if (manage_clocks) {
sysc_clkdm_deny_idle(ddata);
error = sysc_enable_opt_clocks(ddata);
if (error)
return error;
@ -1357,23 +1641,43 @@ static int sysc_init_module(struct sysc *ddata)
goto err_opt_clocks;
}
if (!(ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT)) {
error = sysc_rstctrl_reset_deassert(ddata, true);
if (error)
goto err_main_clocks;
}
ddata->revision = sysc_read_revision(ddata);
sysc_init_revision_quirks(ddata);
sysc_init_module_quirks(ddata);
error = sysc_legacy_init(ddata);
if (error)
goto err_main_clocks;
if (ddata->legacy_mode) {
error = sysc_legacy_init(ddata);
if (error)
goto err_main_clocks;
}
if (!ddata->legacy_mode && manage_clocks) {
error = sysc_enable_module(ddata->dev);
if (error)
goto err_main_clocks;
}
error = sysc_reset(ddata);
if (error)
dev_err(ddata->dev, "Reset failed with %d\n", error);
if (!ddata->legacy_mode && manage_clocks)
sysc_disable_module(ddata->dev);
err_main_clocks:
if (manage_clocks)
sysc_disable_main_clocks(ddata);
err_opt_clocks:
if (manage_clocks)
if (manage_clocks) {
sysc_disable_opt_clocks(ddata);
sysc_clkdm_allow_idle(ddata);
}
return error;
}
@ -1663,9 +1967,6 @@ static struct dev_pm_domain sysc_child_pm_domain = {
*/
static void sysc_legacy_idle_quirk(struct sysc *ddata, struct device *child)
{
if (!ddata->legacy_mode)
return;
if (ddata->cfg.quirks & SYSC_QUIRK_LEGACY_IDLE)
dev_pm_domain_set(child, &sysc_child_pm_domain);
}
@ -2005,6 +2306,7 @@ static const struct sysc_capabilities sysc_dra7_mcan = {
.type = TI_SYSC_DRA7_MCAN,
.sysc_mask = SYSC_DRA7_MCAN_ENAWAKEUP | SYSC_OMAP4_SOFTRESET,
.regbits = &sysc_regbits_dra7_mcan,
.mod_quirks = SYSS_QUIRK_RESETDONE_INVERTED,
};
static int sysc_init_pdata(struct sysc *ddata)
@ -2012,20 +2314,22 @@ static int sysc_init_pdata(struct sysc *ddata)
struct ti_sysc_platform_data *pdata = dev_get_platdata(ddata->dev);
struct ti_sysc_module_data *mdata;
if (!pdata || !ddata->legacy_mode)
if (!pdata)
return 0;
mdata = devm_kzalloc(ddata->dev, sizeof(*mdata), GFP_KERNEL);
if (!mdata)
return -ENOMEM;
mdata->name = ddata->legacy_mode;
mdata->module_pa = ddata->module_pa;
mdata->module_size = ddata->module_size;
mdata->offsets = ddata->offsets;
mdata->nr_offsets = SYSC_MAX_REGS;
mdata->cap = ddata->cap;
mdata->cfg = &ddata->cfg;
if (ddata->legacy_mode) {
mdata->name = ddata->legacy_mode;
mdata->module_pa = ddata->module_pa;
mdata->module_size = ddata->module_size;
mdata->offsets = ddata->offsets;
mdata->nr_offsets = SYSC_MAX_REGS;
mdata->cap = ddata->cap;
mdata->cfg = &ddata->cfg;
}
ddata->mdata = mdata;
@ -2145,7 +2449,7 @@ static int sysc_probe(struct platform_device *pdev)
}
if (!of_get_available_child_count(ddata->dev->of_node))
reset_control_assert(ddata->rsts);
ddata->disable_on_idle = true;
return 0;

@ -185,6 +185,8 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
if (rate_discrete)
clk->list.num_rates = tot_rate_cnt;
clk->rate_discrete = rate_discrete;
err:
scmi_xfer_put(handle, t);
return ret;

@ -30,10 +30,12 @@ struct scmi_msg_resp_sensor_description {
__le32 id;
__le32 attributes_low;
#define SUPPORTS_ASYNC_READ(x) ((x) & BIT(31))
#define NUM_TRIP_POINTS(x) (((x) >> 4) & 0xff)
#define NUM_TRIP_POINTS(x) ((x) & 0xff)
__le32 attributes_high;
#define SENSOR_TYPE(x) ((x) & 0xff)
#define SENSOR_SCALE(x) (((x) >> 11) & 0x3f)
#define SENSOR_SCALE(x) (((x) >> 11) & 0x1f)
#define SENSOR_SCALE_SIGN BIT(4)
#define SENSOR_SCALE_EXTEND GENMASK(7, 5)
#define SENSOR_UPDATE_SCALE(x) (((x) >> 22) & 0x1f)
#define SENSOR_UPDATE_BASE(x) (((x) >> 27) & 0x1f)
u8 name[SCMI_MAX_STR_SIZE];
@ -140,6 +142,10 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
s = &si->sensors[desc_index + cnt];
s->id = le32_to_cpu(buf->desc[cnt].id);
s->type = SENSOR_TYPE(attrh);
s->scale = SENSOR_SCALE(attrh);
/* Sign extend to a full s8 */
if (s->scale & SENSOR_SCALE_SIGN)
s->scale |= SENSOR_SCALE_EXTEND;
strlcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE);
}
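
The scale here is a 5-bit two's-complement exponent, so values with BIT(4)
set must be sign-extended into the full s8. A standalone sketch of the same
arithmetic (userspace C; the raw attribute value is hypothetical):

	#include <stdio.h>
	#include <stdint.h>

	#define SENSOR_SCALE(x)		(((x) >> 11) & 0x1f)	/* 5-bit field */
	#define SENSOR_SCALE_SIGN	0x10			/* BIT(4) */
	#define SENSOR_SCALE_EXTEND	0xe0			/* GENMASK(7, 5) */

	int main(void)
	{
		uint32_t attrs_high = 0x1f << 11;	/* hypothetical: raw scale 0b11111 */
		int8_t scale = SENSOR_SCALE(attrs_high);

		/* Sign extend the 5-bit two's-complement value to a full s8 */
		if (scale & SENSOR_SCALE_SIGN)
			scale |= SENSOR_SCALE_EXTEND;

		printf("scale = %d\n", scale);	/* prints -1 */
		return 0;
	}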

@ -359,16 +359,16 @@ static int suspend_test_thread(void *arg)
for (;;) {
/* Needs to be set first to avoid missing a wakeup. */
set_current_state(TASK_INTERRUPTIBLE);
if (kthread_should_stop()) {
__set_current_state(TASK_RUNNING);
if (kthread_should_park())
break;
}
schedule();
}
pr_info("CPU %d suspend test results: success %d, shallow states %d, errors %d\n",
cpu, nb_suspend, nb_shallow_sleep, nb_err);
kthread_parkme();
return nb_err;
}
@ -433,8 +433,10 @@ static int suspend_tests(void)
/* Stop and destroy all threads, get return status. */
for (i = 0; i < nb_threads; ++i)
for (i = 0; i < nb_threads; ++i) {
err += kthread_park(threads[i]);
err += kthread_stop(threads[i]);
}
out:
cpuidle_resume_and_unlock();
kfree(threads);

@ -803,7 +803,9 @@ static int __maybe_unused tegra_bpmp_resume(struct device *dev)
return 0;
}
static SIMPLE_DEV_PM_OPS(tegra_bpmp_pm_ops, NULL, tegra_bpmp_resume);
static const struct dev_pm_ops tegra_bpmp_pm_ops = {
.resume_early = tegra_bpmp_resume,
};
#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)

@ -466,9 +466,9 @@ static int ti_sci_cmd_get_revision(struct ti_sci_info *info)
struct ti_sci_xfer *xfer;
int ret;
/* No need to setup flags since it is expected to respond */
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_VERSION,
0x0, sizeof(struct ti_sci_msg_hdr),
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(struct ti_sci_msg_hdr),
sizeof(*rev_info));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
@ -596,9 +596,9 @@ static int ti_sci_get_device_state(const struct ti_sci_handle *handle,
info = handle_to_ti_sci_info(handle);
dev = info->dev;
/* Response is expected, so need of any flags */
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_GET_DEVICE_STATE,
0, sizeof(*req), sizeof(*resp));
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
@ -2057,6 +2057,823 @@ static int ti_sci_cmd_free_event_map(const struct ti_sci_handle *handle,
ia_id, vint, global_event, vint_status_bit, 0);
}
/**
* ti_sci_cmd_ring_config() - configure RA ring
* @handle: Pointer to TI SCI handle.
* @valid_params: Bitfield defining validity of ring configuration
* parameters
* @nav_id: Device ID of Navigator Subsystem from which the ring is
* allocated
* @index: Ring index
* @addr_lo: The ring base address lo 32 bits
* @addr_hi: The ring base address hi 32 bits
* @count: Number of ring elements
* @mode: The mode of the ring
* @size: The ring element size.
* @order_id: Specifies the ring's bus order ID
*
* Return: 0 if all went well, else returns appropriate error value.
*
* See @ti_sci_msg_rm_ring_cfg_req for more info.
*/
static int ti_sci_cmd_ring_config(const struct ti_sci_handle *handle,
u32 valid_params, u16 nav_id, u16 index,
u32 addr_lo, u32 addr_hi, u32 count,
u8 mode, u8 size, u8 order_id)
{
struct ti_sci_msg_rm_ring_cfg_req *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR_OR_NULL(handle))
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_RING_CFG,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "RM_RA:Message config failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_rm_ring_cfg_req *)xfer->xfer_buf;
req->valid_params = valid_params;
req->nav_id = nav_id;
req->index = index;
req->addr_lo = addr_lo;
req->addr_hi = addr_hi;
req->count = count;
req->mode = mode;
req->size = size;
req->order_id = order_id;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "RM_RA:Mbox config send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
dev_dbg(dev, "RM_RA:config ring %u ret:%d\n", index, ret);
return ret;
}
/**
* ti_sci_cmd_ring_get_config() - get RA ring configuration
* @handle: Pointer to TI SCI handle.
* @nav_id: Device ID of Navigator Subsystem from which the ring is
* allocated
* @index: Ring index
* @addr_lo: Returns ring's base address lo 32 bits
* @addr_hi: Returns ring's base address hi 32 bits
* @count: Returns number of ring elements
* @mode: Returns mode of the ring
* @size: Returns ring element size
* @order_id: Returns ring's bus order ID
*
* Return: 0 if all went well, else returns appropriate error value.
*
* See @ti_sci_msg_rm_ring_get_cfg_req for more info.
*/
static int ti_sci_cmd_ring_get_config(const struct ti_sci_handle *handle,
u32 nav_id, u32 index, u8 *mode,
u32 *addr_lo, u32 *addr_hi,
u32 *count, u8 *size, u8 *order_id)
{
struct ti_sci_msg_rm_ring_get_cfg_resp *resp;
struct ti_sci_msg_rm_ring_get_cfg_req *req;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR_OR_NULL(handle))
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_RING_GET_CFG,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev,
"RM_RA:Message get config failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_rm_ring_get_cfg_req *)xfer->xfer_buf;
req->nav_id = nav_id;
req->index = index;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "RM_RA:Mbox get config send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_rm_ring_get_cfg_resp *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
ret = -ENODEV;
} else {
if (mode)
*mode = resp->mode;
if (addr_lo)
*addr_lo = resp->addr_lo;
if (addr_hi)
*addr_hi = resp->addr_hi;
if (count)
*count = resp->count;
if (size)
*size = resp->size;
if (order_id)
*order_id = resp->order_id;
};
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
dev_dbg(dev, "RM_RA:get config ring %u ret:%d\n", index, ret);
return ret;
}
/**
* ti_sci_cmd_rm_psil_pair() - Pair PSI-L source to destination thread
* @handle: Pointer to TI SCI handle.
* @nav_id: Device ID of Navigator Subsystem which should be used for
* pairing
* @src_thread: Source PSI-L thread ID
* @dst_thread: Destination PSI-L thread ID
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_rm_psil_pair(const struct ti_sci_handle *handle,
u32 nav_id, u32 src_thread, u32 dst_thread)
{
struct ti_sci_msg_psil_pair *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_PSIL_PAIR,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "RM_PSIL:Message reconfig failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_psil_pair *)xfer->xfer_buf;
req->nav_id = nav_id;
req->src_thread = src_thread;
req->dst_thread = dst_thread;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "RM_PSIL:Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_rm_psil_unpair() - Unpair PSI-L source from destination thread
* @handle: Pointer to TI SCI handle.
* @nav_id: Device ID of Navigator Subsystem which should be used for
* unpairing
* @src_thread: Source PSI-L thread ID
* @dst_thread: Destination PSI-L thread ID
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_rm_psil_unpair(const struct ti_sci_handle *handle,
u32 nav_id, u32 src_thread, u32 dst_thread)
{
struct ti_sci_msg_psil_unpair *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_RM_PSIL_UNPAIR,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "RM_PSIL:Message reconfig failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_psil_unpair *)xfer->xfer_buf;
req->nav_id = nav_id;
req->src_thread = src_thread;
req->dst_thread = dst_thread;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "RM_PSIL:Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_rm_udmap_tx_ch_cfg() - Configure a UDMAP TX channel
* @handle: Pointer to TI SCI handle.
* @params: Pointer to ti_sci_msg_rm_udmap_tx_ch_cfg TX channel config
* structure
*
* Return: 0 if all went well, else returns appropriate error value.
*
* See @ti_sci_msg_rm_udmap_tx_ch_cfg and @ti_sci_msg_rm_udmap_tx_ch_cfg_req for
* more info.
*/
static int ti_sci_cmd_rm_udmap_tx_ch_cfg(const struct ti_sci_handle *handle,
const struct ti_sci_msg_rm_udmap_tx_ch_cfg *params)
{
struct ti_sci_msg_rm_udmap_tx_ch_cfg_req *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR_OR_NULL(handle))
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TISCI_MSG_RM_UDMAP_TX_CH_CFG,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message TX_CH_CFG alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_rm_udmap_tx_ch_cfg_req *)xfer->xfer_buf;
req->valid_params = params->valid_params;
req->nav_id = params->nav_id;
req->index = params->index;
req->tx_pause_on_err = params->tx_pause_on_err;
req->tx_filt_einfo = params->tx_filt_einfo;
req->tx_filt_pswords = params->tx_filt_pswords;
req->tx_atype = params->tx_atype;
req->tx_chan_type = params->tx_chan_type;
req->tx_supr_tdpkt = params->tx_supr_tdpkt;
req->tx_fetch_size = params->tx_fetch_size;
req->tx_credit_count = params->tx_credit_count;
req->txcq_qnum = params->txcq_qnum;
req->tx_priority = params->tx_priority;
req->tx_qos = params->tx_qos;
req->tx_orderid = params->tx_orderid;
req->fdepth = params->fdepth;
req->tx_sched_priority = params->tx_sched_priority;
req->tx_burst_size = params->tx_burst_size;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send TX_CH_CFG fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
dev_dbg(dev, "TX_CH_CFG: chn %u ret:%u\n", params->index, ret);
return ret;
}
/**
* ti_sci_cmd_rm_udmap_rx_ch_cfg() - Configure a UDMAP RX channel
* @handle: Pointer to TI SCI handle.
* @params: Pointer to ti_sci_msg_rm_udmap_rx_ch_cfg RX channel config
* structure
*
* Return: 0 if all went well, else returns appropriate error value.
*
* See @ti_sci_msg_rm_udmap_rx_ch_cfg and @ti_sci_msg_rm_udmap_rx_ch_cfg_req for
* more info.
*/
static int ti_sci_cmd_rm_udmap_rx_ch_cfg(const struct ti_sci_handle *handle,
const struct ti_sci_msg_rm_udmap_rx_ch_cfg *params)
{
struct ti_sci_msg_rm_udmap_rx_ch_cfg_req *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR_OR_NULL(handle))
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TISCI_MSG_RM_UDMAP_RX_CH_CFG,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message RX_CH_CFG alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_rm_udmap_rx_ch_cfg_req *)xfer->xfer_buf;
req->valid_params = params->valid_params;
req->nav_id = params->nav_id;
req->index = params->index;
req->rx_fetch_size = params->rx_fetch_size;
req->rxcq_qnum = params->rxcq_qnum;
req->rx_priority = params->rx_priority;
req->rx_qos = params->rx_qos;
req->rx_orderid = params->rx_orderid;
req->rx_sched_priority = params->rx_sched_priority;
req->flowid_start = params->flowid_start;
req->flowid_cnt = params->flowid_cnt;
req->rx_pause_on_err = params->rx_pause_on_err;
req->rx_atype = params->rx_atype;
req->rx_chan_type = params->rx_chan_type;
req->rx_ignore_short = params->rx_ignore_short;
req->rx_ignore_long = params->rx_ignore_long;
req->rx_burst_size = params->rx_burst_size;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send RX_CH_CFG fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
dev_dbg(dev, "RX_CH_CFG: chn %u ret:%d\n", params->index, ret);
return ret;
}
/**
* ti_sci_cmd_rm_udmap_rx_flow_cfg() - Configure UDMAP RX FLOW
* @handle: Pointer to TI SCI handle.
* @params: Pointer to ti_sci_msg_rm_udmap_flow_cfg RX FLOW config
* structure
*
* Return: 0 if all went well, else returns appropriate error value.
*
* See @ti_sci_msg_rm_udmap_flow_cfg and @ti_sci_msg_rm_udmap_flow_cfg_req for
* more info.
*/
static int ti_sci_cmd_rm_udmap_rx_flow_cfg(const struct ti_sci_handle *handle,
const struct ti_sci_msg_rm_udmap_flow_cfg *params)
{
struct ti_sci_msg_rm_udmap_flow_cfg_req *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR_OR_NULL(handle))
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TISCI_MSG_RM_UDMAP_FLOW_CFG,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "RX_FL_CFG: Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_rm_udmap_flow_cfg_req *)xfer->xfer_buf;
req->valid_params = params->valid_params;
req->nav_id = params->nav_id;
req->flow_index = params->flow_index;
req->rx_einfo_present = params->rx_einfo_present;
req->rx_psinfo_present = params->rx_psinfo_present;
req->rx_error_handling = params->rx_error_handling;
req->rx_desc_type = params->rx_desc_type;
req->rx_sop_offset = params->rx_sop_offset;
req->rx_dest_qnum = params->rx_dest_qnum;
req->rx_src_tag_hi = params->rx_src_tag_hi;
req->rx_src_tag_lo = params->rx_src_tag_lo;
req->rx_dest_tag_hi = params->rx_dest_tag_hi;
req->rx_dest_tag_lo = params->rx_dest_tag_lo;
req->rx_src_tag_hi_sel = params->rx_src_tag_hi_sel;
req->rx_src_tag_lo_sel = params->rx_src_tag_lo_sel;
req->rx_dest_tag_hi_sel = params->rx_dest_tag_hi_sel;
req->rx_dest_tag_lo_sel = params->rx_dest_tag_lo_sel;
req->rx_fdq0_sz0_qnum = params->rx_fdq0_sz0_qnum;
req->rx_fdq1_qnum = params->rx_fdq1_qnum;
req->rx_fdq2_qnum = params->rx_fdq2_qnum;
req->rx_fdq3_qnum = params->rx_fdq3_qnum;
req->rx_ps_location = params->rx_ps_location;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "RX_FL_CFG: Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
dev_dbg(info->dev, "RX_FL_CFG: %u ret:%d\n", params->flow_index, ret);
return ret;
}
/**
* ti_sci_cmd_proc_request() - Command to request a physical processor control
* @handle: Pointer to TI SCI handle
* @proc_id: Processor ID this request is for
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_proc_request(const struct ti_sci_handle *handle,
u8 proc_id)
{
struct ti_sci_msg_req_proc_request *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (!handle)
return -EINVAL;
if (IS_ERR(handle))
return PTR_ERR(handle);
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PROC_REQUEST,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_proc_request *)xfer->xfer_buf;
req->processor_id = proc_id;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_proc_release() - Command to release a physical processor control
* @handle: Pointer to TI SCI handle
* @proc_id: Processor ID this request is for
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_proc_release(const struct ti_sci_handle *handle,
u8 proc_id)
{
struct ti_sci_msg_req_proc_release *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (!handle)
return -EINVAL;
if (IS_ERR(handle))
return PTR_ERR(handle);
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PROC_RELEASE,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_proc_release *)xfer->xfer_buf;
req->processor_id = proc_id;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_proc_handover() - Command to handover a physical processor
* control to a host in the processor's access
* control list.
* @handle: Pointer to TI SCI handle
* @proc_id: Processor ID this request is for
* @host_id: Host ID to get the control of the processor
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_proc_handover(const struct ti_sci_handle *handle,
u8 proc_id, u8 host_id)
{
struct ti_sci_msg_req_proc_handover *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (!handle)
return -EINVAL;
if (IS_ERR(handle))
return PTR_ERR(handle);
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PROC_HANDOVER,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_proc_handover *)xfer->xfer_buf;
req->processor_id = proc_id;
req->host_id = host_id;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_proc_set_config() - Command to set the processor boot
* configuration flags
* @handle: Pointer to TI SCI handle
* @proc_id: Processor ID this request is for
* @config_flags_set: Configuration flags to be set
* @config_flags_clear: Configuration flags to be cleared.
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_proc_set_config(const struct ti_sci_handle *handle,
u8 proc_id, u64 bootvector,
u32 config_flags_set,
u32 config_flags_clear)
{
struct ti_sci_msg_req_set_config *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (!handle)
return -EINVAL;
if (IS_ERR(handle))
return PTR_ERR(handle);
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_SET_CONFIG,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_set_config *)xfer->xfer_buf;
req->processor_id = proc_id;
req->bootvector_low = bootvector & TI_SCI_ADDR_LOW_MASK;
req->bootvector_high = (bootvector & TI_SCI_ADDR_HIGH_MASK) >>
TI_SCI_ADDR_HIGH_SHIFT;
req->config_flags_set = config_flags_set;
req->config_flags_clear = config_flags_clear;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_proc_set_control() - Command to set the processor boot
* control flags
* @handle: Pointer to TI SCI handle
* @proc_id: Processor ID this request is for
* @control_flags_set: Control flags to be set
* @control_flags_clear: Control flags to be cleared
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_proc_set_control(const struct ti_sci_handle *handle,
u8 proc_id, u32 control_flags_set,
u32 control_flags_clear)
{
struct ti_sci_msg_req_set_ctrl *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (!handle)
return -EINVAL;
if (IS_ERR(handle))
return PTR_ERR(handle);
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_SET_CTRL,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_set_ctrl *)xfer->xfer_buf;
req->processor_id = proc_id;
req->control_flags_set = control_flags_set;
req->control_flags_clear = control_flags_clear;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_proc_get_status() - Command to get the processor boot status
* @handle: Pointer to TI SCI handle
* @proc_id: Processor ID this request is for
* @bv: Pointer to store the processor's boot vector
* @cfg_flags: Pointer to store the current configuration flags
* @ctrl_flags: Pointer to store the current control flags
* @sts_flags: Pointer to store the current status flags
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_proc_get_status(const struct ti_sci_handle *handle,
u8 proc_id, u64 *bv, u32 *cfg_flags,
u32 *ctrl_flags, u32 *sts_flags)
{
struct ti_sci_msg_resp_get_status *resp;
struct ti_sci_msg_req_get_status *req;
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (!handle)
return -EINVAL;
if (IS_ERR(handle))
return PTR_ERR(handle);
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_GET_STATUS,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_get_status *)xfer->xfer_buf;
req->processor_id = proc_id;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_resp_get_status *)xfer->tx_message.buf;
if (!ti_sci_is_response_ack(resp)) {
ret = -ENODEV;
} else {
*bv = (resp->bootvector_low & TI_SCI_ADDR_LOW_MASK) |
(((u64)resp->bootvector_high << TI_SCI_ADDR_HIGH_SHIFT) &
TI_SCI_ADDR_HIGH_MASK);
*cfg_flags = resp->config_flags;
*ctrl_flags = resp->control_flags;
*sts_flags = resp->status_flags;
}
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
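/*
 * Illustrative sketch (editor's addition, not part of the driver): the
 * 64-bit boot vector is split for ti_sci_cmd_proc_set_config() and
 * recombined in ti_sci_cmd_proc_get_status() with the TI_SCI_ADDR_*
 * masks from ti_sci.h, along these lines:
 */
static inline void example_split_bootvector(u64 bv, u32 *low, u32 *high)
{
        *low = bv & TI_SCI_ADDR_LOW_MASK;                /* bits 31:0 */
        *high = (bv & TI_SCI_ADDR_HIGH_MASK) >> TI_SCI_ADDR_HIGH_SHIFT;
}

static inline u64 example_join_bootvector(u32 low, u32 high)
{
        return (low & TI_SCI_ADDR_LOW_MASK) |
               (((u64)high << TI_SCI_ADDR_HIGH_SHIFT) & TI_SCI_ADDR_HIGH_MASK);
}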
/*
* ti_sci_setup_ops() - Setup the operations structures
* @info: pointer to TISCI pointer
@@ -2069,6 +2886,10 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
struct ti_sci_clk_ops *cops = &ops->clk_ops;
struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops;
struct ti_sci_rm_irq_ops *iops = &ops->rm_irq_ops;
struct ti_sci_rm_ringacc_ops *rops = &ops->rm_ring_ops;
struct ti_sci_rm_psil_ops *psilops = &ops->rm_psil_ops;
struct ti_sci_rm_udmap_ops *udmap_ops = &ops->rm_udmap_ops;
struct ti_sci_proc_ops *pops = &ops->proc_ops;
core_ops->reboot_device = ti_sci_cmd_core_reboot;
@@ -2108,6 +2929,23 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
iops->set_event_map = ti_sci_cmd_set_event_map;
iops->free_irq = ti_sci_cmd_free_irq;
iops->free_event_map = ti_sci_cmd_free_event_map;
rops->config = ti_sci_cmd_ring_config;
rops->get_config = ti_sci_cmd_ring_get_config;
psilops->pair = ti_sci_cmd_rm_psil_pair;
psilops->unpair = ti_sci_cmd_rm_psil_unpair;
udmap_ops->tx_ch_cfg = ti_sci_cmd_rm_udmap_tx_ch_cfg;
udmap_ops->rx_ch_cfg = ti_sci_cmd_rm_udmap_rx_ch_cfg;
udmap_ops->rx_flow_cfg = ti_sci_cmd_rm_udmap_rx_flow_cfg;
pops->request = ti_sci_cmd_proc_request;
pops->release = ti_sci_cmd_proc_release;
pops->handover = ti_sci_cmd_proc_handover;
pops->set_config = ti_sci_cmd_proc_set_config;
pops->set_control = ti_sci_cmd_proc_set_control;
pops->get_status = ti_sci_cmd_proc_get_status;
}
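/*
 * Usage sketch (editor's addition; the call site is an assumption, not
 * from this patch): a client reaches the newly wired processor-control
 * operations through a TI SCI handle, e.g. one obtained via
 * ti_sci_get_by_phandle(), roughly like this:
 */
static int example_boot_processor(const struct ti_sci_handle *handle,
                                  u8 proc_id, u64 bootvector)
{
        const struct ti_sci_proc_ops *pops = &handle->ops.proc_ops;
        int ret;

        ret = pops->request(handle, proc_id);
        if (ret)
                return ret;

        /* Program the boot vector; set and clear no config flags here. */
        ret = pops->set_config(handle, proc_id, bootvector, 0, 0);
        if (ret)
                pops->release(handle, proc_id);

        return ret;
}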
/**
@@ -2395,6 +3233,7 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
struct device *dev, u32 dev_id, char *of_prop)
{
struct ti_sci_resource *res;
bool valid_set = false;
u32 resource_subtype;
int i, ret;
@@ -2426,15 +3265,18 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
&res->desc[i].start,
&res->desc[i].num);
if (ret) {
dev_err(dev, "dev = %d subtype %d not allocated for this host\n",
dev_dbg(dev, "dev = %d subtype %d not allocated for this host\n",
dev_id, resource_subtype);
return ERR_PTR(ret);
res->desc[i].start = 0;
res->desc[i].num = 0;
continue;
}
dev_dbg(dev, "dev = %d, subtype = %d, start = %d, num = %d\n",
dev_id, resource_subtype, res->desc[i].start,
res->desc[i].num);
valid_set = true;
res->desc[i].res_map =
devm_kzalloc(dev, BITS_TO_LONGS(res->desc[i].num) *
sizeof(*res->desc[i].res_map), GFP_KERNEL);
@@ -2443,7 +3285,10 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
}
raw_spin_lock_init(&res->lock);
return res;
if (valid_set)
return res;
return ERR_PTR(-EINVAL);
}
static int tisci_reboot_handler(struct notifier_block *nb, unsigned long mode,

drivers/firmware/ti_sci.h

@@ -42,6 +42,43 @@
#define TI_SCI_MSG_SET_IRQ 0x1000
#define TI_SCI_MSG_FREE_IRQ 0x1001
/* NAVSS resource management */
/* Ringacc requests */
#define TI_SCI_MSG_RM_RING_ALLOCATE 0x1100
#define TI_SCI_MSG_RM_RING_FREE 0x1101
#define TI_SCI_MSG_RM_RING_RECONFIG 0x1102
#define TI_SCI_MSG_RM_RING_RESET 0x1103
#define TI_SCI_MSG_RM_RING_CFG 0x1110
#define TI_SCI_MSG_RM_RING_GET_CFG 0x1111
/* PSI-L requests */
#define TI_SCI_MSG_RM_PSIL_PAIR 0x1280
#define TI_SCI_MSG_RM_PSIL_UNPAIR 0x1281
#define TI_SCI_MSG_RM_UDMAP_TX_ALLOC 0x1200
#define TI_SCI_MSG_RM_UDMAP_TX_FREE 0x1201
#define TI_SCI_MSG_RM_UDMAP_RX_ALLOC 0x1210
#define TI_SCI_MSG_RM_UDMAP_RX_FREE 0x1211
#define TI_SCI_MSG_RM_UDMAP_FLOW_CFG 0x1220
#define TI_SCI_MSG_RM_UDMAP_OPT_FLOW_CFG 0x1221
#define TISCI_MSG_RM_UDMAP_TX_CH_CFG 0x1205
#define TISCI_MSG_RM_UDMAP_TX_CH_GET_CFG 0x1206
#define TISCI_MSG_RM_UDMAP_RX_CH_CFG 0x1215
#define TISCI_MSG_RM_UDMAP_RX_CH_GET_CFG 0x1216
#define TISCI_MSG_RM_UDMAP_FLOW_CFG 0x1230
#define TISCI_MSG_RM_UDMAP_FLOW_SIZE_THRESH_CFG 0x1231
#define TISCI_MSG_RM_UDMAP_FLOW_GET_CFG 0x1232
#define TISCI_MSG_RM_UDMAP_FLOW_SIZE_THRESH_GET_CFG 0x1233
/* Processor Control requests */
#define TI_SCI_MSG_PROC_REQUEST 0xc000
#define TI_SCI_MSG_PROC_RELEASE 0xc001
#define TI_SCI_MSG_PROC_HANDOVER 0xc005
#define TI_SCI_MSG_SET_CONFIG 0xc100
#define TI_SCI_MSG_SET_CTRL 0xc101
#define TI_SCI_MSG_GET_STATUS 0xc400
/**
* struct ti_sci_msg_hdr - Generic Message Header for All messages and responses
* @type: Type of messages: One of TI_SCI_MSG* values
@@ -604,4 +641,777 @@ struct ti_sci_msg_req_manage_irq {
u8 secondary_host;
} __packed;
/**
* struct ti_sci_msg_rm_ring_cfg_req - Configure a Navigator Subsystem ring
*
* Configures the non-real-time registers of a Navigator Subsystem ring.
* @hdr: Generic Header
* @valid_params: Bitfield defining validity of ring configuration parameters.
* The ring configuration fields are not valid, and will not be used for
* ring configuration, if their corresponding valid bit is zero.
* Valid bit usage:
* 0 - Valid bit for @tisci_msg_rm_ring_cfg_req addr_lo
* 1 - Valid bit for @tisci_msg_rm_ring_cfg_req addr_hi
* 2 - Valid bit for @tisci_msg_rm_ring_cfg_req count
* 3 - Valid bit for @tisci_msg_rm_ring_cfg_req mode
* 4 - Valid bit for @tisci_msg_rm_ring_cfg_req size
* 5 - Valid bit for @tisci_msg_rm_ring_cfg_req order_id
* @nav_id: Device ID of Navigator Subsystem from which the ring is allocated
* @index: ring index to be configured.
* @addr_lo: 32 LSBs of ring base address to be programmed into the ring's
* RING_BA_LO register
* @addr_hi: 16 MSBs of ring base address to be programmed into the ring's
* RING_BA_HI register.
* @count: Number of ring elements. Must be even when the ring is in
* CREDENTIALS or QM mode.
* @mode: Specifies the mode the ring is to be configured.
* @size: Specifies encoded ring element size. To calculate the encoded size use
* the formula (log2(size_bytes) - 2), where size_bytes cannot be
* greater than 256.
* @order_id: Specifies the ring's bus order ID.
*/
struct ti_sci_msg_rm_ring_cfg_req {
struct ti_sci_msg_hdr hdr;
u32 valid_params;
u16 nav_id;
u16 index;
u32 addr_lo;
u32 addr_hi;
u32 count;
u8 mode;
u8 size;
u8 order_id;
} __packed;
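/*
 * Worked example (editor's addition): @size encodes the element size as
 * log2(size_bytes) - 2, so 4-byte elements encode to 0, 8-byte to 1,
 * and the 256-byte maximum to 6. Using the kernel's ilog2() helper:
 */
static inline u8 example_ring_elsize_encode(u32 size_bytes)
{
        /* size_bytes must be a power of two between 4 and 256. */
        return ilog2(size_bytes) - 2;
}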
/**
* struct ti_sci_msg_rm_ring_get_cfg_req - Get RA ring's configuration
*
* Gets the configuration of the non-real-time register fields of a ring. The
* host, or a supervisor of the host, who owns the ring must be the requesting
* host. The values of the non-real-time registers are returned in
* @ti_sci_msg_rm_ring_get_cfg_resp.
*
* @hdr: Generic Header
* @nav_id: Device ID of Navigator Subsystem from which the ring is allocated
* @index: ring index.
*/
struct ti_sci_msg_rm_ring_get_cfg_req {
struct ti_sci_msg_hdr hdr;
u16 nav_id;
u16 index;
} __packed;
/**
* struct ti_sci_msg_rm_ring_get_cfg_resp - Ring get configuration response
*
* Response received by host processor after RM has handled
* @ti_sci_msg_rm_ring_get_cfg_req. The response contains the ring's
* non-real-time register values.
*
* @hdr: Generic Header
* @addr_lo: Ring 32 LSBs of base address
* @addr_hi: Ring 16 MSBs of base address.
* @count: Ring number of elements.
* @mode: Ring mode.
* @size: encoded Ring element size
* @order_id: Ring order ID.
*/
struct ti_sci_msg_rm_ring_get_cfg_resp {
struct ti_sci_msg_hdr hdr;
u32 addr_lo;
u32 addr_hi;
u32 count;
u8 mode;
u8 size;
u8 order_id;
} __packed;
/**
* struct ti_sci_msg_psil_pair - Pairs a PSI-L source thread to a destination
* thread
* @hdr: Generic Header
* @nav_id: SoC Navigator Subsystem device ID whose PSI-L config proxy is
* used to pair the source and destination threads.
* @src_thread: PSI-L source thread ID within the PSI-L System thread map.
*
* UDMAP transmit channels mapped to source threads will have their
* TCHAN_THRD_ID register programmed with the destination thread if the pairing
* is successful.
* @dst_thread: PSI-L destination thread ID within the PSI-L System thread map.
* PSI-L destination threads start at index 0x8000. The request is NACK'd if
* the destination thread is not greater than or equal to 0x8000.
*
* UDMAP receive channels mapped to destination threads will have their
* RCHAN_THRD_ID register programmed with the source thread if the pairing
* is successful.
*
* Request type is TI_SCI_MSG_RM_PSIL_PAIR, response is a generic ACK or NACK
* message.
*/
struct ti_sci_msg_psil_pair {
struct ti_sci_msg_hdr hdr;
u32 nav_id;
u32 src_thread;
u32 dst_thread;
} __packed;
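/*
 * Illustration (editor's addition): PSI-L destination threads start at
 * index 0x8000 and the firmware NACKs a pair request below that, so a
 * caller-side sanity check is simply:
 */
static inline bool example_psil_dst_thread_valid(u32 dst_thread)
{
        return dst_thread >= 0x8000;
}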
/**
* struct ti_sci_msg_psil_unpair - Unpairs a PSI-L source thread from a
* destination thread
* @hdr: Generic Header
* @nav_id: SoC Navigator Subsystem device ID whose PSI-L config proxy is
* used to unpair the source and destination threads.
* @src_thread: PSI-L source thread ID within the PSI-L System thread map.
*
* UDMAP transmit channels mapped to source threads will have their
* TCHAN_THRD_ID register cleared if the unpairing is successful.
*
* @dst_thread: PSI-L destination thread ID within the PSI-L System thread map.
* PSI-L destination threads start at index 0x8000. The request is NACK'd if
* the destination thread is not greater than or equal to 0x8000.
*
* UDMAP receive channels mapped to destination threads will have their
* RCHAN_THRD_ID register cleared if the unpairing is successful.
*
* Request type is TI_SCI_MSG_RM_PSIL_UNPAIR, response is a generic ACK or NACK
* message.
*/
struct ti_sci_msg_psil_unpair {
struct ti_sci_msg_hdr hdr;
u32 nav_id;
u32 src_thread;
u32 dst_thread;
} __packed;
/**
* struct ti_sci_msg_udmap_rx_flow_cfg - UDMAP receive flow configuration
* message
* @hdr: Generic Header
* @nav_id: SoC Navigator Subsystem device ID from which the receive flow is
* allocated
* @flow_index: UDMAP receive flow index for non-optional configuration.
* @rx_ch_index: Specifies the index of the receive channel using the flow_index
* @rx_einfo_present: UDMAP receive flow extended packet info present.
* @rx_psinfo_present: UDMAP receive flow PS words present.
* @rx_error_handling: UDMAP receive flow error handling configuration. Valid
* values are TI_SCI_RM_UDMAP_RX_FLOW_ERR_DROP/RETRY.
* @rx_desc_type: UDMAP receive flow descriptor type. It can be one of
* TI_SCI_RM_UDMAP_RX_FLOW_DESC_HOST/MONO.
* @rx_sop_offset: UDMAP receive flow start of packet offset.
* @rx_dest_qnum: UDMAP receive flow destination queue number.
* @rx_ps_location: UDMAP receive flow PS words location.
* 0 - end of packet descriptor
* 1 - Beginning of the data buffer
* @rx_src_tag_hi: UDMAP receive flow source tag high byte constant
* @rx_src_tag_lo: UDMAP receive flow source tag low byte constant
* @rx_dest_tag_hi: UDMAP receive flow destination tag high byte constant
* @rx_dest_tag_lo: UDMAP receive flow destination tag low byte constant
* @rx_src_tag_hi_sel: UDMAP receive flow source tag high byte selector
* @rx_src_tag_lo_sel: UDMAP receive flow source tag low byte selector
* @rx_dest_tag_hi_sel: UDMAP receive flow destination tag high byte selector
* @rx_dest_tag_lo_sel: UDMAP receive flow destination tag low byte selector
* @rx_size_thresh_en: UDMAP receive flow packet size based free buffer queue
* enable. If enabled, the ti_sci_rm_udmap_rx_flow_opt_cfg also needs to be
* configured and sent.
* @rx_fdq0_sz0_qnum: UDMAP receive flow free descriptor queue 0.
* @rx_fdq1_qnum: UDMAP receive flow free descriptor queue 1.
* @rx_fdq2_qnum: UDMAP receive flow free descriptor queue 2.
* @rx_fdq3_qnum: UDMAP receive flow free descriptor queue 3.
*
* For detailed information on the settings, see the UDMAP section of the TRM.
*/
struct ti_sci_msg_udmap_rx_flow_cfg {
struct ti_sci_msg_hdr hdr;
u32 nav_id;
u32 flow_index;
u32 rx_ch_index;
u8 rx_einfo_present;
u8 rx_psinfo_present;
u8 rx_error_handling;
u8 rx_desc_type;
u16 rx_sop_offset;
u16 rx_dest_qnum;
u8 rx_ps_location;
u8 rx_src_tag_hi;
u8 rx_src_tag_lo;
u8 rx_dest_tag_hi;
u8 rx_dest_tag_lo;
u8 rx_src_tag_hi_sel;
u8 rx_src_tag_lo_sel;
u8 rx_dest_tag_hi_sel;
u8 rx_dest_tag_lo_sel;
u8 rx_size_thresh_en;
u16 rx_fdq0_sz0_qnum;
u16 rx_fdq1_qnum;
u16 rx_fdq2_qnum;
u16 rx_fdq3_qnum;
} __packed;
/**
* struct rm_ti_sci_msg_udmap_rx_flow_opt_cfg - parameters for UDMAP receive
* flow optional configuration
* @hdr: Generic Header
* @nav_id: SoC Navigator Subsystem device ID from which the receive flow is
* allocated
* @flow_index: UDMAP receive flow index for optional configuration.
* @rx_ch_index: Specifies the index of the receive channel using the flow_index
* @rx_size_thresh0: UDMAP receive flow packet size threshold 0.
* @rx_size_thresh1: UDMAP receive flow packet size threshold 1.
* @rx_size_thresh2: UDMAP receive flow packet size threshold 2.
* @rx_fdq0_sz1_qnum: UDMAP receive flow free descriptor queue for size
* threshold 1.
* @rx_fdq0_sz2_qnum: UDMAP receive flow free descriptor queue for size
* threshold 2.
* @rx_fdq0_sz3_qnum: UDMAP receive flow free descriptor queue for size
* threshold 3.
*
* For detailed information on the settings, see the UDMAP section of the TRM.
*/
struct rm_ti_sci_msg_udmap_rx_flow_opt_cfg {
struct ti_sci_msg_hdr hdr;
u32 nav_id;
u32 flow_index;
u32 rx_ch_index;
u16 rx_size_thresh0;
u16 rx_size_thresh1;
u16 rx_size_thresh2;
u16 rx_fdq0_sz1_qnum;
u16 rx_fdq0_sz2_qnum;
u16 rx_fdq0_sz3_qnum;
} __packed;
/**
* Configures a Navigator Subsystem UDMAP transmit channel
*
* Configures the non-real-time registers of a Navigator Subsystem UDMAP
* transmit channel. The channel index must be assigned to the host defined
* in the TISCI header via the RM board configuration resource assignment
* range list.
*
* @hdr: Generic Header
*
* @valid_params: Bitfield defining validity of tx channel configuration
* parameters. The tx channel configuration fields are not valid, and will not
* be used for ch configuration, if their corresponding valid bit is zero.
* Valid bit usage:
* 0 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_pause_on_err
* 1 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_atype
* 2 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_chan_type
* 3 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_fetch_size
* 4 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::txcq_qnum
* 5 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_priority
* 6 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_qos
* 7 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_orderid
* 8 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_sched_priority
* 9 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_filt_einfo
* 10 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_filt_pswords
* 11 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_supr_tdpkt
* 12 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_credit_count
* 13 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::fdepth
* 14 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_burst_size
*
* @nav_id: SoC device ID of Navigator Subsystem where tx channel is located
*
* @index: UDMAP transmit channel index.
*
* @tx_pause_on_err: UDMAP transmit channel pause on error configuration to
* be programmed into the tx_pause_on_err field of the channel's TCHAN_TCFG
* register.
*
* @tx_filt_einfo: UDMAP transmit channel extended packet information passing
* configuration to be programmed into the tx_filt_einfo field of the
* channel's TCHAN_TCFG register.
*
* @tx_filt_pswords: UDMAP transmit channel protocol specific word passing
* configuration to be programmed into the tx_filt_pswords field of the
* channel's TCHAN_TCFG register.
*
* @tx_atype: UDMAP transmit channel non Ring Accelerator access pointer
* interpretation configuration to be programmed into the tx_atype field of
* the channel's TCHAN_TCFG register.
*
* @tx_chan_type: UDMAP transmit channel functional channel type and work
* passing mechanism configuration to be programmed into the tx_chan_type
* field of the channel's TCHAN_TCFG register.
*
* @tx_supr_tdpkt: UDMAP transmit channel teardown packet generation suppression
* configuration to be programmed into the tx_supr_tdpkt field of the channel's
* TCHAN_TCFG register.
*
* @tx_fetch_size: UDMAP transmit channel number of 32-bit descriptor words to
* fetch configuration to be programmed into the tx_fetch_size field of the
* channel's TCHAN_TCFG register. The user must make sure to set the maximum
* word count that can pass through the channel for any allowed descriptor type.
*
* @tx_credit_count: UDMAP transmit channel transfer request credit count
* configuration to be programmed into the count field of the TCHAN_TCREDIT
* register. Specifies how many credits for complete TRs are available.
*
* @txcq_qnum: UDMAP transmit channel completion queue configuration to be
* programmed into the txcq_qnum field of the TCHAN_TCQ register. The specified
* completion queue must be assigned to the host, or a subordinate of the host,
* requesting configuration of the transmit channel.
*
* @tx_priority: UDMAP transmit channel transmit priority value to be programmed
* into the priority field of the channel's TCHAN_TPRI_CTRL register.
*
* @tx_qos: UDMAP transmit channel transmit qos value to be programmed into the
* qos field of the channel's TCHAN_TPRI_CTRL register.
*
* @tx_orderid: UDMAP transmit channel bus order id value to be programmed into
* the orderid field of the channel's TCHAN_TPRI_CTRL register.
*
* @fdepth: UDMAP transmit channel FIFO depth configuration to be programmed
* into the fdepth field of the TCHAN_TFIFO_DEPTH register. Sets the number of
* Tx FIFO bytes which are allowed to be stored for the channel. Check the UDMAP
* section of the TRM for restrictions regarding this parameter.
*
* @tx_sched_priority: UDMAP transmit channel tx scheduling priority
* configuration to be programmed into the priority field of the channel's
* TCHAN_TST_SCHED register.
*
* @tx_burst_size: UDMAP transmit channel burst size configuration to be
* programmed into the tx_burst_size field of the TCHAN_TCFG register.
*/
struct ti_sci_msg_rm_udmap_tx_ch_cfg_req {
struct ti_sci_msg_hdr hdr;
u32 valid_params;
u16 nav_id;
u16 index;
u8 tx_pause_on_err;
u8 tx_filt_einfo;
u8 tx_filt_pswords;
u8 tx_atype;
u8 tx_chan_type;
u8 tx_supr_tdpkt;
u16 tx_fetch_size;
u8 tx_credit_count;
u16 txcq_qnum;
u8 tx_priority;
u8 tx_qos;
u8 tx_orderid;
u16 fdepth;
u8 tx_sched_priority;
u8 tx_burst_size;
} __packed;
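/*
 * Sketch (editor's addition): @valid_params is a plain bitfield, so a
 * caller that only wants to program the channel type, fetch size and
 * completion queue sets bits 2, 3 and 4 and the firmware ignores every
 * other field:
 */
static inline u32 example_tx_ch_valid_params(void)
{
        return (1u << 2) |      /* tx_chan_type */
               (1u << 3) |      /* tx_fetch_size */
               (1u << 4);       /* txcq_qnum */
}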
/**
* Configures a Navigator Subsystem UDMAP receive channel
*
* Configures the non-real-time registers of a Navigator Subsystem UDMAP
* receive channel. The channel index must be assigned to the host defined
* in the TISCI header via the RM board configuration resource assignment
* range list.
*
* @hdr: Generic Header
*
* @valid_params: Bitfield defining validity of rx channel configuration
* parameters.
* The rx channel configuration fields are not valid, and will not be used for
* ch configuration, if their corresponding valid bit is zero.
* Valid bit usage:
* 0 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_pause_on_err
* 1 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_atype
* 2 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_chan_type
* 3 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_fetch_size
* 4 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rxcq_qnum
* 5 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_priority
* 6 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_qos
* 7 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_orderid
* 8 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_sched_priority
* 9 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::flowid_start
* 10 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::flowid_cnt
* 11 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_ignore_short
* 12 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_ignore_long
* 14 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_burst_size
*
* @nav_id: SoC device ID of Navigator Subsystem where rx channel is located
*
* @index: UDMAP receive channel index.
*
* @rx_fetch_size: UDMAP receive channel number of 32-bit descriptor words to
* fetch configuration to be programmed into the rx_fetch_size field of the
* channel's RCHAN_RCFG register.
*
* @rxcq_qnum: UDMAP receive channel completion queue configuration to be
* programmed into the rxcq_qnum field of the RCHAN_RCQ register.
* The specified completion queue must be assigned to the host, or a subordinate
* of the host, requesting configuration of the receive channel.
*
* @rx_priority: UDMAP receive channel receive priority value to be programmed
* into the priority field of the channel's RCHAN_RPRI_CTRL register.
*
* @rx_qos: UDMAP receive channel receive qos value to be programmed into the
* qos field of the channel's RCHAN_RPRI_CTRL register.
*
* @rx_orderid: UDMAP receive channel bus order id value to be programmed into
* the orderid field of the channel's RCHAN_RPRI_CTRL register.
*
* @rx_sched_priority: UDMAP receive channel rx scheduling priority
* configuration to be programmed into the priority field of the channel's
* RCHAN_RST_SCHED register.
*
* @flowid_start: UDMAP receive channel additional flows starting index
* configuration to program into the flow_start field of the RCHAN_RFLOW_RNG
* register. Specifies the starting index for flow IDs the receive channel is to
* make use of beyond the default flow. flowid_start and @ref flowid_cnt must be
* set as valid and configured together. The starting flow ID set by
* @ref flowid_start must be a flow index within the Navigator Subsystem's subset
* of flows beyond the default flows statically mapped to receive channels.
* The additional flows must be assigned to the host, or a subordinate of the
* host, requesting configuration of the receive channel.
*
* @flowid_cnt: UDMAP receive channel additional flows count configuration to
* program into the flowid_cnt field of the RCHAN_RFLOW_RNG register.
* This field specifies how many flow IDs are in the additional contiguous range
* of legal flow IDs for the channel. @ref flowid_start and flowid_cnt must be
* set as valid and configured together. Disabling the valid_params field bit
* for flowid_cnt indicates no flow IDs other than the default are to be
* allocated and used by the receive channel. @ref flowid_start plus flowid_cnt
* cannot be greater than the number of receive flows in the receive channel's
* Navigator Subsystem. The additional flows must be assigned to the host, or a
* subordinate of the host, requesting configuration of the receive channel.
*
* @rx_pause_on_err: UDMAP receive channel pause on error configuration to be
* programmed into the rx_pause_on_err field of the channel's RCHAN_RCFG
* register.
*
* @rx_atype: UDMAP receive channel non Ring Accelerator access pointer
* interpretation configuration to be programmed into the rx_atype field of the
* channel's RCHAN_RCFG register.
*
* @rx_chan_type: UDMAP receive channel functional channel type and work passing
* mechanism configuration to be programmed into the rx_chan_type field of the
* channel's RCHAN_RCFG register.
*
* @rx_ignore_short: UDMAP receive channel short packet treatment configuration
* to be programmed into the rx_ignore_short field of the RCHAN_RCFG register.
*
* @rx_ignore_long: UDMAP receive channel long packet treatment configuration to
* be programmed into the rx_ignore_long field of the RCHAN_RCFG register.
*
* @rx_burst_size: UDMAP receive channel burst size configuration to be
* programmed into the rx_burst_size field of the RCHAN_RCFG register.
*/
struct ti_sci_msg_rm_udmap_rx_ch_cfg_req {
struct ti_sci_msg_hdr hdr;
u32 valid_params;
u16 nav_id;
u16 index;
u16 rx_fetch_size;
u16 rxcq_qnum;
u8 rx_priority;
u8 rx_qos;
u8 rx_orderid;
u8 rx_sched_priority;
u16 flowid_start;
u16 flowid_cnt;
u8 rx_pause_on_err;
u8 rx_atype;
u8 rx_chan_type;
u8 rx_ignore_short;
u8 rx_ignore_long;
u8 rx_burst_size;
} __packed;
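/*
 * Sketch (editor's addition; the flow range is hypothetical): since
 * flowid_start and flowid_cnt are only honoured together, a caller
 * requesting the additional flow range 48..55 marks both bits 9 and 10
 * valid along with the range itself:
 */
static inline void example_rx_ch_flow_range(struct ti_sci_msg_rm_udmap_rx_ch_cfg_req *req)
{
        req->flowid_start = 48; /* hypothetical first additional flow */
        req->flowid_cnt = 8;
        req->valid_params |= (1u << 9) | (1u << 10);
}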
/**
* Configures a Navigator Subsystem UDMAP receive flow
*
* Configures a Navigator Subsystem UDMAP receive flow's registers.
* Configuration does not include the flow registers which handle size-based
* free descriptor queue routing.
*
* The flow index must be assigned to the host defined in the TISCI header via
* the RM board configuration resource assignment range list.
*
* @hdr: Standard TISCI header
*
* @valid_params:
* Bitfield defining validity of rx flow configuration parameters. The
* rx flow configuration fields are not valid, and will not be used for flow
* configuration, if their corresponding valid bit is zero. Valid bit usage:
* 0 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_einfo_present
* 1 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_psinfo_present
* 2 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_error_handling
* 3 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_desc_type
* 4 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_sop_offset
* 5 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_qnum
* 6 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_hi
* 7 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_lo
* 8 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_hi
* 9 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_lo
* 10 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_hi_sel
* 11 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_lo_sel
* 12 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_hi_sel
* 13 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_lo_sel
* 14 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq0_sz0_qnum
* 15 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq1_sz0_qnum
* 16 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq2_sz0_qnum
* 17 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq3_sz0_qnum
* 18 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_ps_location
*
* @nav_id: SoC device ID of Navigator Subsystem from which the receive flow is
* allocated
*
* @flow_index: UDMAP receive flow index for non-optional configuration.
*
* @rx_einfo_present:
* UDMAP receive flow extended packet info present configuration to be
* programmed into the rx_einfo_present field of the flow's RFLOW_RFA register.
*
* @rx_psinfo_present:
* UDMAP receive flow PS words present configuration to be programmed into the
* rx_psinfo_present field of the flow's RFLOW_RFA register.
*
* @rx_error_handling:
* UDMAP receive flow error handling configuration to be programmed into the
* rx_error_handling field of the flow's RFLOW_RFA register.
*
* @rx_desc_type:
* UDMAP receive flow descriptor type configuration to be programmed into the
* rx_desc_type field of the flow's RFLOW_RFA register.
*
* @rx_sop_offset:
* UDMAP receive flow start of packet offset configuration to be programmed
* into the rx_sop_offset field of the RFLOW_RFA register. See the UDMAP
* section of the TRM for more information on this setting. Valid values for
* this field are 0-255 bytes.
*
* @rx_dest_qnum:
* UDMAP receive flow destination queue configuration to be programmed into the
* rx_dest_qnum field of the flow's RFLOW_RFA register. The specified
* destination queue must be valid within the Navigator Subsystem and must be
* owned by the host, or a subordinate of the host, requesting allocation and
* configuration of the receive flow.
*
* @rx_src_tag_hi:
* UDMAP receive flow source tag high byte constant configuration to be
* programmed into the rx_src_tag_hi field of the flow's RFLOW_RFB register.
* See the UDMAP section of the TRM for more information on this setting.
*
* @rx_src_tag_lo:
* UDMAP receive flow source tag low byte constant configuration to be
* programmed into the rx_src_tag_lo field of the flow's RFLOW_RFB register.
* See the UDMAP section of the TRM for more information on this setting.
*
* @rx_dest_tag_hi:
* UDMAP receive flow destination tag high byte constant configuration to be
* programmed into the rx_dest_tag_hi field of the flow's RFLOW_RFB register.
* See the UDMAP section of the TRM for more information on this setting.
*
* @rx_dest_tag_lo:
* UDMAP receive flow destination tag low byte constant configuration to be
* programmed into the rx_dest_tag_lo field of the flow's RFLOW_RFB register.
* See the UDMAP section of the TRM for more information on this setting.
*
* @rx_src_tag_hi_sel:
* UDMAP receive flow source tag high byte selector configuration to be
* programmed into the rx_src_tag_hi_sel field of the RFLOW_RFC register. See
* the UDMAP section of the TRM for more information on this setting.
*
* @rx_src_tag_lo_sel:
* UDMAP receive flow source tag low byte selector configuration to be
* programmed into the rx_src_tag_lo_sel field of the RFLOW_RFC register. See
* the UDMAP section of the TRM for more information on this setting.
*
* @rx_dest_tag_hi_sel:
* UDMAP receive flow destination tag high byte selector configuration to be
* programmed into the rx_dest_tag_hi_sel field of the RFLOW_RFC register. See
* the UDMAP section of the TRM for more information on this setting.
*
* @rx_dest_tag_lo_sel:
* UDMAP receive flow destination tag low byte selector configuration to be
* programmed into the rx_dest_tag_lo_sel field of the RFLOW_RFC register. See
* the UDMAP section of the TRM for more information on this setting.
*
* @rx_fdq0_sz0_qnum:
* UDMAP receive flow free descriptor queue 0 configuration to be programmed
* into the rx_fdq0_sz0_qnum field of the flow's RFLOW_RFD register. See the
* UDMAP section of the TRM for more information on this setting. The specified
* free queue must be valid within the Navigator Subsystem and must be owned
* by the host, or a subordinate of the host, requesting allocation and
* configuration of the receive flow.
*
* @rx_fdq1_qnum:
* UDMAP receive flow free descriptor queue 1 configuration to be programmed
* into the rx_fdq1_qnum field of the flow's RFLOW_RFD register. See the
* UDMAP section of the TRM for more information on this setting. The specified
* free queue must be valid within the Navigator Subsystem and must be owned
* by the host, or a subordinate of the host, requesting allocation and
* configuration of the receive flow.
*
* @rx_fdq2_qnum:
* UDMAP receive flow free descriptor queue 2 configuration to be programmed
* into the rx_fdq2_qnum field of the flow's RFLOW_RFE register. See the
* UDMAP section of the TRM for more information on this setting. The specified
* free queue must be valid within the Navigator Subsystem and must be owned
* by the host, or a subordinate of the host, requesting allocation and
* configuration of the receive flow.
*
* @rx_fdq3_qnum:
* UDMAP receive flow free descriptor queue 3 configuration to be programmed
* into the rx_fdq3_qnum field of the flow's RFLOW_RFE register. See the
* UDMAP section of the TRM for more information on this setting. The specified
* free queue must be valid within the Navigator Subsystem and must be owned
* by the host, or a subordinate of the host, requesting allocation and
* configuration of the receive flow.
*
* @rx_ps_location:
* UDMAP receive flow PS words location configuration to be programmed into the
* rx_ps_location field of the flow's RFLOW_RFA register.
*/
struct ti_sci_msg_rm_udmap_flow_cfg_req {
struct ti_sci_msg_hdr hdr;
u32 valid_params;
u16 nav_id;
u16 flow_index;
u8 rx_einfo_present;
u8 rx_psinfo_present;
u8 rx_error_handling;
u8 rx_desc_type;
u16 rx_sop_offset;
u16 rx_dest_qnum;
u8 rx_src_tag_hi;
u8 rx_src_tag_lo;
u8 rx_dest_tag_hi;
u8 rx_dest_tag_lo;
u8 rx_src_tag_hi_sel;
u8 rx_src_tag_lo_sel;
u8 rx_dest_tag_hi_sel;
u8 rx_dest_tag_lo_sel;
u16 rx_fdq0_sz0_qnum;
u16 rx_fdq1_qnum;
u16 rx_fdq2_qnum;
u16 rx_fdq3_qnum;
u8 rx_ps_location;
} __packed;
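/*
 * Usage sketch (editor's addition; queue and flow numbers are
 * hypothetical): a client fills only the fields it marks valid and
 * hands the parameters to the op wired up in ti_sci_setup_ops():
 */
static int example_cfg_rx_flow(const struct ti_sci_handle *handle,
                               u16 nav_id, u16 flow_index)
{
        struct ti_sci_msg_rm_udmap_flow_cfg cfg = {
                .valid_params = 1u << 5,        /* only rx_dest_qnum is valid */
                .nav_id = nav_id,
                .flow_index = flow_index,
                .rx_dest_qnum = 128,            /* hypothetical destination queue */
        };

        return handle->ops.rm_udmap_ops.rx_flow_cfg(handle, &cfg);
}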
/**
* struct ti_sci_msg_req_proc_request - Request a processor
* @hdr: Generic Header
* @processor_id: ID of processor being requested
*
* Request type is TI_SCI_MSG_PROC_REQUEST, response is a generic ACK/NACK
* message.
*/
struct ti_sci_msg_req_proc_request {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
} __packed;
/**
* struct ti_sci_msg_req_proc_release - Release a processor
* @hdr: Generic Header
* @processor_id: ID of processor being released
*
* Request type is TI_SCI_MSG_PROC_RELEASE, response is a generic ACK/NACK
* message.
*/
struct ti_sci_msg_req_proc_release {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
} __packed;
/**
* struct ti_sci_msg_req_proc_handover - Handover a processor to a host
* @hdr: Generic Header
* @processor_id: ID of processor being handed over
* @host_id: Host ID the control needs to be transferred to
*
* Request type is TI_SCI_MSG_PROC_HANDOVER, response is a generic ACK/NACK
* message.
*/
struct ti_sci_msg_req_proc_handover {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
u8 host_id;
} __packed;
/* Boot Vector masks */
#define TI_SCI_ADDR_LOW_MASK GENMASK_ULL(31, 0)
#define TI_SCI_ADDR_HIGH_MASK GENMASK_ULL(63, 32)
#define TI_SCI_ADDR_HIGH_SHIFT 32
/**
* struct ti_sci_msg_req_set_config - Set Processor boot configuration
* @hdr: Generic Header
* @processor_id: ID of processor being configured
* @bootvector_low: Lower 32 bit address (Little Endian) of boot vector
* @bootvector_high: Higher 32 bit address (Little Endian) of boot vector
* @config_flags_set: Optional Processor specific Config Flags to set.
* Setting a bit here implies the corresponding mode
* will be set
* @config_flags_clear: Optional Processor specific Config Flags to clear.
* Setting a bit here implies the corresponding mode
* will be cleared
*
* Request type is TI_SCI_MSG_SET_CONFIG, response is a generic ACK/NACK
* message.
*/
struct ti_sci_msg_req_set_config {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
u32 bootvector_low;
u32 bootvector_high;
u32 config_flags_set;
u32 config_flags_clear;
} __packed;
/**
* struct ti_sci_msg_req_set_ctrl - Set Processor boot control flags
* @hdr: Generic Header
* @processor_id: ID of processor being configured
* @control_flags_set: Optional Processor specific Control Flags to set.
* Setting a bit here implies the corresponding mode
* will be set
* @control_flags_clear: Optional Processor specific Control Flags to clear.
* Setting a bit here implies the corresponding mode
* will be cleared
*
* Request type is TI_SCI_MSG_SET_CTRL, response is a generic ACK/NACK
* message.
*/
struct ti_sci_msg_req_set_ctrl {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
u32 control_flags_set;
u32 control_flags_clear;
} __packed;
/**
* struct ti_sci_msg_req_get_status - Processor boot status request
* @hdr: Generic Header
* @processor_id: ID of processor whose status is being requested
*
* Request type is TI_SCI_MSG_GET_STATUS, response is an appropriate
* message, or NACK in case of inability to satisfy request.
*/
struct ti_sci_msg_req_get_status {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
} __packed;
/**
* struct ti_sci_msg_resp_get_status - Processor boot status response
* @hdr: Generic Header
* @processor_id: ID of processor whose status is returned
* @bootvector_low: Lower 32 bit address (Little Endian) of boot vector
* @bootvector_high: Higher 32 bit address (Little Endian) of boot vector
* @config_flags: Optional Processor specific Config Flags set currently
* @control_flags: Optional Processor specific Control Flags set currently
* @status_flags: Optional Processor specific Status Flags set currently
*
* Response structure to a TI_SCI_MSG_GET_STATUS request.
*/
struct ti_sci_msg_resp_get_status {
struct ti_sci_msg_hdr hdr;
u8 processor_id;
u32 bootvector_low;
u32 bootvector_high;
u32 config_flags;
u32 control_flags;
u32 status_flags;
} __packed;
#endif /* __TI_SCI_H */

drivers/hwmon/scmi-hwmon.c

@@ -18,6 +18,50 @@ struct scmi_sensors {
const struct scmi_sensor_info **info[hwmon_max];
};
static inline u64 __pow10(u8 x)
{
u64 r = 1;
while (x--)
r *= 10;
return r;
}
static int scmi_hwmon_scale(const struct scmi_sensor_info *sensor, u64 *value)
{
s8 scale = sensor->scale;
u64 f;
switch (sensor->type) {
case TEMPERATURE_C:
case VOLTAGE:
case CURRENT:
scale += 3;
break;
case POWER:
case ENERGY:
scale += 6;
break;
default:
break;
}
if (scale == 0)
return 0;
if (abs(scale) > 19)
return -E2BIG;
f = __pow10(abs(scale));
if (scale > 0)
*value *= f;
else
*value = div64_u64(*value, f);
return 0;
}
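/*
 * Worked example (editor's addition): hwmon reports temperature in
 * millidegrees, so TEMPERATURE_C adds 3 to the sensor's decimal scale.
 * A sensor reporting centidegrees has scale = -2; the effective scale
 * becomes -2 + 3 = 1, and a raw reading of 4215 is scaled by 10^1 to
 * 42150 millidegrees, i.e. 42.15 degrees Celsius.
 */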
static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, long *val)
{
@@ -29,6 +73,10 @@ static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
sensor = *(scmi_sensors->info[type] + channel);
ret = h->sensor_ops->reading_get(h, sensor->id, false, &value);
if (ret)
return ret;
ret = scmi_hwmon_scale(sensor, &value);
if (!ret)
*val = value;

drivers/memory/Kconfig

@@ -8,6 +8,14 @@ menuconfig MEMORY
if MEMORY
config DDR
bool
help
Data from JEDEC specs for DDR SDRAM memories,
particularly the AC timing parameters and addressing
information. This data is useful for drivers handling
DDR SDRAM controllers.
config ARM_PL172_MPMC
tristate "ARM PL172 MPMC driver"
depends on ARM_AMBA && OF

drivers/memory/Makefile

@@ -3,6 +3,7 @@
# Makefile for memory devices
#
obj-$(CONFIG_DDR) += jedec_ddr_data.o
ifeq ($(CONFIG_DDR),y)
obj-$(CONFIG_OF) += of_memory.o
endif

drivers/memory/brcmstb_dpfe.c

@@ -33,10 +33,10 @@
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#define DRVNAME "brcmstb-dpfe"
#define FIRMWARE_NAME "dpfe.bin"
/* DCPU register offsets */
#define REG_DCPU_RESET 0x0
@@ -59,6 +59,7 @@
#define DRAM_INFO_MR4 0x4
#define DRAM_INFO_ERROR 0x8
#define DRAM_INFO_MR4_MASK 0xff
#define DRAM_INFO_MR4_SHIFT 24 /* We need to look at byte 3 */
/* DRAM MR4 Offsets & Masks */
#define DRAM_MR4_REFRESH 0x0 /* Refresh rate */
@@ -73,13 +74,23 @@
#define DRAM_MR4_TH_OFFS_MASK 0x3
#define DRAM_MR4_TUF_MASK 0x1
/* DRAM Vendor Offsets & Masks */
/* DRAM Vendor Offsets & Masks (API v2) */
#define DRAM_VENDOR_MR5 0x0
#define DRAM_VENDOR_MR6 0x4
#define DRAM_VENDOR_MR7 0x8
#define DRAM_VENDOR_MR8 0xc
#define DRAM_VENDOR_ERROR 0x10
#define DRAM_VENDOR_MASK 0xff
#define DRAM_VENDOR_SHIFT 24 /* We need to look at byte 3 */
/* DRAM Information Offsets & Masks (API v3) */
#define DRAM_DDR_INFO_MR4 0x0
#define DRAM_DDR_INFO_MR5 0x4
#define DRAM_DDR_INFO_MR6 0x8
#define DRAM_DDR_INFO_MR7 0xc
#define DRAM_DDR_INFO_MR8 0x10
#define DRAM_DDR_INFO_ERROR 0x14
#define DRAM_DDR_INFO_MASK 0xff
/* Reset register bits & masks */
#define DCPU_RESET_SHIFT 0x0
@@ -109,7 +120,7 @@
#define DPFE_MSG_TYPE_COMMAND 1
#define DPFE_MSG_TYPE_RESPONSE 2
#define DELAY_LOOP_MAX 200000
#define DELAY_LOOP_MAX 1000
enum dpfe_msg_fields {
MSG_HEADER,
@@ -117,7 +128,7 @@ enum dpfe_msg_fields {
MSG_ARG_COUNT,
MSG_ARG0,
MSG_CHKSUM,
MSG_FIELD_MAX /* Last entry */
MSG_FIELD_MAX = 16 /* Max number of arguments */
};
enum dpfe_commands {
@@ -127,14 +138,6 @@ enum dpfe_commands {
DPFE_CMD_MAX /* Last entry */
};
struct dpfe_msg {
u32 header;
u32 command;
u32 arg_count;
u32 arg0;
u32 chksum; /* This is the sum of all other entries. */
};
/*
* Format of the binary firmware file:
*
@@ -168,12 +171,21 @@ struct init_data {
bool is_big_endian;
};
/* API version and corresponding commands */
struct dpfe_api {
int version;
const char *fw_name;
const struct attribute_group **sysfs_attrs;
u32 command[DPFE_CMD_MAX][MSG_FIELD_MAX];
};
/* Things we need for as long as we are active. */
struct private_data {
void __iomem *regs;
void __iomem *dmem;
void __iomem *imem;
struct device *dev;
const struct dpfe_api *dpfe_api;
struct mutex lock;
};
@@ -182,28 +194,99 @@ static const char *error_text[] = {
"Incorrect checksum", "Malformed command", "Timed out",
};
/* List of supported firmware commands */
static const u32 dpfe_commands[DPFE_CMD_MAX][MSG_FIELD_MAX] = {
[DPFE_CMD_GET_INFO] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 1,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 4,
},
[DPFE_CMD_GET_REFRESH] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 2,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 5,
},
[DPFE_CMD_GET_VENDOR] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 2,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 2,
[MSG_CHKSUM] = 6,
/*
* Forward declaration of our sysfs attribute functions, so we can declare the
* attribute data structures early.
*/
static ssize_t show_info(struct device *, struct device_attribute *, char *);
static ssize_t show_refresh(struct device *, struct device_attribute *, char *);
static ssize_t store_refresh(struct device *, struct device_attribute *,
const char *, size_t);
static ssize_t show_vendor(struct device *, struct device_attribute *, char *);
static ssize_t show_dram(struct device *, struct device_attribute *, char *);
/*
* Declare our attributes early, so they can be referenced in the API data
* structure. We need to do this, because the attributes depend on the API
* version.
*/
static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL);
static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh);
static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL);
static DEVICE_ATTR(dpfe_dram, 0444, show_dram, NULL);
/* API v2 sysfs attributes */
static struct attribute *dpfe_v2_attrs[] = {
&dev_attr_dpfe_info.attr,
&dev_attr_dpfe_refresh.attr,
&dev_attr_dpfe_vendor.attr,
NULL
};
ATTRIBUTE_GROUPS(dpfe_v2);
/* API v3 sysfs attributes */
static struct attribute *dpfe_v3_attrs[] = {
&dev_attr_dpfe_info.attr,
&dev_attr_dpfe_dram.attr,
NULL
};
ATTRIBUTE_GROUPS(dpfe_v3);
/* API v2 firmware commands */
static const struct dpfe_api dpfe_api_v2 = {
.version = 2,
.fw_name = "dpfe.bin",
.sysfs_attrs = dpfe_v2_groups,
.command = {
[DPFE_CMD_GET_INFO] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 1,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 4,
},
[DPFE_CMD_GET_REFRESH] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 2,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 5,
},
[DPFE_CMD_GET_VENDOR] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 2,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 2,
[MSG_CHKSUM] = 6,
},
}
};
/* API v3 firmware commands */
static const struct dpfe_api dpfe_api_v3 = {
.version = 3,
.fw_name = NULL, /* We expect the firmware to have been downloaded! */
.sysfs_attrs = dpfe_v3_groups,
.command = {
[DPFE_CMD_GET_INFO] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 0x0101,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 0x104,
},
[DPFE_CMD_GET_REFRESH] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 0x0202,
[MSG_ARG_COUNT] = 0,
/*
* This is a bit ugly. Without arguments, the checksum
* follows right after the argument count and not at
* offset MSG_CHKSUM.
*/
[MSG_ARG0] = 0x203,
},
/* There's no GET_VENDOR command in API v3. */
},
};
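/*
 * Illustration (editor's addition): with a variable argument count the
 * checksum sits right after the last argument, at index
 * MSG_ARG_COUNT + argc + 1. For the v3 GET_REFRESH command above
 * (argc == 0) that is index 3, which is why its checksum 0x203 lands in
 * the MSG_ARG0 slot. __send_command() computes the same index from the
 * response:
 */
static inline unsigned int example_chksum_idx(const u32 msg[])
{
        return msg[MSG_ARG_COUNT] + MSG_ARG_COUNT + 1;
}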
@@ -248,13 +331,13 @@ static void __enable_dcpu(void __iomem *regs)
writel_relaxed(val, regs + REG_DCPU_RESET);
}
static unsigned int get_msg_chksum(const u32 msg[])
static unsigned int get_msg_chksum(const u32 msg[], unsigned int max)
{
unsigned int sum = 0;
unsigned int i;
/* Don't include the last field in the checksum. */
for (i = 0; i < MSG_FIELD_MAX - 1; i++)
for (i = 0; i < max; i++)
sum += msg[i];
return sum;
@@ -267,6 +350,11 @@ static void __iomem *get_msg_ptr(struct private_data *priv, u32 response,
unsigned int offset;
void __iomem *ptr = NULL;
/* There is no need to use this function for API v3 or later. */
if (unlikely(priv->dpfe_api->version >= 3)) {
return NULL;
}
msg_type = (response >> DRAM_MSG_TYPE_OFFSET) & DRAM_MSG_TYPE_MASK;
offset = (response >> DRAM_MSG_ADDR_OFFSET) & DRAM_MSG_ADDR_MASK;
@@ -294,12 +382,25 @@ static void __iomem *get_msg_ptr(struct private_data *priv, u32 response,
return ptr;
}
static void __finalize_command(struct private_data *priv)
{
unsigned int release_mbox;
/*
 * Which MBOX register we have to write to signal we are done depends
 * on the API version.
 */
release_mbox = (priv->dpfe_api->version < 3)
? REG_TO_HOST_MBOX : REG_TO_DCPU_MBOX;
writel_relaxed(0, priv->regs + release_mbox);
}
static int __send_command(struct private_data *priv, unsigned int cmd,
u32 result[])
{
const u32 *msg = dpfe_commands[cmd];
const u32 *msg = priv->dpfe_api->command[cmd];
void __iomem *regs = priv->regs;
unsigned int i, chksum;
unsigned int i, chksum, chksum_idx;
int ret = 0;
u32 resp;
@@ -308,6 +409,18 @@ static int __send_command(struct private_data *priv, unsigned int cmd,
mutex_lock(&priv->lock);
/* Wait for DCPU to become ready */
for (i = 0; i < DELAY_LOOP_MAX; i++) {
resp = readl_relaxed(regs + REG_TO_HOST_MBOX);
if (resp == 0)
break;
msleep(1);
}
if (resp != 0) {
mutex_unlock(&priv->lock);
return -ETIMEDOUT;
}
/* Write command and arguments to message area */
for (i = 0; i < MSG_FIELD_MAX; i++)
writel_relaxed(msg[i], regs + DCPU_MSG_RAM(i));
@@ -321,7 +434,7 @@ static int __send_command(struct private_data *priv, unsigned int cmd,
resp = readl_relaxed(regs + REG_TO_HOST_MBOX);
if (resp > 0)
break;
udelay(5);
msleep(1);
}
if (i == DELAY_LOOP_MAX) {
@ -331,10 +444,11 @@ static int __send_command(struct private_data *priv, unsigned int cmd,
/* Read response data */
for (i = 0; i < MSG_FIELD_MAX; i++)
result[i] = readl_relaxed(regs + DCPU_MSG_RAM(i));
chksum_idx = result[MSG_ARG_COUNT] + MSG_ARG_COUNT + 1;
}
/* Tell DCPU we are done */
writel_relaxed(0, regs + REG_TO_HOST_MBOX);
__finalize_command(priv);
mutex_unlock(&priv->lock);
@@ -342,8 +456,8 @@ static int __send_command(struct private_data *priv, unsigned int cmd,
return ret;
/* Verify response */
chksum = get_msg_chksum(result);
if (chksum != result[MSG_CHKSUM])
chksum = get_msg_chksum(result, chksum_idx);
if (chksum != result[chksum_idx])
resp = DCPU_RET_ERR_CHKSUM;
if (resp != DCPU_RET_SUCCESS) {
@@ -484,7 +598,15 @@ static int brcmstb_dpfe_download_firmware(struct platform_device *pdev,
return 0;
}
ret = request_firmware(&fw, FIRMWARE_NAME, dev);
/*
* If the firmware filename is NULL it means the boot firmware has to
* download the DCPU firmware for us. If that didn't work, we have to
* bail, since downloading it ourselves wouldn't work either.
*/
if (!priv->dpfe_api->fw_name)
return -ENODEV;
ret = request_firmware(&fw, priv->dpfe_api->fw_name, dev);
/* request_firmware() prints its own error messages. */
if (ret)
return ret;
@@ -525,12 +647,10 @@ static int brcmstb_dpfe_download_firmware(struct platform_device *pdev,
}
static ssize_t generic_show(unsigned int command, u32 response[],
struct device *dev, char *buf)
struct private_data *priv, char *buf)
{
struct private_data *priv;
int ret;
priv = dev_get_drvdata(dev);
if (!priv)
return sprintf(buf, "ERROR: driver private data not set\n");
@@ -545,10 +665,12 @@ static ssize_t show_info(struct device *dev, struct device_attribute *devattr,
char *buf)
{
u32 response[MSG_FIELD_MAX];
struct private_data *priv;
unsigned int info;
ssize_t ret;
ret = generic_show(DPFE_CMD_GET_INFO, response, dev, buf);
priv = dev_get_drvdata(dev);
ret = generic_show(DPFE_CMD_GET_INFO, response, priv, buf);
if (ret)
return ret;
@@ -571,17 +693,17 @@ static ssize_t show_refresh(struct device *dev,
u32 mr4;
ssize_t ret;
ret = generic_show(DPFE_CMD_GET_REFRESH, response, dev, buf);
priv = dev_get_drvdata(dev);
ret = generic_show(DPFE_CMD_GET_REFRESH, response, priv, buf);
if (ret)
return ret;
priv = dev_get_drvdata(dev);
info = get_msg_ptr(priv, response[MSG_ARG0], buf, &ret);
if (!info)
return ret;
mr4 = readl_relaxed(info + DRAM_INFO_MR4) & DRAM_INFO_MR4_MASK;
mr4 = (readl_relaxed(info + DRAM_INFO_MR4) >> DRAM_INFO_MR4_SHIFT) &
DRAM_INFO_MR4_MASK;
refresh = (mr4 >> DRAM_MR4_REFRESH) & DRAM_MR4_REFRESH_MASK;
sr_abort = (mr4 >> DRAM_MR4_SR_ABORT) & DRAM_MR4_SR_ABORT_MASK;
@@ -608,7 +730,6 @@ static ssize_t store_refresh(struct device *dev, struct device_attribute *attr,
return -EINVAL;
priv = dev_get_drvdata(dev);
ret = __send_command(priv, DPFE_CMD_GET_REFRESH, response);
if (ret)
return ret;
@@ -623,30 +744,58 @@ static ssize_t store_refresh(struct device *dev, struct device_attribute *attr,
}
static ssize_t show_vendor(struct device *dev, struct device_attribute *devattr,
char *buf)
char *buf)
{
u32 response[MSG_FIELD_MAX];
struct private_data *priv;
void __iomem *info;
ssize_t ret;
ret = generic_show(DPFE_CMD_GET_VENDOR, response, dev, buf);
if (ret)
return ret;
u32 mr5, mr6, mr7, mr8, err;
priv = dev_get_drvdata(dev);
ret = generic_show(DPFE_CMD_GET_VENDOR, response, priv, buf);
if (ret)
return ret;
info = get_msg_ptr(priv, response[MSG_ARG0], buf, &ret);
if (!info)
return ret;
return sprintf(buf, "%#x %#x %#x %#x %#x\n",
readl_relaxed(info + DRAM_VENDOR_MR5) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_MR6) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_MR7) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_MR8) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_ERROR) &
DRAM_VENDOR_MASK);
mr5 = (readl_relaxed(info + DRAM_VENDOR_MR5) >> DRAM_VENDOR_SHIFT) &
DRAM_VENDOR_MASK;
mr6 = (readl_relaxed(info + DRAM_VENDOR_MR6) >> DRAM_VENDOR_SHIFT) &
DRAM_VENDOR_MASK;
mr7 = (readl_relaxed(info + DRAM_VENDOR_MR7) >> DRAM_VENDOR_SHIFT) &
DRAM_VENDOR_MASK;
mr8 = (readl_relaxed(info + DRAM_VENDOR_MR8) >> DRAM_VENDOR_SHIFT) &
DRAM_VENDOR_MASK;
err = readl_relaxed(info + DRAM_VENDOR_ERROR) & DRAM_VENDOR_MASK;
return sprintf(buf, "%#x %#x %#x %#x %#x\n", mr5, mr6, mr7, mr8, err);
}
static ssize_t show_dram(struct device *dev, struct device_attribute *devattr,
char *buf)
{
u32 response[MSG_FIELD_MAX];
struct private_data *priv;
ssize_t ret;
u32 mr4, mr5, mr6, mr7, mr8, err;
priv = dev_get_drvdata(dev);
ret = generic_show(DPFE_CMD_GET_REFRESH, response, priv, buf);
if (ret)
return ret;
mr4 = response[MSG_ARG0 + 0] & DRAM_INFO_MR4_MASK;
mr5 = response[MSG_ARG0 + 1] & DRAM_DDR_INFO_MASK;
mr6 = response[MSG_ARG0 + 2] & DRAM_DDR_INFO_MASK;
mr7 = response[MSG_ARG0 + 3] & DRAM_DDR_INFO_MASK;
mr8 = response[MSG_ARG0 + 4] & DRAM_DDR_INFO_MASK;
err = response[MSG_ARG0 + 5] & DRAM_DDR_INFO_MASK;
return sprintf(buf, "%#x %#x %#x %#x %#x %#x\n", mr4, mr5, mr6, mr7,
mr8, err);
}
static int brcmstb_dpfe_resume(struct platform_device *pdev)
@@ -656,17 +805,6 @@ static int brcmstb_dpfe_resume(struct platform_device *pdev)
return brcmstb_dpfe_download_firmware(pdev, &init);
}
static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL);
static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh);
static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL);
static struct attribute *dpfe_attrs[] = {
&dev_attr_dpfe_info.attr,
&dev_attr_dpfe_refresh.attr,
&dev_attr_dpfe_vendor.attr,
NULL
};
ATTRIBUTE_GROUPS(dpfe);
static int brcmstb_dpfe_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -703,26 +841,47 @@ static int brcmstb_dpfe_probe(struct platform_device *pdev)
return -ENOENT;
}
ret = brcmstb_dpfe_download_firmware(pdev, &init);
if (ret)
return ret;
priv->dpfe_api = of_device_get_match_data(dev);
if (unlikely(!priv->dpfe_api)) {
/*
* It should be impossible to end up here, but to be safe we
* check anyway.
*/
dev_err(dev, "Couldn't determine API\n");
return -ENOENT;
}
ret = sysfs_create_groups(&pdev->dev.kobj, dpfe_groups);
ret = brcmstb_dpfe_download_firmware(pdev, &init);
if (ret) {
dev_err(dev, "Couldn't download firmware -- %d\n", ret);
return ret;
}
ret = sysfs_create_groups(&pdev->dev.kobj, priv->dpfe_api->sysfs_attrs);
if (!ret)
dev_info(dev, "registered.\n");
dev_info(dev, "registered with API v%d.\n",
priv->dpfe_api->version);
return ret;
}
static int brcmstb_dpfe_remove(struct platform_device *pdev)
{
sysfs_remove_groups(&pdev->dev.kobj, dpfe_groups);
struct private_data *priv = dev_get_drvdata(&pdev->dev);
sysfs_remove_groups(&pdev->dev.kobj, priv->dpfe_api->sysfs_attrs);
return 0;
}
static const struct of_device_id brcmstb_dpfe_of_match[] = {
{ .compatible = "brcm,dpfe-cpu", },
/* Use legacy API v2 for a select number of chips */
{ .compatible = "brcm,bcm7268-dpfe-cpu", .data = &dpfe_api_v2 },
{ .compatible = "brcm,bcm7271-dpfe-cpu", .data = &dpfe_api_v2 },
{ .compatible = "brcm,bcm7278-dpfe-cpu", .data = &dpfe_api_v2 },
{ .compatible = "brcm,bcm7211-dpfe-cpu", .data = &dpfe_api_v2 },
/* API v3 is the default going forward */
{ .compatible = "brcm,dpfe-cpu", .data = &dpfe_api_v3 },
{}
};
MODULE_DEVICE_TABLE(of, brcmstb_dpfe_of_match);
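The v2/v3 split is driven entirely by this match table: probe reads the per-compatible descriptor once and every later path (firmware download, sysfs group registration) dereferences it. A minimal sketch of the descriptor shape implied by the accesses in this patch (priv->dpfe_api->version/fw_name/sysfs_attrs); the initializer values below are illustrative assumptions, not the kernel's:

/*
 * Sketch only; field names follow the uses above, values are assumed.
 */
struct dpfe_api {
	int version;
	const char *fw_name;	/* NULL: boot firmware loads the DCPU */
	const struct attribute_group **sysfs_attrs;
};

static const struct dpfe_api dpfe_api_v2 = {
	.version = 2,
	.fw_name = "dpfe.bin",		/* assumed filename */
};

static const struct dpfe_api dpfe_api_v3 = {
	.version = 3,
	.fw_name = NULL,	/* boot firmware already loaded the DCPU */
};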


@@ -23,8 +23,9 @@
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/pm.h>
#include <memory/jedec_ddr.h>
#include "emif.h"
#include "jedec_ddr.h"
#include "of_memory.h"
/**


@@ -6,8 +6,8 @@
*
* Aneesh V <aneesh@ti.com>
*/
#ifndef __LINUX_JEDEC_DDR_H
#define __LINUX_JEDEC_DDR_H
#ifndef __JEDEC_DDR_H
#define __JEDEC_DDR_H
#include <linux/types.h>
@@ -169,4 +169,4 @@ extern const struct lpddr2_timings
lpddr2_jedec_timings[NUM_DDR_TIMING_TABLE_ENTRIES];
extern const struct lpddr2_min_tck lpddr2_jedec_min_tck;
#endif /* __LINUX_JEDEC_DDR_H */
#endif /* __JEDEC_DDR_H */


@@ -7,8 +7,9 @@
* Aneesh V <aneesh@ti.com>
*/
#include <memory/jedec_ddr.h>
#include <linux/module.h>
#include <linux/export.h>
#include "jedec_ddr.h"
/* LPDDR2 addressing details from JESD209-2 section 2.4 */
const struct lpddr2_addressing


@@ -10,8 +10,9 @@
#include <linux/list.h>
#include <linux/of.h>
#include <linux/gfp.h>
#include <memory/jedec_ddr.h>
#include <linux/export.h>
#include "jedec_ddr.h"
#include "of_memory.h"
/**


@@ -30,28 +30,6 @@
#define MC_EMEM_ARB_MISC1 0xdc
#define MC_EMEM_ARB_RING1_THROTTLE 0xe0
static const unsigned long tegra124_mc_emem_regs[] = {
MC_EMEM_ARB_CFG,
MC_EMEM_ARB_OUTSTANDING_REQ,
MC_EMEM_ARB_TIMING_RCD,
MC_EMEM_ARB_TIMING_RP,
MC_EMEM_ARB_TIMING_RC,
MC_EMEM_ARB_TIMING_RAS,
MC_EMEM_ARB_TIMING_FAW,
MC_EMEM_ARB_TIMING_RRD,
MC_EMEM_ARB_TIMING_RAP2PRE,
MC_EMEM_ARB_TIMING_WAP2PRE,
MC_EMEM_ARB_TIMING_R2R,
MC_EMEM_ARB_TIMING_W2W,
MC_EMEM_ARB_TIMING_R2W,
MC_EMEM_ARB_TIMING_W2R,
MC_EMEM_ARB_DA_TURNS,
MC_EMEM_ARB_DA_COVERS,
MC_EMEM_ARB_MISC0,
MC_EMEM_ARB_MISC1,
MC_EMEM_ARB_RING1_THROTTLE
};
static const struct tegra_mc_client tegra124_mc_clients[] = {
{
.id = 0x00,
@@ -1046,6 +1024,28 @@ static const struct tegra_mc_reset tegra124_mc_resets[] = {
};
#ifdef CONFIG_ARCH_TEGRA_124_SOC
static const unsigned long tegra124_mc_emem_regs[] = {
MC_EMEM_ARB_CFG,
MC_EMEM_ARB_OUTSTANDING_REQ,
MC_EMEM_ARB_TIMING_RCD,
MC_EMEM_ARB_TIMING_RP,
MC_EMEM_ARB_TIMING_RC,
MC_EMEM_ARB_TIMING_RAS,
MC_EMEM_ARB_TIMING_FAW,
MC_EMEM_ARB_TIMING_RRD,
MC_EMEM_ARB_TIMING_RAP2PRE,
MC_EMEM_ARB_TIMING_WAP2PRE,
MC_EMEM_ARB_TIMING_R2R,
MC_EMEM_ARB_TIMING_W2W,
MC_EMEM_ARB_TIMING_R2W,
MC_EMEM_ARB_TIMING_W2R,
MC_EMEM_ARB_DA_TURNS,
MC_EMEM_ARB_DA_COVERS,
MC_EMEM_ARB_MISC0,
MC_EMEM_ARB_MISC1,
MC_EMEM_ARB_RING1_THROTTLE
};
static const struct tegra_smmu_soc tegra124_smmu_soc = {
.clients = tegra124_mc_clients,
.num_clients = ARRAY_SIZE(tegra124_mc_clients),


@@ -118,7 +118,7 @@ config RESET_QCOM_PDC
config RESET_SIMPLE
bool "Simple Reset Controller Driver" if COMPILE_TEST
default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED
default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED || ARCH_BITMAIN
help
This enables a simple reset controller driver for reset lines that
can be asserted and deasserted by toggling bits in a contiguous,
@@ -130,6 +130,7 @@ config RESET_SIMPLE
- RCC reset controller in STM32 MCUs
- Allwinner SoCs
- ZTE's zx2967 family
- Bitmain BM1880 SoC
config RESET_STM32MP157
bool "STM32MP157 Reset Driver" if COMPILE_TEST


@@ -690,9 +690,6 @@ __reset_control_get_from_lookup(struct device *dev, const char *con_id,
const char *dev_id = dev_name(dev);
struct reset_control *rstc = NULL;
if (!dev)
return ERR_PTR(-EINVAL);
mutex_lock(&reset_lookup_mutex);
list_for_each_entry(lookup, &reset_lookup_list, list) {


@@ -125,6 +125,8 @@ static const struct of_device_id reset_simple_dt_ids[] = {
.data = &reset_simple_active_low },
{ .compatible = "aspeed,ast2400-lpc-reset" },
{ .compatible = "aspeed,ast2500-lpc-reset" },
{ .compatible = "bitmain,bm1880-reset",
.data = &reset_simple_active_low },
{ /* sentinel */ },
};


@@ -35,6 +35,7 @@ struct meson_canvas {
void __iomem *reg_base;
spinlock_t lock; /* canvas device lock */
u8 used[NUM_CANVAS];
bool supports_endianness;
};
static void canvas_write(struct meson_canvas *canvas, u32 reg, u32 val)
@@ -86,6 +87,12 @@ int meson_canvas_config(struct meson_canvas *canvas, u8 canvas_index,
{
unsigned long flags;
if (endian && !canvas->supports_endianness) {
dev_err(canvas->dev,
"Endianness is not supported on this SoC\n");
return -EINVAL;
}
spin_lock_irqsave(&canvas->lock, flags);
if (!canvas->used[canvas_index]) {
dev_err(canvas->dev,
@@ -172,6 +179,8 @@ static int meson_canvas_probe(struct platform_device *pdev)
if (IS_ERR(canvas->reg_base))
return PTR_ERR(canvas->reg_base);
canvas->supports_endianness = of_device_get_match_data(dev);
canvas->dev = dev;
spin_lock_init(&canvas->lock);
dev_set_drvdata(dev, canvas);
@@ -180,7 +189,10 @@ static int meson_canvas_probe(struct platform_device *pdev)
}
static const struct of_device_id canvas_dt_match[] = {
{ .compatible = "amlogic,canvas" },
{ .compatible = "amlogic,meson8-canvas", .data = (void *)false, },
{ .compatible = "amlogic,meson8b-canvas", .data = (void *)false, },
{ .compatible = "amlogic,meson8m2-canvas", .data = (void *)false, },
{ .compatible = "amlogic,canvas", .data = (void *)true, },
{}
};
MODULE_DEVICE_TABLE(of, canvas_dt_match);
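With the flag keyed off the compatible string, a consumer asking for an endianness swap on Meson8/8b/8m2 now fails fast instead of programming a conversion the hardware cannot honor. A hedged caller sketch; the wrap/blkmode/endian constants are assumed to come from the meson-canvas header, and the surrounding variables are hypothetical:

/* Hypothetical consumer; the -EINVAL handling is the new behavior. */
ret = meson_canvas_config(canvas, canvas_index, phys_addr, stride,
			  height, MESON_CANVAS_WRAP_NONE,
			  MESON_CANVAS_BLKMODE_LINEAR,
			  MESON_CANVAS_ENDIAN_SWAP64);
if (ret == -EINVAL)
	dev_warn(dev, "endianness swap not supported on this SoC\n");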


@@ -64,6 +64,7 @@ static long aspeed_lpc_ctrl_ioctl(struct file *file, unsigned int cmd,
unsigned long param)
{
struct aspeed_lpc_ctrl *lpc_ctrl = file_aspeed_lpc_ctrl(file);
struct device *dev = file->private_data;
void __user *p = (void __user *)param;
struct aspeed_lpc_ctrl_mapping map;
u32 addr;
@@ -86,6 +87,12 @@ static long aspeed_lpc_ctrl_ioctl(struct file *file, unsigned int cmd,
if (map.window_id != 0)
return -EINVAL;
/* If memory-region is not described in device tree */
if (!lpc_ctrl->mem_size) {
dev_dbg(dev, "Didn't find reserved memory\n");
return -ENXIO;
}
map.size = lpc_ctrl->mem_size;
return copy_to_user(p, &map, sizeof(map)) ? -EFAULT : 0;
@@ -122,9 +129,18 @@ static long aspeed_lpc_ctrl_ioctl(struct file *file, unsigned int cmd,
return -EINVAL;
if (map.window_type == ASPEED_LPC_CTRL_WINDOW_FLASH) {
if (!lpc_ctrl->pnor_size) {
dev_dbg(dev, "Didn't find host pnor flash\n");
return -ENXIO;
}
addr = lpc_ctrl->pnor_base;
size = lpc_ctrl->pnor_size;
} else if (map.window_type == ASPEED_LPC_CTRL_WINDOW_MEMORY) {
/* If memory-region is not described in device tree */
if (!lpc_ctrl->mem_size) {
dev_dbg(dev, "Didn't find reserved memory\n");
return -ENXIO;
}
addr = lpc_ctrl->mem_base;
size = lpc_ctrl->mem_size;
} else {
@@ -192,40 +208,41 @@ static int aspeed_lpc_ctrl_probe(struct platform_device *pdev)
if (!lpc_ctrl)
return -ENOMEM;
/* If flash is described in device tree then store */
node = of_parse_phandle(dev->of_node, "flash", 0);
if (!node) {
dev_err(dev, "Didn't find host pnor flash node\n");
return -ENODEV;
dev_dbg(dev, "Didn't find host pnor flash node\n");
} else {
rc = of_address_to_resource(node, 1, &resm);
of_node_put(node);
if (rc) {
dev_err(dev, "Couldn't address to resource for flash\n");
return rc;
}
lpc_ctrl->pnor_size = resource_size(&resm);
lpc_ctrl->pnor_base = resm.start;
}
rc = of_address_to_resource(node, 1, &resm);
of_node_put(node);
if (rc) {
dev_err(dev, "Couldn't address to resource for flash\n");
return rc;
}
lpc_ctrl->pnor_size = resource_size(&resm);
lpc_ctrl->pnor_base = resm.start;
dev_set_drvdata(&pdev->dev, lpc_ctrl);
/* If memory-region is described in device tree then store */
node = of_parse_phandle(dev->of_node, "memory-region", 0);
if (!node) {
dev_err(dev, "Didn't find reserved memory\n");
return -EINVAL;
}
dev_dbg(dev, "Didn't find reserved memory\n");
} else {
rc = of_address_to_resource(node, 0, &resm);
of_node_put(node);
if (rc) {
dev_err(dev, "Couldn't address to resource for reserved memory\n");
return -ENXIO;
}
rc = of_address_to_resource(node, 0, &resm);
of_node_put(node);
if (rc) {
dev_err(dev, "Couldn't address to resource for reserved memory\n");
return -ENOMEM;
lpc_ctrl->mem_size = resource_size(&resm);
lpc_ctrl->mem_base = resm.start;
}
lpc_ctrl->mem_size = resource_size(&resm);
lpc_ctrl->mem_base = resm.start;
lpc_ctrl->regmap = syscon_node_to_regmap(
pdev->dev.parent->of_node);
if (IS_ERR(lpc_ctrl->regmap)) {
@@ -254,8 +271,6 @@ static int aspeed_lpc_ctrl_probe(struct platform_device *pdev)
goto err;
}
dev_info(dev, "Loaded at %pr\n", &resm);
return 0;
err:
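Since both the flash and reserved-memory regions are now optional, userspace has to treat -ENXIO as "window not described in the device tree" rather than as a driver failure. A hedged userspace sketch; the ioctl and struct are assumed to match the existing uapi header:

/* Hypothetical userspace probe of the memory window; names taken from
 * include/uapi/linux/aspeed-lpc-ctrl.h. */
#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/aspeed-lpc-ctrl.h>

static int query_mem_window(int fd)
{
	struct aspeed_lpc_ctrl_mapping map = {
		.window_type = ASPEED_LPC_CTRL_WINDOW_MEMORY,
		.window_id = 0,
	};

	if (ioctl(fd, ASPEED_LPC_CTRL_IOCTL_GET_SIZE, &map) < 0) {
		if (errno == ENXIO)	/* no memory-region in DT */
			fprintf(stderr, "no reserved memory window\n");
		return -1;
	}
	printf("window size: %u\n", map.size);
	return 0;
}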


@@ -30,4 +30,14 @@ config FSL_MC_DPIO
other DPAA2 objects. This driver does not expose the DPIO
objects individually, but groups them under a service layer
API.
config DPAA2_CONSOLE
tristate "QorIQ DPAA2 console driver"
depends on OF && (ARCH_LAYERSCAPE || COMPILE_TEST)
default y
help
Console driver for DPAA2 platforms. Exports 2 char devices,
/dev/dpaa2_mc_console and /dev/dpaa2_aiop_console,
which can be used to dump the Management Complex and AIOP
firmware logs.
endmenu


@@ -8,3 +8,4 @@ obj-$(CONFIG_QUICC_ENGINE) += qe/
obj-$(CONFIG_CPM) += qe/
obj-$(CONFIG_FSL_GUTS) += guts.o
obj-$(CONFIG_FSL_MC_DPIO) += dpio/
obj-$(CONFIG_DPAA2_CONSOLE) += dpaa2-console.o


@@ -0,0 +1,329 @@
// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
/*
* Freescale DPAA2 Platforms Console Driver
*
* Copyright 2015-2016 Freescale Semiconductor Inc.
* Copyright 2018 NXP
*/
#define pr_fmt(fmt) "dpaa2-console: " fmt
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_address.h>
#include <linux/miscdevice.h>
#include <linux/uaccess.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/io.h>
/* MC firmware base low/high registers indexes */
#define MCFBALR_OFFSET 0
#define MCFBAHR_OFFSET 1
/* Bit masks used to get the most/least significant part of the MC base addr */
#define MC_FW_ADDR_MASK_HIGH 0x1FFFF
#define MC_FW_ADDR_MASK_LOW 0xE0000000
#define MC_BUFFER_OFFSET 0x01000000
#define MC_BUFFER_SIZE (1024 * 1024 * 16)
#define MC_OFFSET_DELTA MC_BUFFER_OFFSET
#define AIOP_BUFFER_OFFSET 0x06000000
#define AIOP_BUFFER_SIZE (1024 * 1024 * 16)
#define AIOP_OFFSET_DELTA 0
#define LOG_HEADER_FLAG_BUFFER_WRAPAROUND 0x80000000
#define LAST_BYTE(a) ((a) & ~(LOG_HEADER_FLAG_BUFFER_WRAPAROUND))
/* MC and AIOP Magic words */
#define MAGIC_MC 0x4d430100
#define MAGIC_AIOP 0x41494F50
struct log_header {
__le32 magic_word;
char reserved[4];
__le32 buf_start;
__le32 buf_length;
__le32 last_byte;
};
struct console_data {
void __iomem *map_addr;
struct log_header __iomem *hdr;
void __iomem *start_addr;
void __iomem *end_addr;
void __iomem *end_of_data;
void __iomem *cur_ptr;
};
static struct resource mc_base_addr;
static inline void adjust_end(struct console_data *cd)
{
u32 last_byte = readl(&cd->hdr->last_byte);
cd->end_of_data = cd->start_addr + LAST_BYTE(last_byte);
}
static u64 get_mc_fw_base_address(void)
{
u64 mcfwbase = 0ULL;
u32 __iomem *mcfbaregs;
mcfbaregs = ioremap(mc_base_addr.start, resource_size(&mc_base_addr));
if (!mcfbaregs) {
pr_err("could not map MC Firmaware Base registers\n");
return 0;
}
mcfwbase = readl(mcfbaregs + MCFBAHR_OFFSET) &
MC_FW_ADDR_MASK_HIGH;
mcfwbase <<= 32;
mcfwbase |= readl(mcfbaregs + MCFBALR_OFFSET) & MC_FW_ADDR_MASK_LOW;
iounmap(mcfbaregs);
pr_debug("MC base address at 0x%016llx\n", mcfwbase);
return mcfwbase;
}
static ssize_t dpaa2_console_size(struct console_data *cd)
{
ssize_t size;
if (cd->cur_ptr <= cd->end_of_data)
size = cd->end_of_data - cd->cur_ptr;
else
size = (cd->end_addr - cd->cur_ptr) +
(cd->end_of_data - cd->start_addr);
return size;
}
static int dpaa2_generic_console_open(struct inode *node, struct file *fp,
u64 offset, u64 size,
u32 expected_magic,
u32 offset_delta)
{
u32 read_magic, wrapped, last_byte, buf_start, buf_length;
struct console_data *cd;
u64 base_addr;
int err;
cd = kmalloc(sizeof(*cd), GFP_KERNEL);
if (!cd)
return -ENOMEM;
base_addr = get_mc_fw_base_address();
if (!base_addr) {
err = -EIO;
goto err_fwba;
}
cd->map_addr = ioremap(base_addr + offset, size);
if (!cd->map_addr) {
pr_err("cannot map console log memory\n");
err = -EIO;
goto err_ioremap;
}
cd->hdr = (struct log_header __iomem *)cd->map_addr;
read_magic = readl(&cd->hdr->magic_word);
last_byte = readl(&cd->hdr->last_byte);
buf_start = readl(&cd->hdr->buf_start);
buf_length = readl(&cd->hdr->buf_length);
if (read_magic != expected_magic) {
pr_warn("expected = %08x, read = %08x\n",
expected_magic, read_magic);
err = -EIO;
goto err_magic;
}
cd->start_addr = cd->map_addr + buf_start - offset_delta;
cd->end_addr = cd->start_addr + buf_length;
wrapped = last_byte & LOG_HEADER_FLAG_BUFFER_WRAPAROUND;
adjust_end(cd);
if (wrapped && cd->end_of_data != cd->end_addr)
cd->cur_ptr = cd->end_of_data + 1;
else
cd->cur_ptr = cd->start_addr;
fp->private_data = cd;
return 0;
err_magic:
iounmap(cd->map_addr);
err_ioremap:
err_fwba:
kfree(cd);
return err;
}
static int dpaa2_mc_console_open(struct inode *node, struct file *fp)
{
return dpaa2_generic_console_open(node, fp,
MC_BUFFER_OFFSET, MC_BUFFER_SIZE,
MAGIC_MC, MC_OFFSET_DELTA);
}
static int dpaa2_aiop_console_open(struct inode *node, struct file *fp)
{
return dpaa2_generic_console_open(node, fp,
AIOP_BUFFER_OFFSET, AIOP_BUFFER_SIZE,
MAGIC_AIOP, AIOP_OFFSET_DELTA);
}
static int dpaa2_console_close(struct inode *node, struct file *fp)
{
struct console_data *cd = fp->private_data;
iounmap(cd->map_addr);
kfree(cd);
return 0;
}
static ssize_t dpaa2_console_read(struct file *fp, char __user *buf,
size_t count, loff_t *f_pos)
{
struct console_data *cd = fp->private_data;
size_t bytes = dpaa2_console_size(cd);
size_t bytes_end = cd->end_addr - cd->cur_ptr;
size_t written = 0;
void *kbuf;
int err;
/* Check if we need to adjust the end of data addr */
adjust_end(cd);
if (cd->end_of_data == cd->cur_ptr)
return 0;
if (count < bytes)
bytes = count;
kbuf = kmalloc(bytes, GFP_KERNEL);
if (!kbuf)
return -ENOMEM;
if (bytes > bytes_end) {
memcpy_fromio(kbuf, cd->cur_ptr, bytes_end);
if (copy_to_user(buf, kbuf, bytes_end)) {
err = -EFAULT;
goto err_free_buf;
}
buf += bytes_end;
cd->cur_ptr = cd->start_addr;
bytes -= bytes_end;
written += bytes_end;
}
memcpy_fromio(kbuf, cd->cur_ptr, bytes);
if (copy_to_user(buf, kbuf, bytes)) {
err = -EFAULT;
goto err_free_buf;
}
cd->cur_ptr += bytes;
written += bytes;
return written;
err_free_buf:
kfree(kbuf);
return err;
}
static const struct file_operations dpaa2_mc_console_fops = {
.owner = THIS_MODULE,
.open = dpaa2_mc_console_open,
.release = dpaa2_console_close,
.read = dpaa2_console_read,
};
static struct miscdevice dpaa2_mc_console_dev = {
.minor = MISC_DYNAMIC_MINOR,
.name = "dpaa2_mc_console",
.fops = &dpaa2_mc_console_fops
};
static const struct file_operations dpaa2_aiop_console_fops = {
.owner = THIS_MODULE,
.open = dpaa2_aiop_console_open,
.release = dpaa2_console_close,
.read = dpaa2_console_read,
};
static struct miscdevice dpaa2_aiop_console_dev = {
.minor = MISC_DYNAMIC_MINOR,
.name = "dpaa2_aiop_console",
.fops = &dpaa2_aiop_console_fops
};
static int dpaa2_console_probe(struct platform_device *pdev)
{
int error;
error = of_address_to_resource(pdev->dev.of_node, 0, &mc_base_addr);
if (error < 0) {
pr_err("of_address_to_resource() failed for %pOF with %d\n",
pdev->dev.of_node, error);
return error;
}
error = misc_register(&dpaa2_mc_console_dev);
if (error) {
pr_err("cannot register device %s\n",
dpaa2_mc_console_dev.name);
goto err_register_mc;
}
error = misc_register(&dpaa2_aiop_console_dev);
if (error) {
pr_err("cannot register device %s\n",
dpaa2_aiop_console_dev.name);
goto err_register_aiop;
}
return 0;
err_register_aiop:
misc_deregister(&dpaa2_mc_console_dev);
err_register_mc:
return error;
}
static int dpaa2_console_remove(struct platform_device *pdev)
{
misc_deregister(&dpaa2_mc_console_dev);
misc_deregister(&dpaa2_aiop_console_dev);
return 0;
}
static const struct of_device_id dpaa2_console_match_table[] = {
{ .compatible = "fsl,dpaa2-console",},
{},
};
MODULE_DEVICE_TABLE(of, dpaa2_console_match_table);
static struct platform_driver dpaa2_console_driver = {
.driver = {
.name = "dpaa2-console",
.pm = NULL,
.of_match_table = dpaa2_console_match_table,
},
.probe = dpaa2_console_probe,
.remove = dpaa2_console_remove,
};
module_platform_driver(dpaa2_console_driver);
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Roy Pledge <roy.pledge@nxp.com>");
MODULE_DESCRIPTION("DPAA2 console driver");


@@ -197,13 +197,22 @@ static int dpaa2_dpio_probe(struct fsl_mc_device *dpio_dev)
desc.cpu);
}
/*
* Set the CENA regs to be the cache inhibited area of the portal to
* avoid coherency issues if a user migrates to another core.
*/
desc.regs_cena = devm_memremap(dev, dpio_dev->regions[1].start,
resource_size(&dpio_dev->regions[1]),
MEMREMAP_WC);
if (dpio_dev->obj_desc.region_count < 3) {
/* No support for DDR backed portals, use classic mapping */
/*
* Set the CENA regs to be the cache inhibited area of the
* portal to avoid coherency issues if a user migrates to
* another core.
*/
desc.regs_cena = devm_memremap(dev, dpio_dev->regions[1].start,
resource_size(&dpio_dev->regions[1]),
MEMREMAP_WC);
} else {
desc.regs_cena = devm_memremap(dev, dpio_dev->regions[2].start,
resource_size(&dpio_dev->regions[2]),
MEMREMAP_WB);
}
if (IS_ERR(desc.regs_cena)) {
dev_err(dev, "devm_memremap failed\n");
err = PTR_ERR(desc.regs_cena);


@@ -15,6 +15,8 @@
#define QMAN_REV_4000 0x04000000
#define QMAN_REV_4100 0x04010000
#define QMAN_REV_4101 0x04010001
#define QMAN_REV_5000 0x05000000
#define QMAN_REV_MASK 0xffff0000
/* All QBMan command and result structures use this "valid bit" encoding */
@@ -25,10 +27,17 @@
#define QBMAN_WQCHAN_CONFIGURE 0x46
/* CINH register offsets */
#define QBMAN_CINH_SWP_EQCR_PI 0x800
#define QBMAN_CINH_SWP_EQAR 0x8c0
#define QBMAN_CINH_SWP_CR_RT 0x900
#define QBMAN_CINH_SWP_VDQCR_RT 0x940
#define QBMAN_CINH_SWP_EQCR_AM_RT 0x980
#define QBMAN_CINH_SWP_RCR_AM_RT 0x9c0
#define QBMAN_CINH_SWP_DQPI 0xa00
#define QBMAN_CINH_SWP_DCAP 0xac0
#define QBMAN_CINH_SWP_SDQCR 0xb00
#define QBMAN_CINH_SWP_EQCR_AM_RT2 0xb40
#define QBMAN_CINH_SWP_RCR_PI 0xc00
#define QBMAN_CINH_SWP_RAR 0xcc0
#define QBMAN_CINH_SWP_ISR 0xe00
#define QBMAN_CINH_SWP_IER 0xe40
@@ -43,6 +52,13 @@
#define QBMAN_CENA_SWP_RR(vb) (0x700 + ((u32)(vb) >> 1))
#define QBMAN_CENA_SWP_VDQCR 0x780
/* CENA register offsets in memory-backed mode */
#define QBMAN_CENA_SWP_DQRR_MEM(n) (0x800 + ((u32)(n) << 6))
#define QBMAN_CENA_SWP_RCR_MEM(n) (0x1400 + ((u32)(n) << 6))
#define QBMAN_CENA_SWP_CR_MEM 0x1600
#define QBMAN_CENA_SWP_RR_MEM 0x1680
#define QBMAN_CENA_SWP_VDQCR_MEM 0x1780
/* Reverse mapping of QBMAN_CENA_SWP_DQRR() */
#define QBMAN_IDX_FROM_DQRR(p) (((unsigned long)(p) & 0x1ff) >> 6)
@@ -96,10 +112,13 @@ static inline void *qbman_get_cmd(struct qbman_swp *p, u32 offset)
#define SWP_CFG_DQRR_MF_SHIFT 20
#define SWP_CFG_EST_SHIFT 16
#define SWP_CFG_CPBS_SHIFT 15
#define SWP_CFG_WN_SHIFT 14
#define SWP_CFG_RPM_SHIFT 12
#define SWP_CFG_DCM_SHIFT 10
#define SWP_CFG_EPM_SHIFT 8
#define SWP_CFG_VPM_SHIFT 7
#define SWP_CFG_CPM_SHIFT 6
#define SWP_CFG_SD_SHIFT 5
#define SWP_CFG_SP_SHIFT 4
#define SWP_CFG_SE_SHIFT 3
@@ -125,6 +144,8 @@ static inline u32 qbman_set_swp_cfg(u8 max_fill, u8 wn, u8 est, u8 rpm, u8 dcm,
ep << SWP_CFG_EP_SHIFT);
}
#define QMAN_RT_MODE 0x00000100
/**
* qbman_swp_init() - Create a functional object representing the given
* QBMan portal descriptor.
@@ -146,6 +167,8 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
p->sdq |= qbman_sdqcr_dct_prio_ics << QB_SDQCR_DCT_SHIFT;
p->sdq |= qbman_sdqcr_fc_up_to_3 << QB_SDQCR_FC_SHIFT;
p->sdq |= QMAN_SDQCR_TOKEN << QB_SDQCR_TOK_SHIFT;
if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000)
p->mr.valid_bit = QB_VALID_BIT;
atomic_set(&p->vdq.available, 1);
p->vdq.valid_bit = QB_VALID_BIT;
@@ -163,6 +186,9 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
p->addr_cena = d->cena_bar;
p->addr_cinh = d->cinh_bar;
if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000)
memset(p->addr_cena, 0, 64 * 1024);
reg = qbman_set_swp_cfg(p->dqrr.dqrr_size,
1, /* Writes Non-cacheable */
0, /* EQCR_CI stashing threshold */
@@ -175,6 +201,10 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
1, /* dequeue stashing priority == TRUE */
0, /* dequeue stashing enable == FALSE */
0); /* EQCR_CI stashing priority == FALSE */
if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000)
reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */
1 << SWP_CFG_VPM_SHIFT | /* VDQCR read triggered mode */
1 << SWP_CFG_CPM_SHIFT; /* CR read triggered mode */
qbman_write_register(p, QBMAN_CINH_SWP_CFG, reg);
reg = qbman_read_register(p, QBMAN_CINH_SWP_CFG);
@@ -184,6 +214,10 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
return NULL;
}
if ((p->desc->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000) {
qbman_write_register(p, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE);
qbman_write_register(p, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE);
}
/*
* SDQCR needs to be initialized to 0 when no channels are
* being dequeued from or else the QMan HW will indicate an
@@ -278,7 +312,10 @@ void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit)
*/
void *qbman_swp_mc_start(struct qbman_swp *p)
{
return qbman_get_cmd(p, QBMAN_CENA_SWP_CR);
if ((p->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000)
return qbman_get_cmd(p, QBMAN_CENA_SWP_CR);
else
return qbman_get_cmd(p, QBMAN_CENA_SWP_CR_MEM);
}
/*
@@ -289,8 +326,14 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, u8 cmd_verb)
{
u8 *v = cmd;
dma_wmb();
*v = cmd_verb | p->mc.valid_bit;
if ((p->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) {
dma_wmb();
*v = cmd_verb | p->mc.valid_bit;
} else {
*v = cmd_verb | p->mc.valid_bit;
dma_wmb();
qbman_write_register(p, QBMAN_CINH_SWP_CR_RT, QMAN_RT_MODE);
}
}
/*
@@ -301,13 +344,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
{
u32 *ret, verb;
ret = qbman_get_cmd(p, QBMAN_CENA_SWP_RR(p->mc.valid_bit));
if ((p->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) {
ret = qbman_get_cmd(p, QBMAN_CENA_SWP_RR(p->mc.valid_bit));
/* Remove the valid-bit - command completed if the rest
* is non-zero.
*/
verb = ret[0] & ~QB_VALID_BIT;
if (!verb)
return NULL;
p->mc.valid_bit ^= QB_VALID_BIT;
} else {
ret = qbman_get_cmd(p, QBMAN_CENA_SWP_RR_MEM);
/* Command completed if the valid bit is toggled */
if (p->mr.valid_bit != (ret[0] & QB_VALID_BIT))
return NULL;
/* Command completed if the rest is non-zero */
verb = ret[0] & ~QB_VALID_BIT;
if (!verb)
return NULL;
p->mr.valid_bit ^= QB_VALID_BIT;
}
/* Remove the valid-bit - command completed if the rest is non-zero */
verb = ret[0] & ~QB_VALID_BIT;
if (!verb)
return NULL;
p->mc.valid_bit ^= QB_VALID_BIT;
return ret;
}
@@ -384,6 +441,18 @@ void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, u32 qdid,
#define EQAR_VB(eqar) ((eqar) & 0x80)
#define EQAR_SUCCESS(eqar) ((eqar) & 0x100)
static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
u8 idx)
{
if (idx < 16)
qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT + idx * 4,
QMAN_RT_MODE);
else
qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT2 +
(idx - 16) * 4,
QMAN_RT_MODE);
}
/**
* qbman_swp_enqueue() - Issue an enqueue command
* @s: the software portal used for enqueue
@@ -408,9 +477,15 @@ int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
memcpy(&p->dca, &d->dca, 31);
memcpy(&p->fd, fd, sizeof(*fd));
/* Set the verb byte, have to substitute in the valid-bit */
dma_wmb();
p->verb = d->verb | EQAR_VB(eqar);
if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) {
/* Set the verb byte, have to substitute in the valid-bit */
dma_wmb();
p->verb = d->verb | EQAR_VB(eqar);
} else {
p->verb = d->verb | EQAR_VB(eqar);
dma_wmb();
qbman_write_eqcr_am_rt_register(s, EQAR_IDX(eqar));
}
return 0;
}
@@ -587,17 +662,27 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
return -EBUSY;
}
s->vdq.storage = (void *)(uintptr_t)d->rsp_addr_virt;
p = qbman_get_cmd(s, QBMAN_CENA_SWP_VDQCR);
if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000)
p = qbman_get_cmd(s, QBMAN_CENA_SWP_VDQCR);
else
p = qbman_get_cmd(s, QBMAN_CENA_SWP_VDQCR_MEM);
p->numf = d->numf;
p->tok = QMAN_DQ_TOKEN_VALID;
p->dq_src = d->dq_src;
p->rsp_addr = d->rsp_addr;
p->rsp_addr_virt = d->rsp_addr_virt;
dma_wmb();
/* Set the verb byte, have to substitute in the valid-bit */
p->verb = d->verb | s->vdq.valid_bit;
s->vdq.valid_bit ^= QB_VALID_BIT;
if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) {
dma_wmb();
/* Set the verb byte, have to substitute in the valid-bit */
p->verb = d->verb | s->vdq.valid_bit;
s->vdq.valid_bit ^= QB_VALID_BIT;
} else {
p->verb = d->verb | s->vdq.valid_bit;
s->vdq.valid_bit ^= QB_VALID_BIT;
dma_wmb();
qbman_write_register(s, QBMAN_CINH_SWP_VDQCR_RT, QMAN_RT_MODE);
}
return 0;
}
@@ -655,7 +740,10 @@ const struct dpaa2_dq *qbman_swp_dqrr_next(struct qbman_swp *s)
QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx)));
}
p = qbman_get_cmd(s, QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000)
p = qbman_get_cmd(s, QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
else
p = qbman_get_cmd(s, QBMAN_CENA_SWP_DQRR_MEM(s->dqrr.next_idx));
verb = p->dq.verb;
/*
@@ -807,18 +895,28 @@ int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
return -EBUSY;
/* Start the release command */
p = qbman_get_cmd(s, QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000)
p = qbman_get_cmd(s, QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
else
p = qbman_get_cmd(s, QBMAN_CENA_SWP_RCR_MEM(RAR_IDX(rar)));
/* Copy the caller's buffer pointers to the command */
for (i = 0; i < num_buffers; i++)
p->buf[i] = cpu_to_le64(buffers[i]);
p->bpid = d->bpid;
/*
* Set the verb byte, have to substitute in the valid-bit and the number
* of buffers.
*/
dma_wmb();
p->verb = d->verb | RAR_VB(rar) | num_buffers;
if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) {
/*
* Set the verb byte, have to substitute in the valid-bit
* and the number of buffers.
*/
dma_wmb();
p->verb = d->verb | RAR_VB(rar) | num_buffers;
} else {
p->verb = d->verb | RAR_VB(rar) | num_buffers;
dma_wmb();
qbman_write_register(s, QBMAN_CINH_SWP_RCR_AM_RT +
RAR_IDX(rar) * 4, QMAN_RT_MODE);
}
return 0;
}
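The same two-ordering pattern now appears in command submit, enqueue, pull and release: pre-5000 portals publish a command by writing the verb byte last (so the barrier comes first, and the hardware polls the verb), while rev-5000 memory-backed portals complete the cacheable copy first and then ring a read-trigger register. Condensed into one hedged helper; publish_cmd() itself is illustrative, not a kernel function:

/* Illustrative condensation of the ordering used in the paths above. */
static void publish_cmd(struct qbman_swp *s, u8 *verb_byte, u8 verb,
			u32 rt_reg_offset)
{
	if ((s->desc->qman_version & QMAN_REV_MASK) < QMAN_REV_5000) {
		dma_wmb();		/* payload visible before verb */
		*verb_byte = verb;	/* HW polls the verb byte */
	} else {
		*verb_byte = verb;	/* fill the memory-backed copy */
		dma_wmb();		/* order it before the doorbell */
		qbman_write_register(s, rt_reg_offset, QMAN_RT_MODE);
	}
}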


@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
/*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
* Copyright 2016 NXP
* Copyright 2016-2019 NXP
*
*/
#ifndef __FSL_QBMAN_PORTAL_H
@@ -110,6 +110,11 @@ struct qbman_swp {
u32 valid_bit; /* 0x00 or 0x80 */
} mc;
/* Management response */
struct {
u32 valid_bit; /* 0x00 or 0x80 */
} mr;
/* Push dequeues */
u32 sdq;
@@ -428,7 +433,7 @@ static inline int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s,
static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
u8 cmd_verb)
{
int loopvar = 1000;
int loopvar = 2000;
qbman_swp_mc_submit(swp, cmd, cmd_verb);


@@ -97,6 +97,11 @@ static const struct fsl_soc_die_attr fsl_soc_die[] = {
.svr = 0x87000000,
.mask = 0xfff70000,
},
/* Die: LX2160A, SoC: LX2160A/LX2120A/LX2080A */
{ .die = "LX2160A",
.svr = 0x87360000,
.mask = 0xff3f0000,
},
{ },
};
@@ -218,6 +223,7 @@ static const struct of_device_id fsl_guts_of_match[] = {
{ .compatible = "fsl,ls1088a-dcfg", },
{ .compatible = "fsl,ls1012a-dcfg", },
{ .compatible = "fsl,ls1046a-dcfg", },
{ .compatible = "fsl,lx2160a-dcfg", },
{}
};
MODULE_DEVICE_TABLE(of, fsl_guts_of_match);
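The 0xff3f0000 mask is what lets one table entry cover LX2160A, LX2120A and LX2080A: the bits it clears (the variant field and the revision) are ignored, while the die bits must match exactly. A small standalone check; the sample SVR value below is an assumption, not a documented part number:

/* Worked example of the die match; the sample SVR is hypothetical. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint32_t die_svr = 0x87360000, die_mask = 0xff3f0000;
	uint32_t svr = 0x87760010;	/* hypothetical LX21x0A part */

	if ((svr & die_mask) == die_svr)	/* 0x87360000: match */
		printf("die: LX2160A\n");
	return 0;
}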


@@ -32,6 +32,7 @@
static struct bman_portal *affine_bportals[NR_CPUS];
static struct cpumask portal_cpus;
static int __bman_portals_probed;
/* protect bman global registers and global data shared among portals */
static DEFINE_SPINLOCK(bman_lock);
@@ -87,6 +88,12 @@ static int bman_online_cpu(unsigned int cpu)
return 0;
}
int bman_portals_probed(void)
{
return __bman_portals_probed;
}
EXPORT_SYMBOL_GPL(bman_portals_probed);
static int bman_portal_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -104,8 +111,10 @@ static int bman_portal_probe(struct platform_device *pdev)
}
pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL);
if (!pcfg)
if (!pcfg) {
__bman_portals_probed = -1;
return -ENOMEM;
}
pcfg->dev = dev;
@@ -113,14 +122,14 @@ static int bman_portal_probe(struct platform_device *pdev)
DPAA_PORTAL_CE);
if (!addr_phys[0]) {
dev_err(dev, "Can't get %pOF property 'reg::CE'\n", node);
return -ENXIO;
goto err_ioremap1;
}
addr_phys[1] = platform_get_resource(pdev, IORESOURCE_MEM,
DPAA_PORTAL_CI);
if (!addr_phys[1]) {
dev_err(dev, "Can't get %pOF property 'reg::CI'\n", node);
return -ENXIO;
goto err_ioremap1;
}
pcfg->cpu = -1;
@@ -128,7 +137,7 @@ static int bman_portal_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq <= 0) {
dev_err(dev, "Can't get %pOF IRQ'\n", node);
return -ENXIO;
goto err_ioremap1;
}
pcfg->irq = irq;
@@ -150,6 +159,7 @@ static int bman_portal_probe(struct platform_device *pdev)
spin_lock(&bman_lock);
cpu = cpumask_next_zero(-1, &portal_cpus);
if (cpu >= nr_cpu_ids) {
__bman_portals_probed = 1;
/* unassigned portal, skip init */
spin_unlock(&bman_lock);
return 0;
@@ -175,6 +185,8 @@ static int bman_portal_probe(struct platform_device *pdev)
err_ioremap2:
memunmap(pcfg->addr_virt_ce);
err_ioremap1:
__bman_portals_probed = -1;
return -ENXIO;
}


@@ -596,7 +596,7 @@ static int qman_init_ccsr(struct device *dev)
}
#define LIO_CFG_LIODN_MASK 0x0fff0000
void qman_liodn_fixup(u16 channel)
void __qman_liodn_fixup(u16 channel)
{
static int done;
static u32 liodn_offset;


@@ -38,6 +38,7 @@ EXPORT_SYMBOL(qman_dma_portal);
#define CONFIG_FSL_DPA_PIRQ_FAST 1
static struct cpumask portal_cpus;
static int __qman_portals_probed;
/* protect qman global registers and global data shared among portals */
static DEFINE_SPINLOCK(qman_lock);
@@ -220,6 +221,12 @@ static int qman_online_cpu(unsigned int cpu)
return 0;
}
int qman_portals_probed(void)
{
return __qman_portals_probed;
}
EXPORT_SYMBOL_GPL(qman_portals_probed);
static int qman_portal_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -238,8 +245,10 @@ static int qman_portal_probe(struct platform_device *pdev)
}
pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL);
if (!pcfg)
if (!pcfg) {
__qman_portals_probed = -1;
return -ENOMEM;
}
pcfg->dev = dev;
@@ -247,19 +256,20 @@ static int qman_portal_probe(struct platform_device *pdev)
DPAA_PORTAL_CE);
if (!addr_phys[0]) {
dev_err(dev, "Can't get %pOF property 'reg::CE'\n", node);
return -ENXIO;
goto err_ioremap1;
}
addr_phys[1] = platform_get_resource(pdev, IORESOURCE_MEM,
DPAA_PORTAL_CI);
if (!addr_phys[1]) {
dev_err(dev, "Can't get %pOF property 'reg::CI'\n", node);
return -ENXIO;
goto err_ioremap1;
}
err = of_property_read_u32(node, "cell-index", &val);
if (err) {
dev_err(dev, "Can't get %pOF property 'cell-index'\n", node);
__qman_portals_probed = -1;
return err;
}
pcfg->channel = val;
@@ -267,7 +277,7 @@ static int qman_portal_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq <= 0) {
dev_err(dev, "Can't get %pOF IRQ\n", node);
return -ENXIO;
goto err_ioremap1;
}
pcfg->irq = irq;
@@ -291,6 +301,7 @@ static int qman_portal_probe(struct platform_device *pdev)
spin_lock(&qman_lock);
cpu = cpumask_next_zero(-1, &portal_cpus);
if (cpu >= nr_cpu_ids) {
__qman_portals_probed = 1;
/* unassigned portal, skip init */
spin_unlock(&qman_lock);
return 0;
@@ -321,6 +332,8 @@ static int qman_portal_probe(struct platform_device *pdev)
err_ioremap2:
memunmap(pcfg->addr_virt_ce);
err_ioremap1:
__qman_portals_probed = -1;
return -ENXIO;
}
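Both bman and qman now export the same tri-state: 0 while portals are still probing, 1 once every portal has been handled, -1 after any failure. That lets dependent DPAA drivers distinguish "try again later" from "give up". A hedged consumer sketch, not taken from this series:

/* Hypothetical consumer probe snippet built on the new exports. */
if (!bman_portals_probed() || !qman_portals_probed())
	return -EPROBE_DEFER;	/* portals still coming up */
if (bman_portals_probed() < 0 || qman_portals_probed() < 0)
	return -ENODEV;		/* a portal failed to probe */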


@@ -193,7 +193,14 @@ extern struct gen_pool *qm_cgralloc; /* CGR ID allocator */
u32 qm_get_pools_sdqcr(void);
int qman_wq_alloc(void);
void qman_liodn_fixup(u16 channel);
#ifdef CONFIG_FSL_PAMU
#define qman_liodn_fixup __qman_liodn_fixup
#else
static inline void qman_liodn_fixup(u16 channel)
{
}
#endif
void __qman_liodn_fixup(u16 channel);
void qman_set_sdest(u16 channel, unsigned int cpu_idx);
struct qman_portal *qman_create_affine_portal(


@@ -8,4 +8,13 @@ config IMX_GPCV2_PM_DOMAINS
select PM_GENERIC_DOMAINS
default y if SOC_IMX7D
config IMX_SCU_SOC
bool "i.MX System Controller Unit SoC info support"
depends on IMX_SCU
select SOC_BUS
help
If you say yes here, you get support for the NXP i.MX System
Controller Unit SoC info module, which provides SoC information
such as the SoC family, ID and revision.
endmenu


@@ -2,3 +2,4 @@
obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o
obj-$(CONFIG_IMX_GPCV2_PM_DOMAINS) += gpcv2.o
obj-$(CONFIG_ARCH_MXC) += soc-imx8.o
obj-$(CONFIG_IMX_SCU_SOC) += soc-imx-scu.o


@@ -0,0 +1,144 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright 2019 NXP.
*/
#include <dt-bindings/firmware/imx/rsrc.h>
#include <linux/firmware/imx/sci.h>
#include <linux/slab.h>
#include <linux/sys_soc.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#define IMX_SCU_SOC_DRIVER_NAME "imx-scu-soc"
static struct imx_sc_ipc *soc_ipc_handle;
struct imx_sc_msg_misc_get_soc_id {
struct imx_sc_rpc_msg hdr;
union {
struct {
u32 control;
u16 resource;
} __packed req;
struct {
u32 id;
} resp;
} data;
} __packed;
static int imx_scu_soc_id(void)
{
struct imx_sc_msg_misc_get_soc_id msg;
struct imx_sc_rpc_msg *hdr = &msg.hdr;
int ret;
hdr->ver = IMX_SC_RPC_VERSION;
hdr->svc = IMX_SC_RPC_SVC_MISC;
hdr->func = IMX_SC_MISC_FUNC_GET_CONTROL;
hdr->size = 3;
msg.data.req.control = IMX_SC_C_ID;
msg.data.req.resource = IMX_SC_R_SYSTEM;
ret = imx_scu_call_rpc(soc_ipc_handle, &msg, true);
if (ret) {
pr_err("%s: get soc info failed, ret %d\n", __func__, ret);
return ret;
}
return msg.data.resp.id;
}
static int imx_scu_soc_probe(struct platform_device *pdev)
{
struct soc_device_attribute *soc_dev_attr;
struct soc_device *soc_dev;
int id, ret;
u32 val;
ret = imx_scu_get_handle(&soc_ipc_handle);
if (ret)
return ret;
soc_dev_attr = devm_kzalloc(&pdev->dev, sizeof(*soc_dev_attr),
GFP_KERNEL);
if (!soc_dev_attr)
return -ENOMEM;
soc_dev_attr->family = "Freescale i.MX";
ret = of_property_read_string(of_root,
"model",
&soc_dev_attr->machine);
if (ret)
return ret;
id = imx_scu_soc_id();
if (id < 0)
return -EINVAL;
/* format soc_id value passed from SCU firmware */
val = id & 0x1f;
soc_dev_attr->soc_id = kasprintf(GFP_KERNEL, "0x%x", val);
if (!soc_dev_attr->soc_id)
return -ENOMEM;
/* format revision value passed from SCU firmware */
val = (id >> 5) & 0xf;
val = (((val >> 2) + 1) << 4) | (val & 0x3);
soc_dev_attr->revision = kasprintf(GFP_KERNEL,
"%d.%d",
(val >> 4) & 0xf,
val & 0xf);
if (!soc_dev_attr->revision) {
ret = -ENOMEM;
goto free_soc_id;
}
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev)) {
ret = PTR_ERR(soc_dev);
goto free_revision;
}
return 0;
free_revision:
kfree(soc_dev_attr->revision);
free_soc_id:
kfree(soc_dev_attr->soc_id);
return ret;
}
static struct platform_driver imx_scu_soc_driver = {
.driver = {
.name = IMX_SCU_SOC_DRIVER_NAME,
},
.probe = imx_scu_soc_probe,
};
static int __init imx_scu_soc_init(void)
{
struct platform_device *pdev;
struct device_node *np;
int ret;
np = of_find_compatible_node(NULL, NULL, "fsl,imx-scu");
if (!np)
return -ENODEV;
of_node_put(np);
ret = platform_driver_register(&imx_scu_soc_driver);
if (ret)
return ret;
pdev = platform_device_register_simple(IMX_SCU_SOC_DRIVER_NAME,
-1, NULL, 0);
if (IS_ERR(pdev))
platform_driver_unregister(&imx_scu_soc_driver);
return PTR_ERR_OR_ZERO(pdev);
}
device_initcall(imx_scu_soc_init);
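The two kasprintf() calls above unpack a single word: bits [4:0] carry the SoC id and bits [8:5] a packed revision whose top two bits are major-1 and bottom two the minor. A standalone worked decode; the sample id value is made up:

/* Worked decode of the SCU id word; the sample value is an assumption. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t id = 0x33;			/* sample SCU response */
	uint32_t soc = id & 0x1f;		/* -> 0x13 */
	uint32_t rev = (id >> 5) & 0xf;		/* -> 0x1 */

	rev = (((rev >> 2) + 1) << 4) | (rev & 0x3);
	printf("soc_id 0x%x, revision %u.%u\n", soc,
	       (rev >> 4) & 0xf, rev & 0xf);	/* -> 0x13, 1.1 */
	return 0;
}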


@@ -16,6 +16,9 @@
#define IMX8MQ_SW_INFO_B1 0x40
#define IMX8MQ_SW_MAGIC_B1 0xff0055aa
/* Same as ANADIG_DIGPROG_IMX7D */
#define ANADIG_DIGPROG_IMX8MM 0x800
struct imx8_soc_data {
char *name;
u32 (*soc_revision)(void);
@@ -46,13 +49,45 @@ static u32 __init imx8mq_soc_revision(void)
return rev;
}
static u32 __init imx8mm_soc_revision(void)
{
struct device_node *np;
void __iomem *anatop_base;
u32 rev;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
if (!np)
return 0;
anatop_base = of_iomap(np, 0);
WARN_ON(!anatop_base);
rev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
iounmap(anatop_base);
of_node_put(np);
return rev;
}
static const struct imx8_soc_data imx8mq_soc_data = {
.name = "i.MX8MQ",
.soc_revision = imx8mq_soc_revision,
};
static const struct imx8_soc_data imx8mm_soc_data = {
.name = "i.MX8MM",
.soc_revision = imx8mm_soc_revision,
};
static const struct imx8_soc_data imx8mn_soc_data = {
.name = "i.MX8MN",
.soc_revision = imx8mm_soc_revision,
};
static const struct of_device_id imx8_soc_match[] = {
{ .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, },
{ .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, },
{ .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, },
{ }
};
@@ -65,7 +100,6 @@ static int __init imx8_soc_init(void)
{
struct soc_device_attribute *soc_dev_attr;
struct soc_device *soc_dev;
struct device_node *root;
const struct of_device_id *id;
u32 soc_rev = 0;
const struct imx8_soc_data *data;
@@ -73,20 +107,19 @@ static int __init imx8_soc_init(void)
soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
if (!soc_dev_attr)
return -ENODEV;
return -ENOMEM;
soc_dev_attr->family = "Freescale i.MX";
root = of_find_node_by_path("/");
ret = of_property_read_string(root, "model", &soc_dev_attr->machine);
ret = of_property_read_string(of_root, "model", &soc_dev_attr->machine);
if (ret)
goto free_soc;
id = of_match_node(imx8_soc_match, root);
if (!id)
id = of_match_node(imx8_soc_match, of_root);
if (!id) {
ret = -ENODEV;
goto free_soc;
of_node_put(root);
}
data = id->data;
if (data) {
@@ -96,12 +129,16 @@ static int __init imx8_soc_init(void)
}
soc_dev_attr->revision = imx8_revision(soc_rev);
if (!soc_dev_attr->revision)
if (!soc_dev_attr->revision) {
ret = -ENOMEM;
goto free_soc;
}
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev))
if (IS_ERR(soc_dev)) {
ret = PTR_ERR(soc_dev);
goto free_rev;
}
if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT))
platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
@@ -109,10 +146,10 @@ static int __init imx8_soc_init(void)
return 0;
free_rev:
kfree(soc_dev_attr->revision);
if (strcmp(soc_dev_attr->revision, "unknown"))
kfree(soc_dev_attr->revision);
free_soc:
kfree(soc_dev_attr);
of_node_put(root);
return -ENODEV;
return ret;
}
device_initcall(imx8_soc_init);


@@ -4,6 +4,18 @@
#
menu "Qualcomm SoC drivers"
config QCOM_AOSS_QMP
tristate "Qualcomm AOSS Driver"
depends on ARCH_QCOM || COMPILE_TEST
depends on MAILBOX
depends on COMMON_CLK && PM
select PM_GENERIC_DOMAINS
help
This driver provides a means of communicating with, and controlling,
the low-power state of resources related to the remoteproc
subsystems, as well as the debug clocks exposed by the Always On
Subsystem (AOSS), using the Qualcomm Messaging Protocol (QMP).
config QCOM_COMMAND_DB
bool "Qualcomm Command DB"
depends on ARCH_QCOM || COMPILE_TEST


@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS_rpmh-rsc.o := -I$(src)
obj-$(CONFIG_QCOM_AOSS_QMP) += qcom_aoss.o
obj-$(CONFIG_QCOM_GENI_SE) += qcom-geni-se.o
obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o
obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o


@@ -8,6 +8,7 @@
#include <linux/spinlock.h>
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include <linux/of_device.h>
#include <linux/soc/qcom/apr.h>
#include <linux/rpmsg.h>
@@ -17,8 +18,18 @@ struct apr {
struct rpmsg_endpoint *ch;
struct device *dev;
spinlock_t svcs_lock;
spinlock_t rx_lock;
struct idr svcs_idr;
int dest_domain_id;
struct workqueue_struct *rxwq;
struct work_struct rx_work;
struct list_head rx_list;
};
struct apr_rx_buf {
struct list_head node;
int len;
uint8_t buf[];
};
/**
@@ -62,11 +73,7 @@ static int apr_callback(struct rpmsg_device *rpdev, void *buf,
int len, void *priv, u32 addr)
{
struct apr *apr = dev_get_drvdata(&rpdev->dev);
uint16_t hdr_size, msg_type, ver, svc_id;
struct apr_device *svc = NULL;
struct apr_driver *adrv = NULL;
struct apr_resp_pkt resp;
struct apr_hdr *hdr;
struct apr_rx_buf *abuf;
unsigned long flags;
if (len <= APR_HDR_SIZE) {
@@ -75,6 +82,34 @@ static int apr_callback(struct rpmsg_device *rpdev, void *buf,
return -EINVAL;
}
abuf = kzalloc(sizeof(*abuf) + len, GFP_ATOMIC);
if (!abuf)
return -ENOMEM;
abuf->len = len;
memcpy(abuf->buf, buf, len);
spin_lock_irqsave(&apr->rx_lock, flags);
list_add_tail(&abuf->node, &apr->rx_list);
spin_unlock_irqrestore(&apr->rx_lock, flags);
queue_work(apr->rxwq, &apr->rx_work);
return 0;
}
static int apr_do_rx_callback(struct apr *apr, struct apr_rx_buf *abuf)
{
uint16_t hdr_size, msg_type, ver, svc_id;
struct apr_device *svc = NULL;
struct apr_driver *adrv = NULL;
struct apr_resp_pkt resp;
struct apr_hdr *hdr;
unsigned long flags;
void *buf = abuf->buf;
int len = abuf->len;
hdr = buf;
ver = APR_HDR_FIELD_VER(hdr->hdr_field);
if (ver > APR_PKT_VER + 1)
@@ -132,6 +167,23 @@ static int apr_callback(struct rpmsg_device *rpdev, void *buf,
return 0;
}
static void apr_rxwq(struct work_struct *work)
{
struct apr *apr = container_of(work, struct apr, rx_work);
struct apr_rx_buf *abuf, *b;
unsigned long flags;
if (!list_empty(&apr->rx_list)) {
list_for_each_entry_safe(abuf, b, &apr->rx_list, node) {
apr_do_rx_callback(apr, abuf);
spin_lock_irqsave(&apr->rx_lock, flags);
list_del(&abuf->node);
spin_unlock_irqrestore(&apr->rx_lock, flags);
kfree(abuf);
}
}
}
static int apr_device_match(struct device *dev, struct device_driver *drv)
{
struct apr_device *adev = to_apr_device(dev);
@@ -276,7 +328,7 @@ static int apr_probe(struct rpmsg_device *rpdev)
if (!apr)
return -ENOMEM;
ret = of_property_read_u32(dev->of_node, "reg", &apr->dest_domain_id);
ret = of_property_read_u32(dev->of_node, "qcom,apr-domain", &apr->dest_domain_id);
if (ret) {
dev_err(dev, "APR Domain ID not specified in DT\n");
return ret;
@@ -285,6 +337,14 @@ static int apr_probe(struct rpmsg_device *rpdev)
dev_set_drvdata(dev, apr);
apr->ch = rpdev->ept;
apr->dev = dev;
apr->rxwq = create_singlethread_workqueue("qcom_apr_rx");
if (!apr->rxwq) {
dev_err(apr->dev, "Failed to start Rx WQ\n");
return -ENOMEM;
}
INIT_WORK(&apr->rx_work, apr_rxwq);
INIT_LIST_HEAD(&apr->rx_list);
spin_lock_init(&apr->rx_lock);
spin_lock_init(&apr->svcs_lock);
idr_init(&apr->svcs_idr);
of_register_apr_devices(dev);
@@ -303,7 +363,11 @@ static int apr_remove_device(struct device *dev, void *null)
static void apr_remove(struct rpmsg_device *rpdev)
{
struct apr *apr = dev_get_drvdata(&rpdev->dev);
device_for_each_child(&rpdev->dev, NULL, apr_remove_device);
flush_workqueue(apr->rxwq);
destroy_workqueue(apr->rxwq);
}
/*


@@ -0,0 +1,480 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2019, Linaro Ltd
*/
#include <dt-bindings/power/qcom-aoss-qmp.h>
#include <linux/clk-provider.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#define QMP_DESC_MAGIC 0x0
#define QMP_DESC_VERSION 0x4
#define QMP_DESC_FEATURES 0x8
/* AOP-side offsets */
#define QMP_DESC_UCORE_LINK_STATE 0xc
#define QMP_DESC_UCORE_LINK_STATE_ACK 0x10
#define QMP_DESC_UCORE_CH_STATE 0x14
#define QMP_DESC_UCORE_CH_STATE_ACK 0x18
#define QMP_DESC_UCORE_MBOX_SIZE 0x1c
#define QMP_DESC_UCORE_MBOX_OFFSET 0x20
/* Linux-side offsets */
#define QMP_DESC_MCORE_LINK_STATE 0x24
#define QMP_DESC_MCORE_LINK_STATE_ACK 0x28
#define QMP_DESC_MCORE_CH_STATE 0x2c
#define QMP_DESC_MCORE_CH_STATE_ACK 0x30
#define QMP_DESC_MCORE_MBOX_SIZE 0x34
#define QMP_DESC_MCORE_MBOX_OFFSET 0x38
#define QMP_STATE_UP GENMASK(15, 0)
#define QMP_STATE_DOWN GENMASK(31, 16)
#define QMP_MAGIC 0x4d41494c /* mail */
#define QMP_VERSION 1
/* 64 bytes is enough to store the requests and provides padding to 4 bytes */
#define QMP_MSG_LEN 64
/**
* struct qmp - driver state for QMP implementation
* @msgram: iomem referencing the message RAM used for communication
* @dev: reference to QMP device
* @mbox_client: mailbox client used to ring the doorbell on transmit
* @mbox_chan: mailbox channel used to ring the doorbell on transmit
* @offset: offset within @msgram where messages should be written
* @size: maximum size of the messages to be transmitted
* @event: wait_queue for synchronization with the IRQ
* @tx_lock: provides synchronization between multiple callers of qmp_send()
* @qdss_clk: QDSS clock hw struct
* @pd_data: genpd data
*/
struct qmp {
void __iomem *msgram;
struct device *dev;
struct mbox_client mbox_client;
struct mbox_chan *mbox_chan;
size_t offset;
size_t size;
wait_queue_head_t event;
struct mutex tx_lock;
struct clk_hw qdss_clk;
struct genpd_onecell_data pd_data;
};
struct qmp_pd {
struct qmp *qmp;
struct generic_pm_domain pd;
};
#define to_qmp_pd_resource(res) container_of(res, struct qmp_pd, pd)
static void qmp_kick(struct qmp *qmp)
{
mbox_send_message(qmp->mbox_chan, NULL);
mbox_client_txdone(qmp->mbox_chan, 0);
}
static bool qmp_magic_valid(struct qmp *qmp)
{
return readl(qmp->msgram + QMP_DESC_MAGIC) == QMP_MAGIC;
}
static bool qmp_link_acked(struct qmp *qmp)
{
return readl(qmp->msgram + QMP_DESC_MCORE_LINK_STATE_ACK) == QMP_STATE_UP;
}
static bool qmp_mcore_channel_acked(struct qmp *qmp)
{
return readl(qmp->msgram + QMP_DESC_MCORE_CH_STATE_ACK) == QMP_STATE_UP;
}
static bool qmp_ucore_channel_up(struct qmp *qmp)
{
return readl(qmp->msgram + QMP_DESC_UCORE_CH_STATE) == QMP_STATE_UP;
}
static int qmp_open(struct qmp *qmp)
{
int ret;
u32 val;
if (!qmp_magic_valid(qmp)) {
dev_err(qmp->dev, "QMP magic doesn't match\n");
return -EINVAL;
}
val = readl(qmp->msgram + QMP_DESC_VERSION);
if (val != QMP_VERSION) {
dev_err(qmp->dev, "unsupported QMP version %d\n", val);
return -EINVAL;
}
qmp->offset = readl(qmp->msgram + QMP_DESC_MCORE_MBOX_OFFSET);
qmp->size = readl(qmp->msgram + QMP_DESC_MCORE_MBOX_SIZE);
if (!qmp->size) {
dev_err(qmp->dev, "invalid mailbox size\n");
return -EINVAL;
}
/* Ack remote core's link state */
val = readl(qmp->msgram + QMP_DESC_UCORE_LINK_STATE);
writel(val, qmp->msgram + QMP_DESC_UCORE_LINK_STATE_ACK);
/* Set local core's link state to up */
writel(QMP_STATE_UP, qmp->msgram + QMP_DESC_MCORE_LINK_STATE);
qmp_kick(qmp);
ret = wait_event_timeout(qmp->event, qmp_link_acked(qmp), HZ);
if (!ret) {
dev_err(qmp->dev, "ucore didn't ack link\n");
goto timeout_close_link;
}
writel(QMP_STATE_UP, qmp->msgram + QMP_DESC_MCORE_CH_STATE);
qmp_kick(qmp);
ret = wait_event_timeout(qmp->event, qmp_ucore_channel_up(qmp), HZ);
if (!ret) {
dev_err(qmp->dev, "ucore didn't open channel\n");
goto timeout_close_channel;
}
/* Ack remote core's channel state */
writel(QMP_STATE_UP, qmp->msgram + QMP_DESC_UCORE_CH_STATE_ACK);
qmp_kick(qmp);
ret = wait_event_timeout(qmp->event, qmp_mcore_channel_acked(qmp), HZ);
if (!ret) {
dev_err(qmp->dev, "ucore didn't ack channel\n");
goto timeout_close_channel;
}
return 0;
timeout_close_channel:
writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_CH_STATE);
timeout_close_link:
writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_LINK_STATE);
qmp_kick(qmp);
return -ETIMEDOUT;
}
static void qmp_close(struct qmp *qmp)
{
writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_CH_STATE);
writel(QMP_STATE_DOWN, qmp->msgram + QMP_DESC_MCORE_LINK_STATE);
qmp_kick(qmp);
}
static irqreturn_t qmp_intr(int irq, void *data)
{
struct qmp *qmp = data;
wake_up_interruptible_all(&qmp->event);
return IRQ_HANDLED;
}
static bool qmp_message_empty(struct qmp *qmp)
{
return readl(qmp->msgram + qmp->offset) == 0;
}
/**
* qmp_send() - send a message to the AOSS
* @qmp: qmp context
* @data: message to be sent
* @len: length of the message
*
* Transmit @data to AOSS and wait for the AOSS to acknowledge the message.
* @len must be a multiple of 4 and not longer than the mailbox size. Access is
* synchronized by this implementation.
*
* Return: 0 on success, negative errno on failure
*/
static int qmp_send(struct qmp *qmp, const void *data, size_t len)
{
long time_left;
int ret;
if (WARN_ON(len + sizeof(u32) > qmp->size))
return -EINVAL;
if (WARN_ON(len % sizeof(u32)))
return -EINVAL;
mutex_lock(&qmp->tx_lock);
/* The message RAM only implements 32-bit accesses */
__iowrite32_copy(qmp->msgram + qmp->offset + sizeof(u32),
data, len / sizeof(u32));
writel(len, qmp->msgram + qmp->offset);
qmp_kick(qmp);
time_left = wait_event_interruptible_timeout(qmp->event,
qmp_message_empty(qmp), HZ);
if (!time_left) {
dev_err(qmp->dev, "ucore did not ack channel\n");
ret = -ETIMEDOUT;
/* Clear message from buffer */
writel(0, qmp->msgram + qmp->offset);
} else {
ret = 0;
}
mutex_unlock(&qmp->tx_lock);
return ret;
}
static int qmp_qdss_clk_prepare(struct clk_hw *hw)
{
static const char buf[QMP_MSG_LEN] = "{class: clock, res: qdss, val: 1}";
struct qmp *qmp = container_of(hw, struct qmp, qdss_clk);
return qmp_send(qmp, buf, sizeof(buf));
}
static void qmp_qdss_clk_unprepare(struct clk_hw *hw)
{
static const char buf[QMP_MSG_LEN] = "{class: clock, res: qdss, val: 0}";
struct qmp *qmp = container_of(hw, struct qmp, qdss_clk);
qmp_send(qmp, buf, sizeof(buf));
}
static const struct clk_ops qmp_qdss_clk_ops = {
.prepare = qmp_qdss_clk_prepare,
.unprepare = qmp_qdss_clk_unprepare,
};
static int qmp_qdss_clk_add(struct qmp *qmp)
{
static const struct clk_init_data qdss_init = {
.ops = &qmp_qdss_clk_ops,
.name = "qdss",
};
int ret;
qmp->qdss_clk.init = &qdss_init;
ret = clk_hw_register(qmp->dev, &qmp->qdss_clk);
if (ret < 0) {
dev_err(qmp->dev, "failed to register qdss clock\n");
return ret;
}
ret = of_clk_add_hw_provider(qmp->dev->of_node, of_clk_hw_simple_get,
&qmp->qdss_clk);
if (ret < 0) {
dev_err(qmp->dev, "unable to register of clk hw provider\n");
clk_hw_unregister(&qmp->qdss_clk);
}
return ret;
}
static void qmp_qdss_clk_remove(struct qmp *qmp)
{
of_clk_del_provider(qmp->dev->of_node);
clk_hw_unregister(&qmp->qdss_clk);
}
static int qmp_pd_power_toggle(struct qmp_pd *res, bool enable)
{
char buf[QMP_MSG_LEN] = {};
snprintf(buf, sizeof(buf),
"{class: image, res: load_state, name: %s, val: %s}",
res->pd.name, enable ? "on" : "off");
return qmp_send(res->qmp, buf, sizeof(buf));
}
static int qmp_pd_power_on(struct generic_pm_domain *domain)
{
return qmp_pd_power_toggle(to_qmp_pd_resource(domain), true);
}
static int qmp_pd_power_off(struct generic_pm_domain *domain)
{
return qmp_pd_power_toggle(to_qmp_pd_resource(domain), false);
}
static const char * const sdm845_resources[] = {
[AOSS_QMP_LS_CDSP] = "cdsp",
[AOSS_QMP_LS_LPASS] = "adsp",
[AOSS_QMP_LS_MODEM] = "modem",
[AOSS_QMP_LS_SLPI] = "slpi",
[AOSS_QMP_LS_SPSS] = "spss",
[AOSS_QMP_LS_VENUS] = "venus",
};
static int qmp_pd_add(struct qmp *qmp)
{
struct genpd_onecell_data *data = &qmp->pd_data;
struct device *dev = qmp->dev;
struct qmp_pd *res;
size_t num = ARRAY_SIZE(sdm845_resources);
int ret;
int i;
res = devm_kcalloc(dev, num, sizeof(*res), GFP_KERNEL);
if (!res)
return -ENOMEM;
data->domains = devm_kcalloc(dev, num, sizeof(*data->domains),
GFP_KERNEL);
if (!data->domains)
return -ENOMEM;
for (i = 0; i < num; i++) {
res[i].qmp = qmp;
res[i].pd.name = sdm845_resources[i];
res[i].pd.power_on = qmp_pd_power_on;
res[i].pd.power_off = qmp_pd_power_off;
ret = pm_genpd_init(&res[i].pd, NULL, true);
if (ret < 0) {
dev_err(dev, "failed to init genpd\n");
goto unroll_genpds;
}
data->domains[i] = &res[i].pd;
}
data->num_domains = i;
ret = of_genpd_add_provider_onecell(dev->of_node, data);
if (ret < 0)
goto unroll_genpds;
return 0;
unroll_genpds:
for (i--; i >= 0; i--)
pm_genpd_remove(data->domains[i]);
return ret;
}
static void qmp_pd_remove(struct qmp *qmp)
{
struct genpd_onecell_data *data = &qmp->pd_data;
struct device *dev = qmp->dev;
int i;
of_genpd_del_provider(dev->of_node);
for (i = 0; i < data->num_domains; i++)
pm_genpd_remove(data->domains[i]);
}
static int qmp_probe(struct platform_device *pdev)
{
struct resource *res;
struct qmp *qmp;
int irq;
int ret;
qmp = devm_kzalloc(&pdev->dev, sizeof(*qmp), GFP_KERNEL);
if (!qmp)
return -ENOMEM;
qmp->dev = &pdev->dev;
init_waitqueue_head(&qmp->event);
mutex_init(&qmp->tx_lock);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
qmp->msgram = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(qmp->msgram))
return PTR_ERR(qmp->msgram);
qmp->mbox_client.dev = &pdev->dev;
qmp->mbox_client.knows_txdone = true;
qmp->mbox_chan = mbox_request_channel(&qmp->mbox_client, 0);
if (IS_ERR(qmp->mbox_chan)) {
dev_err(&pdev->dev, "failed to acquire ipc mailbox\n");
return PTR_ERR(qmp->mbox_chan);
}
irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(&pdev->dev, irq, qmp_intr, IRQF_ONESHOT,
"aoss-qmp", qmp);
if (ret < 0) {
dev_err(&pdev->dev, "failed to request interrupt\n");
goto err_free_mbox;
}
ret = qmp_open(qmp);
if (ret < 0)
goto err_free_mbox;
ret = qmp_qdss_clk_add(qmp);
if (ret)
goto err_close_qmp;
ret = qmp_pd_add(qmp);
if (ret)
goto err_remove_qdss_clk;
platform_set_drvdata(pdev, qmp);
return 0;
err_remove_qdss_clk:
qmp_qdss_clk_remove(qmp);
err_close_qmp:
qmp_close(qmp);
err_free_mbox:
mbox_free_channel(qmp->mbox_chan);
return ret;
}
static int qmp_remove(struct platform_device *pdev)
{
struct qmp *qmp = platform_get_drvdata(pdev);
qmp_qdss_clk_remove(qmp);
qmp_pd_remove(qmp);
qmp_close(qmp);
mbox_free_channel(qmp->mbox_chan);
return 0;
}
static const struct of_device_id qmp_dt_match[] = {
{ .compatible = "qcom,sdm845-aoss-qmp", },
{}
};
MODULE_DEVICE_TABLE(of, qmp_dt_match);
static struct platform_driver qmp_driver = {
.driver = {
.name = "qcom_aoss_qmp",
.of_match_table = qmp_dt_match,
},
.probe = qmp_probe,
.remove = qmp_remove,
};
module_platform_driver(qmp_driver);
MODULE_DESCRIPTION("Qualcomm AOSS QMP driver");
MODULE_LICENSE("GPL v2");


@@ -16,56 +16,76 @@
#define domain_to_rpmpd(domain) container_of(domain, struct rpmpd, pd)
/* Resource types */
/*
 * Resource types:
 * RPMPD_X is X encoded as a little-endian, lower-case, ASCII string
 */
#define RPMPD_SMPA 0x61706d73
#define RPMPD_LDOA 0x616f646c
#define RPMPD_RWCX 0x78637772
#define RPMPD_RWMX 0x786d7772
#define RPMPD_RWLC 0x636c7772
#define RPMPD_RWLM 0x6d6c7772
#define RPMPD_RWSC 0x63737772
#define RPMPD_RWSM 0x6d737772
/* Operation Keys */
#define KEY_CORNER 0x6e726f63 /* corn */
#define KEY_ENABLE 0x6e657773 /* swen */
#define KEY_FLOOR_CORNER 0x636676 /* vfc */
#define KEY_FLOOR_LEVEL 0x6c6676 /* vfl */
#define KEY_LEVEL 0x6c766c76 /* vlvl */
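Each of these magic numbers is simply the resource or key name packed as a little-endian ASCII string. A minimal standalone sketch (the rpm_fourcc() helper is hypothetical, for illustration only):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: pack up to four ASCII characters into the
 * little-endian u32 encoding used by the RPMPD_* and KEY_* values
 * above, e.g. "smpa" -> 0x61706d73 and "vlvl" -> 0x6c766c76. */
static uint32_t rpm_fourcc(const char *s)
{
	uint32_t v = 0;
	int i;

	for (i = 0; i < 4 && s[i]; i++)
		v |= (uint32_t)s[i] << (8 * i);
	return v;
}

int main(void)
{
	printf("RPMPD_SMPA = 0x%08x\n", (unsigned)rpm_fourcc("smpa"));
	printf("KEY_LEVEL = 0x%08x\n", (unsigned)rpm_fourcc("vlvl"));
	printf("KEY_FLOOR_CORNER = 0x%x\n", (unsigned)rpm_fourcc("vfc"));
	return 0;
}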
#define MAX_RPMPD_STATE 6
#define MAX_8996_RPMPD_STATE 6
#define DEFINE_RPMPD_CORNER_SMPA(_platform, _name, _active, r_id) \
#define DEFINE_RPMPD_PAIR(_platform, _name, _active, r_type, r_key, \
r_id) \
static struct rpmpd _platform##_##_active; \
static struct rpmpd _platform##_##_name = { \
.pd = { .name = #_name, }, \
.peer = &_platform##_##_active, \
.res_type = RPMPD_SMPA, \
.res_type = RPMPD_##r_type, \
.res_id = r_id, \
.key = KEY_CORNER, \
.key = KEY_##r_key, \
}; \
static struct rpmpd _platform##_##_active = { \
.pd = { .name = #_active, }, \
.peer = &_platform##_##_name, \
.active_only = true, \
.res_type = RPMPD_SMPA, \
.res_type = RPMPD_##r_type, \
.res_id = r_id, \
.key = KEY_##r_key, \
}
#define DEFINE_RPMPD_CORNER(_platform, _name, r_type, r_id) \
static struct rpmpd _platform##_##_name = { \
.pd = { .name = #_name, }, \
.res_type = RPMPD_##r_type, \
.res_id = r_id, \
.key = KEY_CORNER, \
}
#define DEFINE_RPMPD_CORNER_LDOA(_platform, _name, r_id) \
#define DEFINE_RPMPD_LEVEL(_platform, _name, r_type, r_id) \
static struct rpmpd _platform##_##_name = { \
.pd = { .name = #_name, }, \
.res_type = RPMPD_LDOA, \
.res_type = RPMPD_##r_type, \
.res_id = r_id, \
.key = KEY_CORNER, \
.key = KEY_LEVEL, \
}
#define DEFINE_RPMPD_VFC(_platform, _name, r_id, r_type) \
#define DEFINE_RPMPD_VFC(_platform, _name, r_type, r_id) \
static struct rpmpd _platform##_##_name = { \
.pd = { .name = #_name, }, \
.res_type = r_type, \
.res_type = RPMPD_##r_type, \
.res_id = r_id, \
.key = KEY_FLOOR_CORNER, \
}
#define DEFINE_RPMPD_VFC_SMPA(_platform, _name, r_id) \
DEFINE_RPMPD_VFC(_platform, _name, r_id, RPMPD_SMPA)
#define DEFINE_RPMPD_VFC_LDOA(_platform, _name, r_id) \
DEFINE_RPMPD_VFC(_platform, _name, r_id, RPMPD_LDOA)
#define DEFINE_RPMPD_VFL(_platform, _name, r_type, r_id) \
static struct rpmpd _platform##_##_name = { \
.pd = { .name = #_name, }, \
.res_type = RPMPD_##r_type, \
.res_id = r_id, \
.key = KEY_FLOOR_LEVEL, \
}
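To make the macro rework concrete: a later line such as DEFINE_RPMPD_PAIR(msm8998, vddcx, vddcx_ao, RWCX, LEVEL, 0) expands to roughly the following pair of cross-linked domains, where the _ao variant carries the active-state-only votes:

static struct rpmpd msm8998_vddcx_ao;
static struct rpmpd msm8998_vddcx = {
	.pd = { .name = "vddcx", },
	.peer = &msm8998_vddcx_ao,
	.res_type = RPMPD_RWCX,
	.res_id = 0,
	.key = KEY_LEVEL,
};
static struct rpmpd msm8998_vddcx_ao = {
	.pd = { .name = "vddcx_ao", },
	.peer = &msm8998_vddcx,
	.active_only = true,
	.res_type = RPMPD_RWCX,
	.res_id = 0,
	.key = KEY_LEVEL,
};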
struct rpmpd_req {
__le32 key;
@ -83,23 +103,25 @@ struct rpmpd {
const int res_type;
const int res_id;
struct qcom_smd_rpm *rpm;
unsigned int max_state;
__le32 key;
};
struct rpmpd_desc {
struct rpmpd **rpmpds;
size_t num_pds;
unsigned int max_state;
};
static DEFINE_MUTEX(rpmpd_lock);
/* msm8996 RPM Power domains */
DEFINE_RPMPD_CORNER_SMPA(msm8996, vddcx, vddcx_ao, 1);
DEFINE_RPMPD_CORNER_SMPA(msm8996, vddmx, vddmx_ao, 2);
DEFINE_RPMPD_CORNER_LDOA(msm8996, vddsscx, 26);
DEFINE_RPMPD_PAIR(msm8996, vddcx, vddcx_ao, SMPA, CORNER, 1);
DEFINE_RPMPD_PAIR(msm8996, vddmx, vddmx_ao, SMPA, CORNER, 2);
DEFINE_RPMPD_CORNER(msm8996, vddsscx, LDOA, 26);
DEFINE_RPMPD_VFC_SMPA(msm8996, vddcx_vfc, 1);
DEFINE_RPMPD_VFC_LDOA(msm8996, vddsscx_vfc, 26);
DEFINE_RPMPD_VFC(msm8996, vddcx_vfc, SMPA, 1);
DEFINE_RPMPD_VFC(msm8996, vddsscx_vfc, LDOA, 26);
static struct rpmpd *msm8996_rpmpds[] = {
[MSM8996_VDDCX] = &msm8996_vddcx,
@ -114,10 +136,71 @@ static struct rpmpd *msm8996_rpmpds[] = {
static const struct rpmpd_desc msm8996_desc = {
.rpmpds = msm8996_rpmpds,
.num_pds = ARRAY_SIZE(msm8996_rpmpds),
.max_state = MAX_8996_RPMPD_STATE,
};
/* msm8998 RPM Power domains */
DEFINE_RPMPD_PAIR(msm8998, vddcx, vddcx_ao, RWCX, LEVEL, 0);
DEFINE_RPMPD_VFL(msm8998, vddcx_vfl, RWCX, 0);
DEFINE_RPMPD_PAIR(msm8998, vddmx, vddmx_ao, RWMX, LEVEL, 0);
DEFINE_RPMPD_VFL(msm8998, vddmx_vfl, RWMX, 0);
DEFINE_RPMPD_LEVEL(msm8998, vdd_ssccx, RWSC, 0);
DEFINE_RPMPD_VFL(msm8998, vdd_ssccx_vfl, RWSC, 0);
DEFINE_RPMPD_LEVEL(msm8998, vdd_sscmx, RWSM, 0);
DEFINE_RPMPD_VFL(msm8998, vdd_sscmx_vfl, RWSM, 0);
static struct rpmpd *msm8998_rpmpds[] = {
[MSM8998_VDDCX] = &msm8998_vddcx,
[MSM8998_VDDCX_AO] = &msm8998_vddcx_ao,
[MSM8998_VDDCX_VFL] = &msm8998_vddcx_vfl,
[MSM8998_VDDMX] = &msm8998_vddmx,
[MSM8998_VDDMX_AO] = &msm8998_vddmx_ao,
[MSM8998_VDDMX_VFL] = &msm8998_vddmx_vfl,
[MSM8998_SSCCX] = &msm8998_vdd_ssccx,
[MSM8998_SSCCX_VFL] = &msm8998_vdd_ssccx_vfl,
[MSM8998_SSCMX] = &msm8998_vdd_sscmx,
[MSM8998_SSCMX_VFL] = &msm8998_vdd_sscmx_vfl,
};
static const struct rpmpd_desc msm8998_desc = {
.rpmpds = msm8998_rpmpds,
.num_pds = ARRAY_SIZE(msm8998_rpmpds),
.max_state = RPM_SMD_LEVEL_BINNING,
};
/* qcs404 RPM Power domains */
DEFINE_RPMPD_PAIR(qcs404, vddmx, vddmx_ao, RWMX, LEVEL, 0);
DEFINE_RPMPD_VFL(qcs404, vddmx_vfl, RWMX, 0);
DEFINE_RPMPD_LEVEL(qcs404, vdd_lpicx, RWLC, 0);
DEFINE_RPMPD_VFL(qcs404, vdd_lpicx_vfl, RWLC, 0);
DEFINE_RPMPD_LEVEL(qcs404, vdd_lpimx, RWLM, 0);
DEFINE_RPMPD_VFL(qcs404, vdd_lpimx_vfl, RWLM, 0);
static struct rpmpd *qcs404_rpmpds[] = {
[QCS404_VDDMX] = &qcs404_vddmx,
[QCS404_VDDMX_AO] = &qcs404_vddmx_ao,
[QCS404_VDDMX_VFL] = &qcs404_vddmx_vfl,
[QCS404_LPICX] = &qcs404_vdd_lpicx,
[QCS404_LPICX_VFL] = &qcs404_vdd_lpicx_vfl,
[QCS404_LPIMX] = &qcs404_vdd_lpimx,
[QCS404_LPIMX_VFL] = &qcs404_vdd_lpimx_vfl,
};
static const struct rpmpd_desc qcs404_desc = {
.rpmpds = qcs404_rpmpds,
.num_pds = ARRAY_SIZE(qcs404_rpmpds),
.max_state = RPM_SMD_LEVEL_BINNING,
};
static const struct of_device_id rpmpd_match_table[] = {
{ .compatible = "qcom,msm8996-rpmpd", .data = &msm8996_desc },
{ .compatible = "qcom,msm8998-rpmpd", .data = &msm8998_desc },
{ .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc },
{ }
};
@ -225,14 +308,16 @@ static int rpmpd_set_performance(struct generic_pm_domain *domain,
int ret = 0;
struct rpmpd *pd = domain_to_rpmpd(domain);
if (state > MAX_RPMPD_STATE)
goto out;
if (state > pd->max_state)
state = pd->max_state;
mutex_lock(&rpmpd_lock);
pd->corner = state;
if (!pd->enabled && pd->key != KEY_FLOOR_CORNER)
/* Always send updates for vfc and vfl */
if (!pd->enabled && pd->key != KEY_FLOOR_CORNER &&
pd->key != KEY_FLOOR_LEVEL)
goto out;
ret = rpmpd_aggregate_corner(pd);
@ -287,6 +372,7 @@ static int rpmpd_probe(struct platform_device *pdev)
}
rpmpds[i]->rpm = rpm;
rpmpds[i]->max_state = desc->max_state;
rpmpds[i]->pd.power_off = rpmpd_power_off;
rpmpds[i]->pd.power_on = rpmpd_power_on;
rpmpds[i]->pd.set_performance_state = rpmpd_set_performance;


@ -86,47 +86,47 @@ struct rockchip_pmu {
#define to_rockchip_pd(gpd) container_of(gpd, struct rockchip_pm_domain, genpd)
#define DOMAIN(pwr, status, req, idle, ack, wakeup) \
{ \
.pwr_mask = (pwr >= 0) ? BIT(pwr) : 0, \
.status_mask = (status >= 0) ? BIT(status) : 0, \
.req_mask = (req >= 0) ? BIT(req) : 0, \
.idle_mask = (idle >= 0) ? BIT(idle) : 0, \
.ack_mask = (ack >= 0) ? BIT(ack) : 0, \
.active_wakeup = wakeup, \
{ \
.pwr_mask = (pwr), \
.status_mask = (status), \
.req_mask = (req), \
.idle_mask = (idle), \
.ack_mask = (ack), \
.active_wakeup = (wakeup), \
}
#define DOMAIN_M(pwr, status, req, idle, ack, wakeup) \
{ \
.pwr_w_mask = (pwr >= 0) ? BIT(pwr + 16) : 0, \
.pwr_mask = (pwr >= 0) ? BIT(pwr) : 0, \
.status_mask = (status >= 0) ? BIT(status) : 0, \
.req_w_mask = (req >= 0) ? BIT(req + 16) : 0, \
.req_mask = (req >= 0) ? BIT(req) : 0, \
.idle_mask = (idle >= 0) ? BIT(idle) : 0, \
.ack_mask = (ack >= 0) ? BIT(ack) : 0, \
.pwr_w_mask = (pwr) << 16, \
.pwr_mask = (pwr), \
.status_mask = (status), \
.req_w_mask = (req) << 16, \
.req_mask = (req), \
.idle_mask = (idle), \
.ack_mask = (ack), \
.active_wakeup = wakeup, \
}
#define DOMAIN_RK3036(req, ack, idle, wakeup) \
{ \
.req_mask = (req >= 0) ? BIT(req) : 0, \
.req_w_mask = (req >= 0) ? BIT(req + 16) : 0, \
.ack_mask = (ack >= 0) ? BIT(ack) : 0, \
.idle_mask = (idle >= 0) ? BIT(idle) : 0, \
.req_mask = (req), \
.req_w_mask = (req) << 16, \
.ack_mask = (ack), \
.idle_mask = (idle), \
.active_wakeup = wakeup, \
}
#define DOMAIN_PX30(pwr, status, req, wakeup) \
DOMAIN_M(pwr, status, req, (req) + 16, req, wakeup)
DOMAIN_M(pwr, status, req, (req) << 16, req, wakeup)
#define DOMAIN_RK3288(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, req, (req) + 16, wakeup)
DOMAIN(pwr, status, req, req, (req) << 16, wakeup)
#define DOMAIN_RK3328(pwr, status, req, wakeup) \
DOMAIN_M(pwr, pwr, req, (req) + 10, req, wakeup)
DOMAIN_M(pwr, pwr, req, (req) << 10, req, wakeup)
#define DOMAIN_RK3368(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, (req) + 16, req, wakeup)
DOMAIN(pwr, status, req, (req) << 16, req, wakeup)
#define DOMAIN_RK3399(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, req, req, wakeup)
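The rework above moves BIT() from the macro bodies to the call sites, so the per-SoC wrappers now shift whole masks instead of adding an offset to a bit index. A standalone sketch checking that the two forms agree (BIT() defined locally for the sketch; RK3368's idle field, previously the index (req) + 16, serves as the example):

#include <assert.h>

#define BIT(n) (1U << (n))

int main(void)
{
	/* old scheme: pass bit index 8, build the mask inside the macro */
	unsigned int old_idle_mask = BIT(8 + 16);
	/* new scheme: pass BIT(8) and shift the mask at the macro level */
	unsigned int new_idle_mask = BIT(8) << 16;

	assert(old_idle_mask == new_idle_mask); /* both are BIT(24) */
	return 0;
}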
@ -716,129 +716,129 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
}
static const struct rockchip_domain_info px30_pm_domains[] = {
[PX30_PD_USB] = DOMAIN_PX30(5, 5, 10, false),
[PX30_PD_SDCARD] = DOMAIN_PX30(8, 8, 9, false),
[PX30_PD_GMAC] = DOMAIN_PX30(10, 10, 6, false),
[PX30_PD_MMC_NAND] = DOMAIN_PX30(11, 11, 5, false),
[PX30_PD_VPU] = DOMAIN_PX30(12, 12, 14, false),
[PX30_PD_VO] = DOMAIN_PX30(13, 13, 7, false),
[PX30_PD_VI] = DOMAIN_PX30(14, 14, 8, false),
[PX30_PD_GPU] = DOMAIN_PX30(15, 15, 2, false),
[PX30_PD_USB] = DOMAIN_PX30(BIT(5), BIT(5), BIT(10), false),
[PX30_PD_SDCARD] = DOMAIN_PX30(BIT(8), BIT(8), BIT(9), false),
[PX30_PD_GMAC] = DOMAIN_PX30(BIT(10), BIT(10), BIT(6), false),
[PX30_PD_MMC_NAND] = DOMAIN_PX30(BIT(11), BIT(11), BIT(5), false),
[PX30_PD_VPU] = DOMAIN_PX30(BIT(12), BIT(12), BIT(14), false),
[PX30_PD_VO] = DOMAIN_PX30(BIT(13), BIT(13), BIT(7), false),
[PX30_PD_VI] = DOMAIN_PX30(BIT(14), BIT(14), BIT(8), false),
[PX30_PD_GPU] = DOMAIN_PX30(BIT(15), BIT(15), BIT(2), false),
};
static const struct rockchip_domain_info rk3036_pm_domains[] = {
[RK3036_PD_MSCH] = DOMAIN_RK3036(14, 23, 30, true),
[RK3036_PD_CORE] = DOMAIN_RK3036(13, 17, 24, false),
[RK3036_PD_PERI] = DOMAIN_RK3036(12, 18, 25, false),
[RK3036_PD_VIO] = DOMAIN_RK3036(11, 19, 26, false),
[RK3036_PD_VPU] = DOMAIN_RK3036(10, 20, 27, false),
[RK3036_PD_GPU] = DOMAIN_RK3036(9, 21, 28, false),
[RK3036_PD_SYS] = DOMAIN_RK3036(8, 22, 29, false),
[RK3036_PD_MSCH] = DOMAIN_RK3036(BIT(14), BIT(23), BIT(30), true),
[RK3036_PD_CORE] = DOMAIN_RK3036(BIT(13), BIT(17), BIT(24), false),
[RK3036_PD_PERI] = DOMAIN_RK3036(BIT(12), BIT(18), BIT(25), false),
[RK3036_PD_VIO] = DOMAIN_RK3036(BIT(11), BIT(19), BIT(26), false),
[RK3036_PD_VPU] = DOMAIN_RK3036(BIT(10), BIT(20), BIT(27), false),
[RK3036_PD_GPU] = DOMAIN_RK3036(BIT(9), BIT(21), BIT(28), false),
[RK3036_PD_SYS] = DOMAIN_RK3036(BIT(8), BIT(22), BIT(29), false),
};
static const struct rockchip_domain_info rk3066_pm_domains[] = {
[RK3066_PD_GPU] = DOMAIN(9, 9, 3, 24, 29, false),
[RK3066_PD_VIDEO] = DOMAIN(8, 8, 4, 23, 28, false),
[RK3066_PD_VIO] = DOMAIN(7, 7, 5, 22, 27, false),
[RK3066_PD_PERI] = DOMAIN(6, 6, 2, 25, 30, false),
[RK3066_PD_CPU] = DOMAIN(-1, 5, 1, 26, 31, false),
[RK3066_PD_GPU] = DOMAIN(BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false),
[RK3066_PD_VIDEO] = DOMAIN(BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false),
[RK3066_PD_VIO] = DOMAIN(BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false),
[RK3066_PD_PERI] = DOMAIN(BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false),
[RK3066_PD_CPU] = DOMAIN(0, BIT(5), BIT(1), BIT(26), BIT(31), false),
};
static const struct rockchip_domain_info rk3128_pm_domains[] = {
[RK3128_PD_CORE] = DOMAIN_RK3288(0, 0, 4, false),
[RK3128_PD_MSCH] = DOMAIN_RK3288(-1, -1, 6, true),
[RK3128_PD_VIO] = DOMAIN_RK3288(3, 3, 2, false),
[RK3128_PD_VIDEO] = DOMAIN_RK3288(2, 2, 1, false),
[RK3128_PD_GPU] = DOMAIN_RK3288(1, 1, 3, false),
[RK3128_PD_CORE] = DOMAIN_RK3288(BIT(0), BIT(0), BIT(4), false),
[RK3128_PD_MSCH] = DOMAIN_RK3288(0, 0, BIT(6), true),
[RK3128_PD_VIO] = DOMAIN_RK3288(BIT(3), BIT(3), BIT(2), false),
[RK3128_PD_VIDEO] = DOMAIN_RK3288(BIT(2), BIT(2), BIT(1), false),
[RK3128_PD_GPU] = DOMAIN_RK3288(BIT(1), BIT(1), BIT(3), false),
};
static const struct rockchip_domain_info rk3188_pm_domains[] = {
[RK3188_PD_GPU] = DOMAIN(9, 9, 3, 24, 29, false),
[RK3188_PD_VIDEO] = DOMAIN(8, 8, 4, 23, 28, false),
[RK3188_PD_VIO] = DOMAIN(7, 7, 5, 22, 27, false),
[RK3188_PD_PERI] = DOMAIN(6, 6, 2, 25, 30, false),
[RK3188_PD_CPU] = DOMAIN(5, 5, 1, 26, 31, false),
[RK3188_PD_GPU] = DOMAIN(BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false),
[RK3188_PD_VIDEO] = DOMAIN(BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false),
[RK3188_PD_VIO] = DOMAIN(BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false),
[RK3188_PD_PERI] = DOMAIN(BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false),
[RK3188_PD_CPU] = DOMAIN(BIT(5), BIT(5), BIT(1), BIT(26), BIT(31), false),
};
static const struct rockchip_domain_info rk3228_pm_domains[] = {
[RK3228_PD_CORE] = DOMAIN_RK3036(0, 0, 16, true),
[RK3228_PD_MSCH] = DOMAIN_RK3036(1, 1, 17, true),
[RK3228_PD_BUS] = DOMAIN_RK3036(2, 2, 18, true),
[RK3228_PD_SYS] = DOMAIN_RK3036(3, 3, 19, true),
[RK3228_PD_VIO] = DOMAIN_RK3036(4, 4, 20, false),
[RK3228_PD_VOP] = DOMAIN_RK3036(5, 5, 21, false),
[RK3228_PD_VPU] = DOMAIN_RK3036(6, 6, 22, false),
[RK3228_PD_RKVDEC] = DOMAIN_RK3036(7, 7, 23, false),
[RK3228_PD_GPU] = DOMAIN_RK3036(8, 8, 24, false),
[RK3228_PD_PERI] = DOMAIN_RK3036(9, 9, 25, true),
[RK3228_PD_GMAC] = DOMAIN_RK3036(10, 10, 26, false),
[RK3228_PD_CORE] = DOMAIN_RK3036(BIT(0), BIT(0), BIT(16), true),
[RK3228_PD_MSCH] = DOMAIN_RK3036(BIT(1), BIT(1), BIT(17), true),
[RK3228_PD_BUS] = DOMAIN_RK3036(BIT(2), BIT(2), BIT(18), true),
[RK3228_PD_SYS] = DOMAIN_RK3036(BIT(3), BIT(3), BIT(19), true),
[RK3228_PD_VIO] = DOMAIN_RK3036(BIT(4), BIT(4), BIT(20), false),
[RK3228_PD_VOP] = DOMAIN_RK3036(BIT(5), BIT(5), BIT(21), false),
[RK3228_PD_VPU] = DOMAIN_RK3036(BIT(6), BIT(6), BIT(22), false),
[RK3228_PD_RKVDEC] = DOMAIN_RK3036(BIT(7), BIT(7), BIT(23), false),
[RK3228_PD_GPU] = DOMAIN_RK3036(BIT(8), BIT(8), BIT(24), false),
[RK3228_PD_PERI] = DOMAIN_RK3036(BIT(9), BIT(9), BIT(25), true),
[RK3228_PD_GMAC] = DOMAIN_RK3036(BIT(10), BIT(10), BIT(26), false),
};
static const struct rockchip_domain_info rk3288_pm_domains[] = {
[RK3288_PD_VIO] = DOMAIN_RK3288(7, 7, 4, false),
[RK3288_PD_HEVC] = DOMAIN_RK3288(14, 10, 9, false),
[RK3288_PD_VIDEO] = DOMAIN_RK3288(8, 8, 3, false),
[RK3288_PD_GPU] = DOMAIN_RK3288(9, 9, 2, false),
[RK3288_PD_VIO] = DOMAIN_RK3288(BIT(7), BIT(7), BIT(4), false),
[RK3288_PD_HEVC] = DOMAIN_RK3288(BIT(14), BIT(10), BIT(9), false),
[RK3288_PD_VIDEO] = DOMAIN_RK3288(BIT(8), BIT(8), BIT(3), false),
[RK3288_PD_GPU] = DOMAIN_RK3288(BIT(9), BIT(9), BIT(2), false),
};
static const struct rockchip_domain_info rk3328_pm_domains[] = {
[RK3328_PD_CORE] = DOMAIN_RK3328(-1, 0, 0, false),
[RK3328_PD_GPU] = DOMAIN_RK3328(-1, 1, 1, false),
[RK3328_PD_BUS] = DOMAIN_RK3328(-1, 2, 2, true),
[RK3328_PD_MSCH] = DOMAIN_RK3328(-1, 3, 3, true),
[RK3328_PD_PERI] = DOMAIN_RK3328(-1, 4, 4, true),
[RK3328_PD_VIDEO] = DOMAIN_RK3328(-1, 5, 5, false),
[RK3328_PD_HEVC] = DOMAIN_RK3328(-1, 6, 6, false),
[RK3328_PD_VIO] = DOMAIN_RK3328(-1, 8, 8, false),
[RK3328_PD_VPU] = DOMAIN_RK3328(-1, 9, 9, false),
[RK3328_PD_CORE] = DOMAIN_RK3328(0, BIT(0), BIT(0), false),
[RK3328_PD_GPU] = DOMAIN_RK3328(0, BIT(1), BIT(1), false),
[RK3328_PD_BUS] = DOMAIN_RK3328(0, BIT(2), BIT(2), true),
[RK3328_PD_MSCH] = DOMAIN_RK3328(0, BIT(3), BIT(3), true),
[RK3328_PD_PERI] = DOMAIN_RK3328(0, BIT(4), BIT(4), true),
[RK3328_PD_VIDEO] = DOMAIN_RK3328(0, BIT(5), BIT(5), false),
[RK3328_PD_HEVC] = DOMAIN_RK3328(0, BIT(6), BIT(6), false),
[RK3328_PD_VIO] = DOMAIN_RK3328(0, BIT(8), BIT(8), false),
[RK3328_PD_VPU] = DOMAIN_RK3328(0, BIT(9), BIT(9), false),
};
static const struct rockchip_domain_info rk3366_pm_domains[] = {
[RK3366_PD_PERI] = DOMAIN_RK3368(10, 10, 6, true),
[RK3366_PD_VIO] = DOMAIN_RK3368(14, 14, 8, false),
[RK3366_PD_VIDEO] = DOMAIN_RK3368(13, 13, 7, false),
[RK3366_PD_RKVDEC] = DOMAIN_RK3368(11, 11, 7, false),
[RK3366_PD_WIFIBT] = DOMAIN_RK3368(8, 8, 9, false),
[RK3366_PD_VPU] = DOMAIN_RK3368(12, 12, 7, false),
[RK3366_PD_GPU] = DOMAIN_RK3368(15, 15, 2, false),
[RK3366_PD_PERI] = DOMAIN_RK3368(BIT(10), BIT(10), BIT(6), true),
[RK3366_PD_VIO] = DOMAIN_RK3368(BIT(14), BIT(14), BIT(8), false),
[RK3366_PD_VIDEO] = DOMAIN_RK3368(BIT(13), BIT(13), BIT(7), false),
[RK3366_PD_RKVDEC] = DOMAIN_RK3368(BIT(11), BIT(11), BIT(7), false),
[RK3366_PD_WIFIBT] = DOMAIN_RK3368(BIT(8), BIT(8), BIT(9), false),
[RK3366_PD_VPU] = DOMAIN_RK3368(BIT(12), BIT(12), BIT(7), false),
[RK3366_PD_GPU] = DOMAIN_RK3368(BIT(15), BIT(15), BIT(2), false),
};
static const struct rockchip_domain_info rk3368_pm_domains[] = {
[RK3368_PD_PERI] = DOMAIN_RK3368(13, 12, 6, true),
[RK3368_PD_VIO] = DOMAIN_RK3368(15, 14, 8, false),
[RK3368_PD_VIDEO] = DOMAIN_RK3368(14, 13, 7, false),
[RK3368_PD_GPU_0] = DOMAIN_RK3368(16, 15, 2, false),
[RK3368_PD_GPU_1] = DOMAIN_RK3368(17, 16, 2, false),
[RK3368_PD_PERI] = DOMAIN_RK3368(BIT(13), BIT(12), BIT(6), true),
[RK3368_PD_VIO] = DOMAIN_RK3368(BIT(15), BIT(14), BIT(8), false),
[RK3368_PD_VIDEO] = DOMAIN_RK3368(BIT(14), BIT(13), BIT(7), false),
[RK3368_PD_GPU_0] = DOMAIN_RK3368(BIT(16), BIT(15), BIT(2), false),
[RK3368_PD_GPU_1] = DOMAIN_RK3368(BIT(17), BIT(16), BIT(2), false),
};
static const struct rockchip_domain_info rk3399_pm_domains[] = {
[RK3399_PD_TCPD0] = DOMAIN_RK3399(8, 8, -1, false),
[RK3399_PD_TCPD1] = DOMAIN_RK3399(9, 9, -1, false),
[RK3399_PD_CCI] = DOMAIN_RK3399(10, 10, -1, true),
[RK3399_PD_CCI0] = DOMAIN_RK3399(-1, -1, 15, true),
[RK3399_PD_CCI1] = DOMAIN_RK3399(-1, -1, 16, true),
[RK3399_PD_PERILP] = DOMAIN_RK3399(11, 11, 1, true),
[RK3399_PD_PERIHP] = DOMAIN_RK3399(12, 12, 2, true),
[RK3399_PD_CENTER] = DOMAIN_RK3399(13, 13, 14, true),
[RK3399_PD_VIO] = DOMAIN_RK3399(14, 14, 17, false),
[RK3399_PD_GPU] = DOMAIN_RK3399(15, 15, 0, false),
[RK3399_PD_VCODEC] = DOMAIN_RK3399(16, 16, 3, false),
[RK3399_PD_VDU] = DOMAIN_RK3399(17, 17, 4, false),
[RK3399_PD_RGA] = DOMAIN_RK3399(18, 18, 5, false),
[RK3399_PD_IEP] = DOMAIN_RK3399(19, 19, 6, false),
[RK3399_PD_VO] = DOMAIN_RK3399(20, 20, -1, false),
[RK3399_PD_VOPB] = DOMAIN_RK3399(-1, -1, 7, false),
[RK3399_PD_VOPL] = DOMAIN_RK3399(-1, -1, 8, false),
[RK3399_PD_ISP0] = DOMAIN_RK3399(22, 22, 9, false),
[RK3399_PD_ISP1] = DOMAIN_RK3399(23, 23, 10, false),
[RK3399_PD_HDCP] = DOMAIN_RK3399(24, 24, 11, false),
[RK3399_PD_GMAC] = DOMAIN_RK3399(25, 25, 23, true),
[RK3399_PD_EMMC] = DOMAIN_RK3399(26, 26, 24, true),
[RK3399_PD_USB3] = DOMAIN_RK3399(27, 27, 12, true),
[RK3399_PD_EDP] = DOMAIN_RK3399(28, 28, 22, false),
[RK3399_PD_GIC] = DOMAIN_RK3399(29, 29, 27, true),
[RK3399_PD_SD] = DOMAIN_RK3399(30, 30, 28, true),
[RK3399_PD_SDIOAUDIO] = DOMAIN_RK3399(31, 31, 29, true),
[RK3399_PD_TCPD0] = DOMAIN_RK3399(BIT(8), BIT(8), 0, false),
[RK3399_PD_TCPD1] = DOMAIN_RK3399(BIT(9), BIT(9), 0, false),
[RK3399_PD_CCI] = DOMAIN_RK3399(BIT(10), BIT(10), 0, true),
[RK3399_PD_CCI0] = DOMAIN_RK3399(0, 0, BIT(15), true),
[RK3399_PD_CCI1] = DOMAIN_RK3399(0, 0, BIT(16), true),
[RK3399_PD_PERILP] = DOMAIN_RK3399(BIT(11), BIT(11), BIT(1), true),
[RK3399_PD_PERIHP] = DOMAIN_RK3399(BIT(12), BIT(12), BIT(2), true),
[RK3399_PD_CENTER] = DOMAIN_RK3399(BIT(13), BIT(13), BIT(14), true),
[RK3399_PD_VIO] = DOMAIN_RK3399(BIT(14), BIT(14), BIT(17), false),
[RK3399_PD_GPU] = DOMAIN_RK3399(BIT(15), BIT(15), BIT(0), false),
[RK3399_PD_VCODEC] = DOMAIN_RK3399(BIT(16), BIT(16), BIT(3), false),
[RK3399_PD_VDU] = DOMAIN_RK3399(BIT(17), BIT(17), BIT(4), false),
[RK3399_PD_RGA] = DOMAIN_RK3399(BIT(18), BIT(18), BIT(5), false),
[RK3399_PD_IEP] = DOMAIN_RK3399(BIT(19), BIT(19), BIT(6), false),
[RK3399_PD_VO] = DOMAIN_RK3399(BIT(20), BIT(20), 0, false),
[RK3399_PD_VOPB] = DOMAIN_RK3399(0, 0, BIT(7), false),
[RK3399_PD_VOPL] = DOMAIN_RK3399(0, 0, BIT(8), false),
[RK3399_PD_ISP0] = DOMAIN_RK3399(BIT(22), BIT(22), BIT(9), false),
[RK3399_PD_ISP1] = DOMAIN_RK3399(BIT(23), BIT(23), BIT(10), false),
[RK3399_PD_HDCP] = DOMAIN_RK3399(BIT(24), BIT(24), BIT(11), false),
[RK3399_PD_GMAC] = DOMAIN_RK3399(BIT(25), BIT(25), BIT(23), true),
[RK3399_PD_EMMC] = DOMAIN_RK3399(BIT(26), BIT(26), BIT(24), true),
[RK3399_PD_USB3] = DOMAIN_RK3399(BIT(27), BIT(27), BIT(12), true),
[RK3399_PD_EDP] = DOMAIN_RK3399(BIT(28), BIT(28), BIT(22), false),
[RK3399_PD_GIC] = DOMAIN_RK3399(BIT(29), BIT(29), BIT(27), true),
[RK3399_PD_SD] = DOMAIN_RK3399(BIT(30), BIT(30), BIT(28), true),
[RK3399_PD_SDIOAUDIO] = DOMAIN_RK3399(BIT(31), BIT(31), BIT(29), true),
};
static const struct rockchip_pmu_info px30_pmu = {


@ -109,6 +109,7 @@ config ARCH_TEGRA_186_SOC
config ARCH_TEGRA_194_SOC
bool "NVIDIA Tegra194 SoC"
select MAILBOX
select PINCTRL_TEGRA194
select TEGRA_BPMP
select TEGRA_HSP_MBOX
select TEGRA_IVC


@ -133,8 +133,10 @@ static int tegra_fuse_probe(struct platform_device *pdev)
fuse->clk = devm_clk_get(&pdev->dev, "fuse");
if (IS_ERR(fuse->clk)) {
dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
PTR_ERR(fuse->clk));
if (PTR_ERR(fuse->clk) != -EPROBE_DEFER)
dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
PTR_ERR(fuse->clk));
fuse->base = base;
return PTR_ERR(fuse->clk);
}


@ -232,6 +232,11 @@ struct tegra_pmc_soc {
const char * const *reset_levels;
unsigned int num_reset_levels;
/*
* These describe events that can wake the system from sleep (i.e.
* LP0 or SC7). Wakeup from other sleep states (such as LP1 or LP2)
* is dealt with in the LIC.
*/
const struct tegra_wake_event *wake_events;
unsigned int num_wake_events;
};
@ -1855,6 +1860,9 @@ static int tegra_pmc_irq_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int i;
int err = 0;
if (WARN_ON(num_irqs > 1))
return -EINVAL;
for (i = 0; i < soc->num_wake_events; i++) {
const struct tegra_wake_event *event = &soc->wake_events[i];
@ -1895,6 +1903,11 @@ static int tegra_pmc_irq_alloc(struct irq_domain *domain, unsigned int virq,
}
}
/*
* For interrupts that don't have associated wake events, assign a
* dummy hardware IRQ number. This is used in the ->irq_set_type()
* and ->irq_set_wake() callbacks to return early for these IRQs.
*/
if (i == soc->num_wake_events)
err = irq_domain_set_hwirq_and_chip(domain, virq, ULONG_MAX,
&pmc->irq, pmc);
@ -1913,6 +1926,10 @@ static int tegra_pmc_irq_set_wake(struct irq_data *data, unsigned int on)
unsigned int offset, bit;
u32 value;
/* nothing to do if there's no associated wake event */
if (WARN_ON(data->hwirq == ULONG_MAX))
return 0;
offset = data->hwirq / 32;
bit = data->hwirq % 32;
@ -1940,6 +1957,7 @@ static int tegra_pmc_irq_set_type(struct irq_data *data, unsigned int type)
struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data);
u32 value;
/* nothing to do if there's no associated wake event */
if (data->hwirq == ULONG_MAX)
return 0;


@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2018, Linaro Ltd. */
#ifndef __DT_BINDINGS_POWER_QCOM_AOSS_QMP_H
#define __DT_BINDINGS_POWER_QCOM_AOSS_QMP_H
#define AOSS_QMP_LS_CDSP 0
#define AOSS_QMP_LS_LPASS 1
#define AOSS_QMP_LS_MODEM 2
#define AOSS_QMP_LS_SLPI 3
#define AOSS_QMP_LS_SPSS 4
#define AOSS_QMP_LS_VENUS 5
#endif


@ -36,4 +36,38 @@
#define MSM8996_VDDSSCX 5
#define MSM8996_VDDSSCX_VFC 6
/* MSM8998 Power Domain Indexes */
#define MSM8998_VDDCX 0
#define MSM8998_VDDCX_AO 1
#define MSM8998_VDDCX_VFL 2
#define MSM8998_VDDMX 3
#define MSM8998_VDDMX_AO 4
#define MSM8998_VDDMX_VFL 5
#define MSM8998_SSCCX 6
#define MSM8998_SSCCX_VFL 7
#define MSM8998_SSCMX 8
#define MSM8998_SSCMX_VFL 9
/* QCS404 Power Domains */
#define QCS404_VDDMX 0
#define QCS404_VDDMX_AO 1
#define QCS404_VDDMX_VFL 2
#define QCS404_LPICX 3
#define QCS404_LPICX_VFL 4
#define QCS404_LPIMX 5
#define QCS404_LPIMX_VFL 6
/* RPM SMD Power Domain performance levels */
#define RPM_SMD_LEVEL_RETENTION 16
#define RPM_SMD_LEVEL_RETENTION_PLUS 32
#define RPM_SMD_LEVEL_MIN_SVS 48
#define RPM_SMD_LEVEL_LOW_SVS 64
#define RPM_SMD_LEVEL_SVS 128
#define RPM_SMD_LEVEL_SVS_PLUS 192
#define RPM_SMD_LEVEL_NOM 256
#define RPM_SMD_LEVEL_NOM_PLUS 320
#define RPM_SMD_LEVEL_TURBO 384
#define RPM_SMD_LEVEL_TURBO_NO_CPR 416
#define RPM_SMD_LEVEL_BINNING 512
#endif
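Consumers vote on these levels through the genpd performance-state API; a hedged sketch (device/domain attachment omitted, function name illustrative):

#include <linux/pm_domain.h>
#include <dt-bindings/power/qcom-rpmpd.h>

static int example_vote_svs(struct device *dev)
{
	/* Request at least the SVS level on the attached power domain;
	 * rpmpd_set_performance() clamps the request to the domain's
	 * max_state. */
	return dev_pm_genpd_set_performance_state(dev, RPM_SMD_LEVEL_SVS);
}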


@ -0,0 +1,51 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Copyright (c) 2018 Bitmain Ltd.
* Copyright (c) 2019 Linaro Ltd.
*/
#ifndef _DT_BINDINGS_BM1880_RESET_H
#define _DT_BINDINGS_BM1880_RESET_H
#define BM1880_RST_MAIN_AP 0
#define BM1880_RST_SECOND_AP 1
#define BM1880_RST_DDR 2
#define BM1880_RST_VIDEO 3
#define BM1880_RST_JPEG 4
#define BM1880_RST_VPP 5
#define BM1880_RST_GDMA 6
#define BM1880_RST_AXI_SRAM 7
#define BM1880_RST_TPU 8
#define BM1880_RST_USB 9
#define BM1880_RST_ETH0 10
#define BM1880_RST_ETH1 11
#define BM1880_RST_NAND 12
#define BM1880_RST_EMMC 13
#define BM1880_RST_SD 14
#define BM1880_RST_SDMA 15
#define BM1880_RST_I2S0 16
#define BM1880_RST_I2S1 17
#define BM1880_RST_UART0_1_CLK 18
#define BM1880_RST_UART0_1_ACLK 19
#define BM1880_RST_UART2_3_CLK 20
#define BM1880_RST_UART2_3_ACLK 21
#define BM1880_RST_MINER 22
#define BM1880_RST_I2C0 23
#define BM1880_RST_I2C1 24
#define BM1880_RST_I2C2 25
#define BM1880_RST_I2C3 26
#define BM1880_RST_I2C4 27
#define BM1880_RST_PWM0 28
#define BM1880_RST_PWM1 29
#define BM1880_RST_PWM2 30
#define BM1880_RST_PWM3 31
#define BM1880_RST_SPI 32
#define BM1880_RST_GPIO0 33
#define BM1880_RST_GPIO1 34
#define BM1880_RST_GPIO2 35
#define BM1880_RST_EFUSE 36
#define BM1880_RST_WDT 37
#define BM1880_RST_AHB_ROM 38
#define BM1880_RST_SPIC 39
#endif /* _DT_BINDINGS_BM1880_RESET_H */


@ -19,6 +19,7 @@ enum ti_sysc_module_type {
struct ti_sysc_cookie {
void *data;
void *clkdm;
};
/**
@ -46,6 +47,10 @@ struct sysc_regbits {
s8 emufree_shift;
};
#define SYSC_MODULE_QUIRK_HDQ1W BIT(17)
#define SYSC_MODULE_QUIRK_I2C BIT(16)
#define SYSC_MODULE_QUIRK_WDT BIT(15)
#define SYSS_QUIRK_RESETDONE_INVERTED BIT(14)
#define SYSC_QUIRK_SWSUP_MSTANDBY BIT(13)
#define SYSC_QUIRK_SWSUP_SIDLE_ACT BIT(12)
#define SYSC_QUIRK_SWSUP_SIDLE BIT(11)
@ -125,9 +130,16 @@ struct ti_sysc_module_data {
};
struct device;
struct clk;
struct ti_sysc_platform_data {
struct of_dev_auxdata *auxdata;
int (*init_clockdomain)(struct device *dev, struct clk *fck,
struct clk *ick, struct ti_sysc_cookie *cookie);
void (*clkdm_deny_idle)(struct device *dev,
const struct ti_sysc_cookie *cookie);
void (*clkdm_allow_idle)(struct device *dev,
const struct ti_sysc_cookie *cookie);
int (*init_module)(struct device *dev,
const struct ti_sysc_module_data *data,
struct ti_sysc_cookie *cookie);


@ -144,6 +144,7 @@ struct scmi_power_ops {
struct scmi_sensor_info {
u32 id;
u8 type;
s8 scale;
char name[SCMI_MAX_STR_SIZE];
};
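The new scale member reports a power-of-ten multiplier for raw sensor readings, so a raw value R with scale s represents R * 10^s. A hedged sketch of how a consumer might normalize a reading (helper name illustrative, semantics assumed from the SCMI sensor protocol):

#include <linux/math64.h>
#include <linux/types.h>

static s64 example_scmi_apply_scale(s64 raw, s8 scale)
{
	/* positive scale multiplies, negative scale divides */
	while (scale > 0) {
		raw *= 10;
		scale--;
	}
	while (scale < 0) {
		raw = div_s64(raw, 10);
		scale++;
	}
	return raw;
}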


@ -241,12 +241,254 @@ struct ti_sci_rm_irq_ops {
u16 global_event, u8 vint_status_bit);
};
/* RA config.addr_lo parameter is valid for RM ring configure TI_SCI message */
#define TI_SCI_MSG_VALUE_RM_RING_ADDR_LO_VALID BIT(0)
/* RA config.addr_hi parameter is valid for RM ring configure TI_SCI message */
#define TI_SCI_MSG_VALUE_RM_RING_ADDR_HI_VALID BIT(1)
/* RA config.count parameter is valid for RM ring configure TI_SCI message */
#define TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID BIT(2)
/* RA config.mode parameter is valid for RM ring configure TI_SCI message */
#define TI_SCI_MSG_VALUE_RM_RING_MODE_VALID BIT(3)
/* RA config.size parameter is valid for RM ring configure TI_SCI message */
#define TI_SCI_MSG_VALUE_RM_RING_SIZE_VALID BIT(4)
/* RA config.order_id parameter is valid for RM ring configure TISCI message */
#define TI_SCI_MSG_VALUE_RM_RING_ORDER_ID_VALID BIT(5)
#define TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER \
(TI_SCI_MSG_VALUE_RM_RING_ADDR_LO_VALID | \
TI_SCI_MSG_VALUE_RM_RING_ADDR_HI_VALID | \
TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID | \
TI_SCI_MSG_VALUE_RM_RING_MODE_VALID | \
TI_SCI_MSG_VALUE_RM_RING_SIZE_VALID)
/**
* struct ti_sci_rm_ringacc_ops - Ring Accelerator Management operations
* @config: configure the SoC Navigator Subsystem Ring Accelerator ring
* @get_config: get the SoC Navigator Subsystem Ring Accelerator ring
* configuration
*/
struct ti_sci_rm_ringacc_ops {
int (*config)(const struct ti_sci_handle *handle,
u32 valid_params, u16 nav_id, u16 index,
u32 addr_lo, u32 addr_hi, u32 count, u8 mode,
u8 size, u8 order_id
);
int (*get_config)(const struct ti_sci_handle *handle,
u32 nav_id, u32 index, u8 *mode,
u32 *addr_lo, u32 *addr_hi, u32 *count,
u8 *size, u8 *order_id);
};
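A hedged usage sketch of the new ring op (nav_id, ring index, base address, and entry count are illustrative, and mode 0 is assumed to select a plain exposed ring):

#include <linux/soc/ti/ti_sci_protocol.h>

static int example_ring_setup(const struct ti_sci_handle *handle)
{
	/* configure ring 10 of Navigator instance 0 with 128 entries,
	 * leaving order_id untouched via TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER */
	return handle->ops.rm_ring_ops.config(handle,
					      TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER,
					      0, 10, 0x9c000000, 0x0,
					      128, 0, 0, 0);
}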
/**
* struct ti_sci_rm_psil_ops - PSI-L thread operations
* @pair: pair PSI-L source thread to a destination thread.
* If the src_thread is mapped to UDMA tchan, the corresponding channel's
* TCHAN_THRD_ID register is updated.
* If the dst_thread is mapped to UDMA rchan, the corresponding channel's
* RCHAN_THRD_ID register is updated.
* @unpair: unpair PSI-L source thread from a destination thread.
* If the src_thread is mapped to UDMA tchan, the corresponding channel's
* TCHAN_THRD_ID register is cleared.
* If the dst_thread is mapped to UDMA rchan, the corresponding channel's
* RCHAN_THRD_ID register is cleared.
*/
struct ti_sci_rm_psil_ops {
int (*pair)(const struct ti_sci_handle *handle, u32 nav_id,
u32 src_thread, u32 dst_thread);
int (*unpair)(const struct ti_sci_handle *handle, u32 nav_id,
u32 src_thread, u32 dst_thread);
};
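A hedged sketch of pairing and unpairing around a transfer (thread IDs are illustrative):

#include <linux/soc/ti/ti_sci_protocol.h>

static int example_psil_link(const struct ti_sci_handle *handle,
			     u32 nav_id, u32 src, u32 dst)
{
	int ret;

	ret = handle->ops.rm_psil_ops.pair(handle, nav_id, src, dst);
	if (ret)
		return ret;

	/* ... run the DMA transfer ... */

	return handle->ops.rm_psil_ops.unpair(handle, nav_id, src, dst);
}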
/* UDMAP channel types */
#define TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR 2
#define TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR_SB 3 /* RX only */
#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR 10
#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBVR 11
#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR 12
#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBVR 13
#define TI_SCI_RM_UDMAP_RX_FLOW_DESC_HOST 0
#define TI_SCI_RM_UDMAP_RX_FLOW_DESC_MONO 2
#define TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES 1
#define TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_128_BYTES 2
#define TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES 3
/* UDMAP TX/RX channel valid_params common declarations */
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID BIT(0)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID BIT(1)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID BIT(2)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID BIT(3)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID BIT(4)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_PRIORITY_VALID BIT(5)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_QOS_VALID BIT(6)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_ORDER_ID_VALID BIT(7)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_SCHED_PRIORITY_VALID BIT(8)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_BURST_SIZE_VALID BIT(14)
/**
* Configures a Navigator Subsystem UDMAP transmit channel
*
* Configures a Navigator Subsystem UDMAP transmit channel's registers.
* See @ti_sci_msg_rm_udmap_tx_ch_cfg_req
*/
struct ti_sci_msg_rm_udmap_tx_ch_cfg {
u32 valid_params;
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID BIT(9)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID BIT(10)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID BIT(11)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_CREDIT_COUNT_VALID BIT(12)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FDEPTH_VALID BIT(13)
u16 nav_id;
u16 index;
u8 tx_pause_on_err;
u8 tx_filt_einfo;
u8 tx_filt_pswords;
u8 tx_atype;
u8 tx_chan_type;
u8 tx_supr_tdpkt;
u16 tx_fetch_size;
u8 tx_credit_count;
u16 txcq_qnum;
u8 tx_priority;
u8 tx_qos;
u8 tx_orderid;
u16 fdepth;
u8 tx_sched_priority;
u8 tx_burst_size;
};
/**
* Configures a Navigator Subsystem UDMAP receive channel
*
* Configures a Navigator Subsystem UDMAP receive channel's registers.
* See @ti_sci_msg_rm_udmap_rx_ch_cfg_req
*/
struct ti_sci_msg_rm_udmap_rx_ch_cfg {
u32 valid_params;
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID BIT(9)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID BIT(10)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_SHORT_VALID BIT(11)
#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_LONG_VALID BIT(12)
u16 nav_id;
u16 index;
u16 rx_fetch_size;
u16 rxcq_qnum;
u8 rx_priority;
u8 rx_qos;
u8 rx_orderid;
u8 rx_sched_priority;
u16 flowid_start;
u16 flowid_cnt;
u8 rx_pause_on_err;
u8 rx_atype;
u8 rx_chan_type;
u8 rx_ignore_short;
u8 rx_ignore_long;
u8 rx_burst_size;
};
/**
* Configures a Navigator Subsystem UDMAP receive flow
*
* Configures a Navigator Subsystem UDMAP receive flow's registers.
* See @ti_sci_msg_rm_udmap_flow_cfg_req
*/
struct ti_sci_msg_rm_udmap_flow_cfg {
u32 valid_params;
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID BIT(0)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID BIT(1)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID BIT(2)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DESC_TYPE_VALID BIT(3)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SOP_OFFSET_VALID BIT(4)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_QNUM_VALID BIT(5)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_VALID BIT(6)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_VALID BIT(7)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_VALID BIT(8)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_VALID BIT(9)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_SEL_VALID BIT(10)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_SEL_VALID BIT(11)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_SEL_VALID BIT(12)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_SEL_VALID BIT(13)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ0_SZ0_QNUM_VALID BIT(14)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ1_QNUM_VALID BIT(15)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ2_QNUM_VALID BIT(16)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ3_QNUM_VALID BIT(17)
#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PS_LOCATION_VALID BIT(18)
u16 nav_id;
u16 flow_index;
u8 rx_einfo_present;
u8 rx_psinfo_present;
u8 rx_error_handling;
u8 rx_desc_type;
u16 rx_sop_offset;
u16 rx_dest_qnum;
u8 rx_src_tag_hi;
u8 rx_src_tag_lo;
u8 rx_dest_tag_hi;
u8 rx_dest_tag_lo;
u8 rx_src_tag_hi_sel;
u8 rx_src_tag_lo_sel;
u8 rx_dest_tag_hi_sel;
u8 rx_dest_tag_lo_sel;
u16 rx_fdq0_sz0_qnum;
u16 rx_fdq1_qnum;
u16 rx_fdq2_qnum;
u16 rx_fdq3_qnum;
u8 rx_ps_location;
};
/**
* struct ti_sci_rm_udmap_ops - UDMA Management operations
* @tx_ch_cfg: configure SoC Navigator Subsystem UDMA transmit channel.
* @rx_ch_cfg: configure SoC Navigator Subsystem UDMA receive channel.
* @rx_flow_cfg: configure SoC Navigator Subsystem UDMA receive flow.
*/
struct ti_sci_rm_udmap_ops {
int (*tx_ch_cfg)(const struct ti_sci_handle *handle,
const struct ti_sci_msg_rm_udmap_tx_ch_cfg *params);
int (*rx_ch_cfg)(const struct ti_sci_handle *handle,
const struct ti_sci_msg_rm_udmap_rx_ch_cfg *params);
int (*rx_flow_cfg)(const struct ti_sci_handle *handle,
const struct ti_sci_msg_rm_udmap_flow_cfg *params);
};
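A hedged sketch of a minimal transmit-channel setup: fill only the fields of interest and mark exactly those as valid (all numeric values are illustrative); rx_ch_cfg and rx_flow_cfg follow the same fill-and-mark-valid pattern:

#include <linux/soc/ti/ti_sci_protocol.h>

static int example_udmap_tx_setup(const struct ti_sci_handle *handle)
{
	struct ti_sci_msg_rm_udmap_tx_ch_cfg cfg = {
		.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID |
				TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
				TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID,
		.nav_id = 0,
		.index = 1,
		.tx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR,
		.tx_fetch_size = 16,
		.txcq_qnum = 96,
	};

	return handle->ops.rm_udmap_ops.tx_ch_cfg(handle, &cfg);
}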
/**
* struct ti_sci_proc_ops - Processor Control operations
* @request: Request to control a physical processor. The requesting host
* should be in the processor access list
* @release: Relinquish a physical processor control
* @handover: Handover a physical processor control to another host
* in the permitted list
* @set_config: Set base configuration of a processor
* @set_control: Setup limited control flags in specific cases
* @get_status: Get the state of physical processor
*
* NOTE: The following parameters are generic in nature for all these ops,
* -handle: Pointer to TI SCI handle as retrieved by *ti_sci_get_handle
* -pid: Processor ID
* -hid: Host ID
*/
struct ti_sci_proc_ops {
int (*request)(const struct ti_sci_handle *handle, u8 pid);
int (*release)(const struct ti_sci_handle *handle, u8 pid);
int (*handover)(const struct ti_sci_handle *handle, u8 pid, u8 hid);
int (*set_config)(const struct ti_sci_handle *handle, u8 pid,
u64 boot_vector, u32 cfg_set, u32 cfg_clr);
int (*set_control)(const struct ti_sci_handle *handle, u8 pid,
u32 ctrl_set, u32 ctrl_clr);
int (*get_status)(const struct ti_sci_handle *handle, u8 pid,
u64 *boot_vector, u32 *cfg_flags, u32 *ctrl_flags,
u32 *status_flags);
};
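A hedged sketch tying the processor-control ops together (the pid and boot vector are illustrative):

#include <linux/soc/ti/ti_sci_protocol.h>

static int example_proc_boot(const struct ti_sci_handle *handle, u8 pid)
{
	const struct ti_sci_proc_ops *pops = &handle->ops.proc_ops;
	u64 boot_vector;
	u32 cfg, ctrl, status;
	int ret;

	ret = pops->request(handle, pid);
	if (ret)
		return ret;

	ret = pops->set_config(handle, pid, 0x80000000, 0, 0);
	if (ret)
		return ret;

	return pops->get_status(handle, pid, &boot_vector, &cfg, &ctrl,
				&status);
}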
/**
* struct ti_sci_ops - Function support for TI SCI
* @dev_ops: Device specific operations
* @clk_ops: Clock specific operations
* @rm_core_ops: Resource management core operations.
* @rm_irq_ops: IRQ management specific operations
* @rm_ring_ops: Ring Accelerator management specific operations
* @rm_psil_ops: PSI-L thread management specific operations
* @rm_udmap_ops: UDMAP channel and flow management specific operations
* @proc_ops: Processor Control specific operations
*/
struct ti_sci_ops {
struct ti_sci_core_ops core_ops;
@ -254,6 +496,10 @@ struct ti_sci_ops {
struct ti_sci_clk_ops clk_ops;
struct ti_sci_rm_core_ops rm_core_ops;
struct ti_sci_rm_irq_ops rm_irq_ops;
struct ti_sci_rm_ringacc_ops rm_ring_ops;
struct ti_sci_rm_psil_ops rm_psil_ops;
struct ti_sci_rm_udmap_ops rm_udmap_ops;
struct ti_sci_proc_ops proc_ops;
};
/**


@ -133,5 +133,13 @@ int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num);
* failed to probe or 0 if the bman driver has not probed yet.
*/
int bman_is_probed(void);
/**
* bman_portals_probed - Check if all cpu bound bman portals are probed
*
* Returns 1 if all the required cpu-bound bman portals have successfully
* probed, -1 if probe errors appeared, or 0 if the bman portals have not
* yet finished probing.
*/
int bman_portals_probed(void);
#endif /* __FSL_BMAN_H */
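A hedged consumer-side sketch built on the documented return values (the qman_portals_probed() helper below mirrors it):

#include <linux/errno.h>
#include <soc/fsl/bman.h>

static int example_check_portals(void)
{
	int ret = bman_portals_probed();

	if (ret == 0)
		return -EPROBE_DEFER;	/* portals still probing */
	if (ret < 0)
		return -ENODEV;		/* a portal failed to probe */

	return 0;			/* all cpu-bound portals are ready */
}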


@ -1194,6 +1194,15 @@ int qman_release_cgrid(u32 id);
*/
int qman_is_probed(void);
/**
* qman_portals_probed - Check if all cpu bound qman portals are probed
*
* Returns 1 if all the required cpu-bound qman portals have successfully
* probed, -1 if probe errors appeared, or 0 if the qman portals have not
* yet finished probing.
*/
int qman_portals_probed(void);
/**
* qman_dqrr_get_ithresh - Get coalesce interrupt threshold
* @portal: portal to get the value for


@ -531,14 +531,6 @@ config LRU_CACHE
config CLZ_TAB
bool
config DDR
bool "JEDEC DDR data"
help
Data from JEDEC specs for DDR SDRAM memories,
particularly the AC timing parameters and addressing
information. This data is useful for drivers handling
DDR SDRAM controllers.
config IRQ_POLL
bool "IRQ polling library"
help


@ -209,8 +209,6 @@ obj-$(CONFIG_SIGNATURE) += digsig.o
lib-$(CONFIG_CLZ_TAB) += clz_tab.o
obj-$(CONFIG_DDR) += jedec_ddr_data.o
obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o