From 331b26080961f0289c3a8a8e5e65f6524b23be19 Mon Sep 17 00:00:00 2001
From: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
Date: Tue, 7 Apr 2015 23:24:55 -0400
Subject: [PATCH 198/226] staging: fsl-mc: dpio services driver
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This commit is a squash of the cumulative dpio services patches
in the sdk 2.0 kernel as of 3/7/2016.

staging: fsl-mc: dpio: initial implementation of dpio services

* Port from kernel 3.16 to 3.19
* upgrade to match MC fw 7.0.0
* return -EPROBE_DEFER if fsl_mc_portal_allocate() fails.
* enable DPIO interrupt support
* implement service FQDAN handling
* DPIO service selects DPIO objects using crude algorithms for now; we
  will look to make this smarter later on.
* Locks all DPIO ops that aren't innately lockless. Smarter selection
  logic may allow locking to be relaxed eventually.
* Portable QBMan driver source (and low-level MC flib code for DPIO) is
  included and encapsulated within the DPIO driver.

Signed-off-by: Geoff Thorpe <Geoff.Thorpe@freescale.com>
Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
Signed-off-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
[Stuart: resolved merge conflicts]
Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>

dpio: Use locks when querying fq state

merged from patch in 3.19-bringup branch.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Signed-off-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
Change-Id: Ia4d09f8a0cf4d8a4a2aa1cb39be789c34425286d
Reviewed-on: http://git.am.freescale.net:8181/34707
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

qbman: Fix potential race in VDQCR handling

Remove the atomic_read() check of the VDQCR busy marker. This check was
racy: the flag could be incorrectly cleared if it was checked while
another thread was starting a pull command. The check is unneeded since
we can determine the owner of the outstanding pull command through
other means.

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: Icc64577c0a4ce6dadef208975e980adfc6796c86
Reviewed-on: http://git.am.freescale.net:8181/34705
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio: Fix IRQ handler and remove useless spinlock

The IRQ handler for a threaded IRQ requires two parts: initially the
handler should check status and inhibit the IRQ, then the threaded
portion should process and re-enable it.

Also remove a spinlock that was redundant with the QMan driver, and a
debug check that could trigger under a race condition.

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Signed-off-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
Change-Id: I64926583af0be954228de94ae354fa005c8ec88a
Reviewed-on: http://git.am.freescale.net:8181/34706
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: fsl-mc: dpio: Implement polling if IRQ not available

Temporarily add a polling mode to DPIO for the case where IRQ
registration fails.

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: Iebbd488fd14dd9878ef846e40f3ebcbcd0eb1e80
Reviewed-on: http://git.am.freescale.net:8181/34775
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl-mc-dpio: Fix to make this work without interrupt

Some additional fixes to make the dpio driver work in poll mode.
This is needed for direct assignment to a KVM guest.

Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Change-Id: Icf66b8c0c7f7e1610118f78396534c067f594934
Reviewed-on: http://git.am.freescale.net:8181/35333
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl-mc-dpio: Make QBMan token tracking internal

Previously the QBMan portal code required the caller to properly set
and check a token value used by the driver to detect when the QMan
hardware had completed a dequeue. This patch simplifies the driver
interface by dealing with token values internally. The driver now sets
the token value to 0 once it has dequeued a frame, while a token value
of 1 indicates the HW has completed the dequeue but SW has not consumed
the frame yet.

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: If94d9728b0faa0fd79b47108f5cb05a425b89c18
Reviewed-on: http://git.am.freescale.net:8181/35433
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl-mc-dpio: Distribute DPIO IRQs among cores

Configure the DPIO IRQ affinities across all available cores

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: Ib45968a070460b7e9410bfe6067b20ecd3524c54
Reviewed-on: http://git.am.freescale.net:8181/35540
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio/qbman: add flush after finishing cena write

Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Change-Id: I19537f101f7f5b443d60c0ad0e5d96c1dc302223
Reviewed-on: http://git.am.freescale.net:8181/35854
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio/qbman: rename qbman_dq_entry to qbman_result

Currently qbman_dq_entry is used both for dequeue results (in DQRR and
memory) and for notifications (in DQRR and memory). It doesn't make
sense to have dq_entry in the name for notifications that have nothing
to do with dequeues, so we rename it to qbman_result, which is
meaningful in both cases.

Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Change-Id: I62b3e729c571a1195e8802a9fab3fca97a14eae4
Reviewed-on: http://git.am.freescale.net:8181/35535
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio/qbman: add APIs to parse BPSCN and CGCU

BPSCN and CGCU are notifications which can only be written to memory.
We need to consider the host endianness while parsing these
notifications. Also modify the check of FQRN/CSCN_MEM with the same
consideration.

Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Change-Id: I572e0aa126107aed40e1ce326d5df7956882a939
Reviewed-on: http://git.am.freescale.net:8181/35536
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio/qbman: remove EXPORT_SYMBOL for qbman APIs

because they are only used by dpio.

Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Change-Id: I12e7b81c2d32f3c7b3df9fd73b742b1b675f4b8b
Reviewed-on: http://git.am.freescale.net:8181/35537
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio/qbman: add invalidate and prefetch support

for cachable memory access.
Also remove the redundant memory barriers.

Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Change-Id: I452a768278d1c5ef37e5741e9b011d725cb57b30
Reviewed-on: http://git.am.freescale.net:8181/35873
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpio-driver: Fix qman-portal interrupt masking in poll mode

The DPIO driver should mask qman-portal interrupt reporting when
working in poll mode. The has_irq flag is used for this, but interrupt
masking was happening before it was decided whether the system would
work in poll mode or interrupt mode.

This patch fixes the issue so that irq masking/enabling happens after
irq/poll mode is decided.

Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Change-Id: I44de07b6142e80b3daea45e7d51a2d2799b2ed8d
Reviewed-on: http://git.am.freescale.net:8181/37100
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
(cherry picked from commit 3579244250dcb287a0fe58bcc3b3780076d040a2)

dpio: Add a function to query buffer pool depth

Add a debug function that allows users to query the number
of buffers in a specific buffer pool

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: Ie9a5f2e86d6a04ae61868bcc807121780c53cf6c
Reviewed-on: http://git.am.freescale.net:8181/36069
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
(cherry picked from commit 3c749d860592f62f6b219232580ca35fd1075337)

dpio: Use normal cachable non-shareable memory for qbman cena

QBMan SWP CENA portal memory requires the memory to be cacheable,
and non-shareable.

Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
Change-Id: I1c01cffe9ff2503fea2396d7cc761508f6e1ca85
Reviewed-on: http://git.am.freescale.net:8181/35487
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
(cherry picked from commit 2a7e1ede7e155d9219006999893912e0b029ce4c)

fsl-dpio: Process frames in IRQ context

Stop using threaded IRQs and move back to hardirq top-halves.
This is the first patch of a small series adapting the DPIO and Ethernet
code to these changes.

Signed-off-by: Roy Pledge <roy.pledge@freescale.com>
Tested-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
[Stuart: split out dpaa-eth part separately]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl-dpio: Fast DPIO object selection

The DPIO service code had a couple of problems with performance impact:
  - The DPIO service object was protected by a global lock, within
    functions called from the fast datapath on multiple CPUs.
  - The DPIO service code would iterate unnecessarily through its linked
    list, even though most of the time it is looking for CPU-bound objects.

Add a fast-access array pointing to the same dpaa_io objects as the DPIO
service's linked list, used in non-preemptible contexts.
Avoid list access/reordering if a specific CPU was requested. This
greatly limits contention on the global service lock.
Make explicit calls for per-CPU DPIO service objects if the current
context permits (which is the case on most of the Ethernet fastpath).

These changes incidentally fix a functional problem, too: according to
the specification of struct dpaa_io_notification_ctx, registration
should fail if the 'desired_cpu' request cannot be honoured. Instead,
dpaa_io_service_register() would keep searching for non-affine DPIO
objects, even when that was not requested.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I2dd78bc56179f97d3fd78052a653456e5f89ed82
Reviewed-on: http://git.am.freescale.net:8181/37689
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

DPIO: Implement a missing lock in DPIO

Implement missing DPIO service notification deregistration lock

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: Ida9a4d00cc3a66bc215c260a8df2b197366736f7
Reviewed-on: http://git.am.freescale.net:8181/38497
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: fsl-mc: migrated dpio flibs for MC fw 8.0.0

Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl_qbman: Ensure SDQCR is only enabled if a channel is selected

QMan HW considers an SDQCR command that does not indicate any channels
to dequeue from to be an error. This change ensures that a NULL command
is set in the case where no channels are selected for dequeue.

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
Change-Id: I8861304881885db00df4a29d760848990d706c70
Reviewed-on: http://git.am.freescale.net:8181/38498
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

flib: dpio: Fix compiler warning.

Gcc takes the credit here.
To be merged with other fixes on this branch.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: If81f35ab3e8061aae1e03b72ab16a4c1dc390c3a
Reviewed-on: http://git.am.freescale.net:8181/39148
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: fsl-mc: dpio: remove programing of MSIs in dpio driver

this is now handled in the bus driver

Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl_qbman: Enable CDAN generation

Enable CDAN notification registration in both QBMan and DPIO

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

fsl_dpio: Implement API to dequeue from a channel

Implement an API that allows users to dequeue from a channel

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

fsl-dpio: Change dequeue command type

For now CDANs don't work with priority precedence.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

fsl-dpio: Export FQD context getter function

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

fsl_dpio: Fix DPIO polling thread logic

Fix the DPIO polling thread logic and ensure the thread
is not parked

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
[Stuart: fixed typo in comment]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl-dpio,qbman: Export functions

A few of the functions used by the Ethernet driver were not yet
exported. They are needed in order to compile the Eth driver as a
module.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl_qbman: Use proper accessors when reading QBMan portals

Use accessors that properly byteswap when accessing QBMan portals

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

fsl_qbman: Fix encoding of 64-bit values

The QBMan driver encodes commands in 32-bit host endianness, then
converts to little endian before sending to HW. This means 64-bit
values need to be encoded so that they will be correctly swapped when
the commands are written to HW.

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

dpaa_fd: Add functions for SG entries endianness conversions

Scatter-gather entries are little endian at the hardware level.
Add functions for converting the SG entry structure to CPU
endianness to avoid incorrect behaviour on BE kernels.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

fsl_dpaa: update header files with kernel-doc format

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

qbman: update header files to follow kernel-doc format

Plus rename orp_id to opr_id based on the BG.

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

fsl/dpio: rename ldpaa to dpaa2

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
(Stuart: removed eth part out into separate patch)
Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>

qbman_test: update qbman_test

- Update to sync with latest change in qbman driver.
- Add bpscn test case

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

fsl-dpio: add FLE (Frame List Entry) for FMT=dpaa_fd_list support

Signed-off-by: Horia Geantă <horia.geanta@freescale.com>

fsl-dpio: add accessors for FD[FRC]

Signed-off-by: Horia Geantă <horia.geanta@freescale.com>

fsl-dpio: add accessors for FD[FLC]

Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
(Stuart: corrected typo in subject)
Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>

fsl/dpio: dpaa2_fd: Add the comments for newly added APIs.

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
[Stuart: added fsl/dpio prefix on commit subject]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

fsl-dpio: rename dpaa_* structure to dpaa2_*

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
(Stuart: split eth and caam parts out into separate patches)
Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>

fsl-dpio: update the header file with more description in comments

plus fix some typos.

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

fsl-dpio: fix Klocwork issues.

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

fsl_dpio: Fix kernel doc issues and add an overview

Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

fsl-dpio,qbman: Prefer affine portal to acquire/release buffers

The FQ enqueue/dequeue DPIO code attempts to select an affine QBMan
portal in order to minimize contention (under the assumption that most
of the calling code runs in affine contexts). Doing the same now for
buffer acquire/release.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

fsl-dpio: prefer affine QBMan portal in dpaa2_io_service_enqueue_fq

Commit 7b057d9bc3d31 ("fsl-dpio: Fast DPIO object selection")
took care of dpaa2_io_service_enqueue_qd, missing
dpaa2_io_service_enqueue_fq.

Cc: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: Horia Geantă <horia.geanta@freescale.com>

fsl/dpio: update the dpio flib files from mc9.0.0 release

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

fsl/dpio: pass qman_version from dpio attributes to swp desc

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

fsl/dpio/qbman: Use qman version to determine dqrr size

Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>

fsl-dpio: Fix dequeue type enum values

enum qbman_pull_type_e did not follow the volatile dequeue command
specification, for which VERB=b'00 is a valid value (but of no
interest to us).

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

fsl-dpio: Volatile dequeue with priority precedence

Use priority precedence to do volatile dequeue from channels, rather
than active FQ precedence.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>

Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
---
 drivers/staging/fsl-mc/bus/Kconfig                 |   16 +
 drivers/staging/fsl-mc/bus/Makefile                |    3 +
 drivers/staging/fsl-mc/bus/dpio/Makefile           |    9 +
 drivers/staging/fsl-mc/bus/dpio/dpio-drv.c         |  405 +++++++
 drivers/staging/fsl-mc/bus/dpio/dpio-drv.h         |   33 +
 drivers/staging/fsl-mc/bus/dpio/dpio.c             |  468 ++++++++
 drivers/staging/fsl-mc/bus/dpio/dpio_service.c     |  801 +++++++++++++
 drivers/staging/fsl-mc/bus/dpio/fsl_dpio.h         |  460 ++++++++
 drivers/staging/fsl-mc/bus/dpio/fsl_dpio_cmd.h     |  184 +++
 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_base.h   |  123 ++
 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_portal.h |  753 ++++++++++++
 drivers/staging/fsl-mc/bus/dpio/qbman_debug.c      |  846 ++++++++++++++
 drivers/staging/fsl-mc/bus/dpio/qbman_debug.h      |  136 +++
 drivers/staging/fsl-mc/bus/dpio/qbman_portal.c     | 1212 ++++++++++++++++++++
 drivers/staging/fsl-mc/bus/dpio/qbman_portal.h     |  261 +++++
 drivers/staging/fsl-mc/bus/dpio/qbman_private.h    |  173 +++
 drivers/staging/fsl-mc/bus/dpio/qbman_sys.h        |  307 +++++
 drivers/staging/fsl-mc/bus/dpio/qbman_sys_decl.h   |   86 ++
 drivers/staging/fsl-mc/bus/dpio/qbman_test.c       |  664 +++++++++++
 drivers/staging/fsl-mc/include/fsl_dpaa2_fd.h      |  774 +++++++++++++
 drivers/staging/fsl-mc/include/fsl_dpaa2_io.h      |  619 ++++++++++
 21 files changed, 8333 insertions(+)
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/Makefile
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio-drv.c
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio-drv.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio.c
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio_service.c
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_dpio.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_dpio_cmd.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_base.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_portal.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_debug.c
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_debug.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_portal.c
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_portal.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_private.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_sys.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_sys_decl.h
 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_test.c
 create mode 100644 drivers/staging/fsl-mc/include/fsl_dpaa2_fd.h
 create mode 100644 drivers/staging/fsl-mc/include/fsl_dpaa2_io.h

--- a/drivers/staging/fsl-mc/bus/Kconfig
+++ b/drivers/staging/fsl-mc/bus/Kconfig
@@ -28,3 +28,19 @@ config FSL_MC_RESTOOL
         help
           Driver that provides kernel support for the Freescale Management
 	  Complex resource manager user-space tool.
+
+config FSL_MC_DPIO
+	tristate "Freescale Data Path I/O (DPIO) driver"
+	depends on FSL_MC_BUS
+	help
+	  Driver for Freescale Data Path I/O (DPIO) devices.
+	  A DPIO device provides queue and buffer management facilities
+	  for software to interact with other Data Path devices. This
+	  driver does not expose the DPIO device individually, but
+	  groups them under a service layer API.
+
+config FSL_QBMAN_DEBUG
+	tristate "Freescale QBMAN Debug APIs"
+	depends on FSL_MC_DPIO
+	help
+	  QBMan debug assistant APIs.
--- a/drivers/staging/fsl-mc/bus/Makefile
+++ b/drivers/staging/fsl-mc/bus/Makefile
@@ -21,3 +21,6 @@ mc-bus-driver-objs := mc-bus.o \
 
 # MC restool kernel support
 obj-$(CONFIG_FSL_MC_RESTOOL) += mc-restool.o
+
+# MC DPIO driver
+obj-$(CONFIG_FSL_MC_DPIO) += dpio/
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/Makefile
@@ -0,0 +1,9 @@
+#
+# Freescale DPIO driver
+#
+
+obj-$(CONFIG_FSL_MC_BUS) += fsl-dpio-drv.o
+
+fsl-dpio-drv-objs := dpio-drv.o dpio_service.o dpio.o qbman_portal.o
+
+obj-$(CONFIG_FSL_QBMAN_DEBUG) += qbman_debug.o
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/dpio-drv.c
@@ -0,0 +1,405 @@
+/* Copyright 2014 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/msi.h>
+#include <linux/dma-mapping.h>
+#include <linux/kthread.h>
+#include <linux/delay.h>
+
+#include "../../include/mc.h"
+#include "../../include/fsl_dpaa2_io.h"
+
+#include "fsl_qbman_portal.h"
+#include "fsl_dpio.h"
+#include "fsl_dpio_cmd.h"
+
+#include "dpio-drv.h"
+
+#define DPIO_DESCRIPTION "DPIO Driver"
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Freescale Semiconductor, Inc");
+MODULE_DESCRIPTION(DPIO_DESCRIPTION);
+
+#define MAX_DPIO_IRQ_NAME 16 /* Big enough for "FSL DPIO %d" */
+
+struct dpio_priv {
+	struct dpaa2_io *io;
+	char irq_name[MAX_DPIO_IRQ_NAME];
+	struct task_struct *thread;
+};
+
+static int dpio_thread(void *data)
+{
+	struct dpaa2_io *io = data;
+
+	while (!kthread_should_stop()) {
+		int err = dpaa2_io_poll(io);
+
+		if (err) {
+			pr_err("dpaa2_io_poll() failed\n");
+			return err;
+		}
+		msleep(50);
+	}
+	return 0;
+}
+
+static irqreturn_t dpio_irq_handler(int irq_num, void *arg)
+{
+	struct device *dev = (struct device *)arg;
+	struct dpio_priv *priv = dev_get_drvdata(dev);
+
+	return dpaa2_io_irq(priv->io);
+}
+
+static void unregister_dpio_irq_handlers(struct fsl_mc_device *ls_dev)
+{
+	int i;
+	struct fsl_mc_device_irq *irq;
+	int irq_count = ls_dev->obj_desc.irq_count;
+
+	for (i = 0; i < irq_count; i++) {
+		irq = ls_dev->irqs[i];
+		devm_free_irq(&ls_dev->dev, irq->msi_desc->irq, &ls_dev->dev);
+	}
+}
+
+static int register_dpio_irq_handlers(struct fsl_mc_device *ls_dev, int cpu)
+{
+	struct dpio_priv *priv;
+	unsigned int i;
+	int error;
+	struct fsl_mc_device_irq *irq;
+	unsigned int num_irq_handlers_registered = 0;
+	int irq_count = ls_dev->obj_desc.irq_count;
+	cpumask_t mask;
+
+	priv = dev_get_drvdata(&ls_dev->dev);
+
+	if (WARN_ON(irq_count != 1))
+		return -EINVAL;
+
+	for (i = 0; i < irq_count; i++) {
+		irq = ls_dev->irqs[i];
+		error = devm_request_irq(&ls_dev->dev,
+					 irq->msi_desc->irq,
+					 dpio_irq_handler,
+					 0,
+					 priv->irq_name,
+					 &ls_dev->dev);
+		if (error < 0) {
+			dev_err(&ls_dev->dev,
+				"devm_request_irq() failed: %d\n",
+				error);
+			goto error_unregister_irq_handlers;
+		}
+
+		/* Set the IRQ affinity */
+		cpumask_clear(&mask);
+		cpumask_set_cpu(cpu, &mask);
+		if (irq_set_affinity(irq->msi_desc->irq, &mask))
+			pr_err("irq_set_affinity failed irq %d cpu %d\n",
+			       irq->msi_desc->irq, cpu);
+
+		num_irq_handlers_registered++;
+	}
+
+	return 0;
+
+error_unregister_irq_handlers:
+	for (i = 0; i < num_irq_handlers_registered; i++) {
+		irq = ls_dev->irqs[i];
+		devm_free_irq(&ls_dev->dev, irq->msi_desc->irq,
+			      &ls_dev->dev);
+	}
+
+	return error;
+}
+
+static int __cold
+dpaa2_dpio_probe(struct fsl_mc_device *ls_dev)
+{
+	struct dpio_attr dpio_attrs;
+	struct dpaa2_io_desc desc;
+	struct dpio_priv *priv;
+	int err = -ENOMEM;
+	struct device *dev = &ls_dev->dev;
+	struct dpaa2_io *defservice;
+	bool irq_allocated = false;
+	static int next_cpu;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		goto err_priv_alloc;
+
+	dev_set_drvdata(dev, priv);
+
+	err = fsl_mc_portal_allocate(ls_dev, 0, &ls_dev->mc_io);
+	if (err) {
+		dev_err(dev, "MC portal allocation failed\n");
+		err = -EPROBE_DEFER;
+		goto err_mcportal;
+	}
+
+	err = dpio_open(ls_dev->mc_io, 0, ls_dev->obj_desc.id,
+			&ls_dev->mc_handle);
+	if (err) {
+		dev_err(dev, "dpio_open() failed\n");
+		goto err_open;
+	}
+
+	err = dpio_get_attributes(ls_dev->mc_io, 0, ls_dev->mc_handle,
+				  &dpio_attrs);
+	if (err) {
+		dev_err(dev, "dpio_get_attributes() failed %d\n", err);
+		goto err_get_attr;
+	}
+	err = dpio_enable(ls_dev->mc_io, 0, ls_dev->mc_handle);
+	if (err) {
+		dev_err(dev, "dpio_enable() failed %d\n", err);
+		goto err_get_attr;
+	}
+	pr_info("ce_paddr=0x%llx, ci_paddr=0x%llx, portalid=%d, prios=%d\n",
+		ls_dev->regions[0].start,
+		ls_dev->regions[1].start,
+		dpio_attrs.qbman_portal_id,
+		dpio_attrs.num_priorities);
+
+	pr_info("ce_size=0x%llx, ci_size=0x%llx\n",
+		resource_size(&ls_dev->regions[0]),
+		resource_size(&ls_dev->regions[1]));
+
+	desc.qman_version = dpio_attrs.qbman_version;
+	/* Build DPIO driver object out of raw MC object */
+	desc.receives_notifications = dpio_attrs.num_priorities ? 1 : 0;
+	desc.has_irq = 1;
+	desc.will_poll = 1;
+	desc.has_8prio = dpio_attrs.num_priorities == 8 ? 1 : 0;
+	desc.cpu = next_cpu;
+	desc.stash_affinity = 1; /* TODO: Figure out how to determine
+				    this setting - will we ever have non-affine
+				    portals where we stash to a platform cache? */
+	next_cpu = (next_cpu + 1) % num_active_cpus();
+	desc.dpio_id = ls_dev->obj_desc.id;
+	desc.regs_cena = ioremap_cache_ns(ls_dev->regions[0].start,
+		resource_size(&ls_dev->regions[0]));
+	desc.regs_cinh = ioremap(ls_dev->regions[1].start,
+		resource_size(&ls_dev->regions[1]));
+
+	err = fsl_mc_allocate_irqs(ls_dev);
+	if (err) {
+		dev_err(dev, "DPIO fsl_mc_allocate_irqs failed\n");
+		desc.has_irq = 0;
+	} else {
+		irq_allocated = true;
+
+		snprintf(priv->irq_name, MAX_DPIO_IRQ_NAME, "FSL DPIO %d",
+			 desc.dpio_id);
+
+		err = register_dpio_irq_handlers(ls_dev, desc.cpu);
+		if (err)
+			desc.has_irq = 0;
+	}
+
+	priv->io = dpaa2_io_create(&desc);
+	if (!priv->io) {
+		dev_err(dev, "DPIO setup failed\n");
+		goto err_dpaa2_io_create;
+	}
+
+	/* If no irq then go to poll mode */
+	if (desc.has_irq == 0) {
+		dev_info(dev, "Using polling mode for DPIO %d\n",
+			 desc.dpio_id);
+		/* goto err_register_dpio_irq; */
+		/* TEMP: Start polling if IRQ could not
+		   be registered.  This will go away once
+		   KVM support for MSI is present */
+		if (irq_allocated == true)
+			fsl_mc_free_irqs(ls_dev);
+
+		if (desc.stash_affinity)
+			priv->thread = kthread_create_on_cpu(dpio_thread,
+							     priv->io,
+							     desc.cpu,
+							     "dpio_aff%u");
+		else
+			priv->thread =
+				kthread_create(dpio_thread,
+					       priv->io,
+					       "dpio_non%u",
+					       dpio_attrs.qbman_portal_id);
+		if (IS_ERR(priv->thread)) {
+			dev_err(dev, "DPIO thread failure\n");
+			err = PTR_ERR(priv->thread);
+			goto err_dpaa_thread;
+		}
+		kthread_unpark(priv->thread);
+		wake_up_process(priv->thread);
+	}
+
+	defservice = dpaa2_io_default_service();
+	err = dpaa2_io_service_add(defservice, priv->io);
+	dpaa2_io_down(defservice);
+	if (err) {
+		dev_err(dev, "DPIO add-to-service failed\n");
+		goto err_dpaa2_io_add;
+	}
+
+	dev_info(dev, "dpio: probed object %d\n", ls_dev->obj_desc.id);
+	dev_info(dev, "   receives_notifications = %d\n",
+			desc.receives_notifications);
+	dev_info(dev, "   has_irq = %d\n", desc.has_irq);
+	dpio_close(ls_dev->mc_io, 0, ls_dev->mc_handle);
+	fsl_mc_portal_free(ls_dev->mc_io);
+	return 0;
+
+err_dpaa2_io_add:
+	unregister_dpio_irq_handlers(ls_dev);
+/* TEMP: To be restored once polling is removed
+  err_register_dpio_irq:
+	fsl_mc_free_irqs(ls_dev);
+*/
+err_dpaa_thread:
+err_dpaa2_io_create:
+	dpio_disable(ls_dev->mc_io, 0, ls_dev->mc_handle);
+err_get_attr:
+	dpio_close(ls_dev->mc_io, 0, ls_dev->mc_handle);
+err_open:
+	fsl_mc_portal_free(ls_dev->mc_io);
+err_mcportal:
+	dev_set_drvdata(dev, NULL);
+	devm_kfree(dev, priv);
+err_priv_alloc:
+	return err;
+}
+
+/*
+ * Tear down interrupts for a given DPIO object
+ */
+static void dpio_teardown_irqs(struct fsl_mc_device *ls_dev)
+{
+	/* (void)disable_dpio_irqs(ls_dev); */
+	unregister_dpio_irq_handlers(ls_dev);
+	fsl_mc_free_irqs(ls_dev);
+}
+
+static int __cold
+dpaa2_dpio_remove(struct fsl_mc_device *ls_dev)
+{
+	struct device *dev;
+	struct dpio_priv *priv;
+	int err;
+
+	dev = &ls_dev->dev;
+	priv = dev_get_drvdata(dev);
+
+	/* There is no implementation yet for pulling a DPIO object out of a
+	 * running service (and they're currently always running).
+	 */
+	dev_crit(dev, "DPIO unplugging is broken, the service holds onto it\n");
+
+	if (priv->thread)
+		kthread_stop(priv->thread);
+	else
+		dpio_teardown_irqs(ls_dev);
+
+	err = fsl_mc_portal_allocate(ls_dev, 0, &ls_dev->mc_io);
+	if (err) {
+		dev_err(dev, "MC portal allocation failed\n");
+		goto err_mcportal;
+	}
+
+	err = dpio_open(ls_dev->mc_io, 0, ls_dev->obj_desc.id,
+			&ls_dev->mc_handle);
+	if (err) {
+		dev_err(dev, "dpio_open() failed\n");
+		goto err_open;
+	}
+
+	dev_set_drvdata(dev, NULL);
+	dpaa2_io_down(priv->io);
+
+	err = 0;
+
+	dpio_disable(ls_dev->mc_io, 0, ls_dev->mc_handle);
+	dpio_close(ls_dev->mc_io, 0, ls_dev->mc_handle);
+err_open:
+	fsl_mc_portal_free(ls_dev->mc_io);
+err_mcportal:
+	return err;
+}
+
+static const struct fsl_mc_device_match_id dpaa2_dpio_match_id_table[] = {
+	{
+		.vendor = FSL_MC_VENDOR_FREESCALE,
+		.obj_type = "dpio",
+		.ver_major = DPIO_VER_MAJOR,
+		.ver_minor = DPIO_VER_MINOR
+	},
+	{ .vendor = 0x0 }
+};
+
+static struct fsl_mc_driver dpaa2_dpio_driver = {
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.owner		= THIS_MODULE,
+	},
+	.probe		= dpaa2_dpio_probe,
+	.remove		= dpaa2_dpio_remove,
+	.match_id_table = dpaa2_dpio_match_id_table
+};
+
+static int dpio_driver_init(void)
+{
+	int err;
+
+	err = dpaa2_io_service_driver_init();
+	if (!err) {
+		err = fsl_mc_driver_register(&dpaa2_dpio_driver);
+		if (err)
+			dpaa2_io_service_driver_exit();
+	}
+	return err;
+}
+
+static void dpio_driver_exit(void)
+{
+	fsl_mc_driver_unregister(&dpaa2_dpio_driver);
+	dpaa2_io_service_driver_exit();
+}
+module_init(dpio_driver_init);
+module_exit(dpio_driver_exit);
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/dpio-drv.h
@@ -0,0 +1,33 @@
+/* Copyright 2014 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+int dpaa2_io_service_driver_init(void);
+void dpaa2_io_service_driver_exit(void);
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/dpio.c
@@ -0,0 +1,468 @@
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#include "../../include/mc-sys.h"
+#include "../../include/mc-cmd.h"
+#include "fsl_dpio.h"
+#include "fsl_dpio_cmd.h"
+
+int dpio_open(struct fsl_mc_io *mc_io,
+	      uint32_t cmd_flags,
+	      int dpio_id,
+	      uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	DPIO_CMD_OPEN(cmd, dpio_id);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int dpio_close(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_create(struct fsl_mc_io *mc_io,
+		uint32_t cmd_flags,
+		const struct dpio_cfg *cfg,
+		uint16_t *token)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_CREATE,
+					  cmd_flags,
+					  0);
+	DPIO_CMD_CREATE(cmd, cfg);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+
+	return 0;
+}
+
+int dpio_destroy(struct fsl_mc_io *mc_io,
+		 uint32_t cmd_flags,
+		 uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_DESTROY,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_enable(struct fsl_mc_io *mc_io,
+		uint32_t cmd_flags,
+		uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_disable(struct fsl_mc_io *mc_io,
+		 uint32_t cmd_flags,
+		 uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_is_enabled(struct fsl_mc_io *mc_io,
+		    uint32_t cmd_flags,
+		    uint16_t token,
+		    int *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_IS_ENABLED, cmd_flags,
+					  token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_IS_ENABLED(cmd, *en);
+
+	return 0;
+}
+
+int dpio_reset(struct fsl_mc_io *mc_io,
+	       uint32_t cmd_flags,
+	       uint16_t token)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_set_irq(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token,
+		 uint8_t		irq_index,
+		 struct dpio_irq_cfg	*irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_IRQ,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_get_irq(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token,
+		 uint8_t		irq_index,
+		 int			*type,
+		 struct dpio_irq_cfg	*irq_cfg)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_GET_IRQ(cmd, irq_index);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_GET_IRQ(cmd, *type, irq_cfg);
+
+	return 0;
+}
+
+int dpio_set_irq_enable(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint8_t en)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_get_irq_enable(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint8_t *en)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_GET_IRQ_ENABLE(cmd, *en);
+
+	return 0;
+}
+
+int dpio_set_irq_mask(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t mask)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_get_irq_mask(struct fsl_mc_io *mc_io,
+		      uint32_t cmd_flags,
+		      uint16_t token,
+		      uint8_t irq_index,
+		      uint32_t *mask)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_GET_IRQ_MASK(cmd, irq_index);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_GET_IRQ_MASK(cmd, *mask);
+
+	return 0;
+}
+
+int dpio_get_irq_status(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			uint8_t irq_index,
+			uint32_t *status)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_GET_IRQ_STATUS(cmd, *status);
+
+	return 0;
+}
+
+int dpio_clear_irq_status(struct fsl_mc_io *mc_io,
+			  uint32_t cmd_flags,
+			  uint16_t token,
+			  uint8_t irq_index,
+			  uint32_t status)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_get_attributes(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			struct dpio_attr *attr)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_GET_ATTR(cmd, attr);
+
+	return 0;
+}
+
+int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t sdest)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_SET_STASHING_DEST(cmd, sdest);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *sdest)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST,
+					  cmd_flags,
+					  token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_GET_STASHING_DEST(cmd, *sdest);
+
+	return 0;
+}
+
+int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
+				    uint32_t cmd_flags,
+				    uint16_t token,
+				    int dpcon_id,
+				    uint8_t *channel_index)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL,
+					  cmd_flags,
+					  token);
+	DPIO_CMD_ADD_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	DPIO_RSP_ADD_STATIC_DEQUEUE_CHANNEL(cmd, *channel_index);
+
+	return 0;
+}
+
+int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
+				       uint32_t cmd_flags,
+				       uint16_t token,
+				       int dpcon_id)
+{
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(
+				DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL,
+				cmd_flags,
+				token);
+	DPIO_CMD_REMOVE_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/dpio_service.c
@@ -0,0 +1,801 @@
+/* Copyright 2014 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in the
+ *	 documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *	 names of its contributors may be used to endorse or promote products
+ *	 derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <linux/types.h>
+#include "fsl_qbman_portal.h"
+#include "../../include/mc.h"
+#include "../../include/fsl_dpaa2_io.h"
+#include "fsl_dpio.h"
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+
+#include "dpio-drv.h"
+#include "qbman_debug.h"
+
+#define UNIMPLEMENTED() pr_err("dpio: %s unimplemented!\n", __func__)
+
+#define MAGIC_SERVICE 0xabcd9876
+#define MAGIC_OBJECT 0x1234fedc
+
+struct dpaa2_io {
+	/* If MAGIC_SERVICE, this is a group of objects, use the 'service' part
+	 * of the union. If MAGIC_OBJECT, use the 'object' part of the union. If
+	 * it's neither, something got corrupted. This is mainly to satisfy
+	 * dpaa2_io_from_registration(), which dereferences a caller-
+	 * instantiated struct and so warrants a bug-checking step - hence the
+	 * magic rather than a boolean.
+	 */
+	unsigned int magic;
+	atomic_t refs;
+	union {
+		struct dpaa2_io_service {
+			spinlock_t lock;
+			struct list_head list;
+			/* for targeted dpaa2_io selection */
+			struct dpaa2_io *objects_by_cpu[NR_CPUS];
+			cpumask_t cpus_notifications;
+			cpumask_t cpus_stashing;
+			int has_nonaffine;
+			/* Slight hack: record the special case of the
+			 * "default service", because that's the case where we
+			 * need to avoid a kfree().
+			 */
+			int is_defservice;
+		} service;
+		struct dpaa2_io_object {
+			struct dpaa2_io_desc dpio_desc;
+			struct qbman_swp_desc swp_desc;
+			struct qbman_swp *swp;
+			/* If the object is part of a service, this is it (and
+			 * 'node' is linked into the service's list).
+			 */
+			struct dpaa2_io *service;
+			struct list_head node;
+			/* Interrupt mask, as used with
+			 * qbman_swp_interrupt_[gs]et_vanish(). This isn't
+			 * locked, because the higher layer is driving all
+			 * "ingress" processing.
+			 */
+			uint32_t irq_mask;
+			/* As part of simplifying assumptions, we provide an
+			 * irq-safe lock for each type of DPIO operation that
+			 * isn't innately lockless. The selection algorithms
+			 * (which are simplified) require this, whereas
+			 * eventually adherence to cpu-affinity will presumably
+			 * relax the locking requirements.
+			 */
+			spinlock_t lock_mgmt_cmd;
+			spinlock_t lock_notifications;
+			struct list_head notifications;
+		} object;
+	};
+};
+
+struct dpaa2_io_store {
+	unsigned int max;
+	dma_addr_t paddr;
+	struct dpaa2_dq *vaddr;
+	void *alloced_addr; /* the actual return from kmalloc as it may
+			       be adjusted for alignment purposes */
+	unsigned int idx; /* position of the next-to-be-returned entry */
+	struct qbman_swp *swp; /* portal used to issue VDQCR */
+	struct device *dev; /* device used for DMA mapping */
+};
+
+static struct dpaa2_io def_serv;
+
+/**********************/
+/* Internal functions */
+/**********************/
+
+static void service_init(struct dpaa2_io *d, int is_defservice)
+{
+	struct dpaa2_io_service *s = &d->service;
+
+	d->magic = MAGIC_SERVICE;
+	atomic_set(&d->refs, 1);
+	spin_lock_init(&s->lock);
+	INIT_LIST_HEAD(&s->list);
+	cpumask_clear(&s->cpus_notifications);
+	cpumask_clear(&s->cpus_stashing);
+	s->has_nonaffine = 0;
+	s->is_defservice = is_defservice;
+}
+
+/* Selection algorithms, stupid ones at that. These are to handle the case where
+ * the given dpaa2_io is a service, by choosing the non-service dpaa2_io within
+ * it to use.
+ */
+static struct dpaa2_io *_service_select_by_cpu_slow(struct dpaa2_io_service *ss,
+						   int cpu)
+{
+	struct dpaa2_io *o;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&ss->lock, irqflags);
+	/* TODO: this is about the dumbest and slowest selection algorithm you
+	 * could imagine. (We're looking for something working first, and
+	 * something efficient second...)
+	 */
+	list_for_each_entry(o, &ss->list, object.node)
+		if (o->object.dpio_desc.cpu == cpu)
+			goto found;
+
+	/* No joy. Try the first nonaffine portal (bleurgh) */
+	if (ss->has_nonaffine)
+		list_for_each_entry(o, &ss->list, object.node)
+			if (!o->object.dpio_desc.stash_affinity)
+				goto found;
+
+	/* No joy. Try the first object. Told you it was horrible. */
+	if (!list_empty(&ss->list))
+		o = list_entry(ss->list.next, struct dpaa2_io, object.node);
+	else
+		o = NULL;
+
+found:
+	spin_unlock_irqrestore(&ss->lock, irqflags);
+	return o;
+}
+
+static struct dpaa2_io *service_select_by_cpu(struct dpaa2_io *d, int cpu)
+{
+	struct dpaa2_io_service *ss;
+	unsigned long irqflags;
+
+	if (!d)
+		d = &def_serv;
+	else if (d->magic == MAGIC_OBJECT)
+		return d;
+	BUG_ON(d->magic != MAGIC_SERVICE);
+
+	ss = &d->service;
+
+	/* If cpu==-1, choose the current cpu, with no guarantees about
+	 * potentially being migrated away.
+	 */
+	if (unlikely(cpu < 0)) {
+		spin_lock_irqsave(&ss->lock, irqflags);
+		cpu = smp_processor_id();
+		spin_unlock_irqrestore(&ss->lock, irqflags);
+
+		return _service_select_by_cpu_slow(ss, cpu);
+	}
+
+	/* If a specific cpu was requested, pick it up immediately */
+	return ss->objects_by_cpu[cpu];
+}
+
+static inline struct dpaa2_io *service_select_any(struct dpaa2_io *d)
+{
+	struct dpaa2_io_service *ss;
+	struct dpaa2_io *o;
+	unsigned long irqflags;
+
+	if (!d)
+		d = &def_serv;
+	else if (d->magic == MAGIC_OBJECT)
+		return d;
+	BUG_ON(d->magic != MAGIC_SERVICE);
+
+	/*
+	 * Lock the service, looking for the first DPIO object in the list,
+	 * ignore everything else about that DPIO, and choose it to do the
+	 * operation! As a post-selection step, move the DPIO to the end of
+	 * the list. It should improve load-balancing a little, although it
+	 * might also incur a performance hit, given that the lock is *global*
+	 * and this may be called on the fast-path...
+	 */
+	ss = &d->service;
+	spin_lock_irqsave(&ss->lock, irqflags);
+	if (!list_empty(&ss->list)) {
+		o = list_entry(ss->list.next, struct dpaa2_io, object.node);
+		list_del(&o->object.node);
+		list_add_tail(&o->object.node, &ss->list);
+	} else {
+		o = NULL;
+	}
+	spin_unlock_irqrestore(&ss->lock, irqflags);
+	return o;
+}
+
+/* If the context is not preemptible, select the service affine to the
+ * current cpu. Otherwise, "select any".
+ */
+static inline struct dpaa2_io *_service_select(struct dpaa2_io *d)
+{
+	struct dpaa2_io *temp = d;
+
+	if (likely(!preemptible())) {
+		d = service_select_by_cpu(d, smp_processor_id());
+		if (likely(d))
+			return d;
+	}
+	return service_select_any(temp);
+}
+
+/**********************/
+/* Exported functions */
+/**********************/
+
+struct dpaa2_io *dpaa2_io_create(const struct dpaa2_io_desc *desc)
+{
+	struct dpaa2_io *ret = kmalloc(sizeof(*ret), GFP_KERNEL);
+	struct dpaa2_io_object *o;
+
+	if (!ret)
+		return NULL;
+	o = &ret->object;
+	ret->magic = MAGIC_OBJECT;
+	atomic_set(&ret->refs, 1);
+	o->dpio_desc = *desc;
+	o->swp_desc.cena_bar = o->dpio_desc.regs_cena;
+	o->swp_desc.cinh_bar = o->dpio_desc.regs_cinh;
+	o->swp_desc.qman_version = o->dpio_desc.qman_version;
+	o->swp = qbman_swp_init(&o->swp_desc);
+	o->service = NULL;
+	if (!o->swp) {
+		kfree(ret);
+		return NULL;
+	}
+	INIT_LIST_HEAD(&o->node);
+	spin_lock_init(&o->lock_mgmt_cmd);
+	spin_lock_init(&o->lock_notifications);
+	INIT_LIST_HEAD(&o->notifications);
+	if (!o->dpio_desc.has_irq) {
+		qbman_swp_interrupt_set_vanish(o->swp, 0xffffffff);
+	} else {
+		/* For now only enable DQRR interrupts */
+		qbman_swp_interrupt_set_trigger(o->swp,
+						QBMAN_SWP_INTERRUPT_DQRI);
+	}
+	qbman_swp_interrupt_clear_status(o->swp, 0xffffffff);
+	if (o->dpio_desc.receives_notifications)
+		qbman_swp_push_set(o->swp, 0, 1);
+	return ret;
+}
+EXPORT_SYMBOL(dpaa2_io_create);
+
+struct dpaa2_io *dpaa2_io_create_service(void)
+{
+	struct dpaa2_io *ret = kmalloc(sizeof(*ret), GFP_KERNEL);
+
+	if (ret)
+		service_init(ret, 0);
+	return ret;
+}
+EXPORT_SYMBOL(dpaa2_io_create_service);
+
+struct dpaa2_io *dpaa2_io_default_service(void)
+{
+	atomic_inc(&def_serv.refs);
+	return &def_serv;
+}
+EXPORT_SYMBOL(dpaa2_io_default_service);
+
+void dpaa2_io_down(struct dpaa2_io *d)
+{
+	if (!atomic_dec_and_test(&d->refs))
+		return;
+	if (d->magic == MAGIC_SERVICE) {
+		BUG_ON(!list_empty(&d->service.list));
+		if (d->service.is_defservice)
+			/* avoid the kfree()! */
+			return;
+	} else {
+		BUG_ON(d->magic != MAGIC_OBJECT);
+		BUG_ON(d->object.service);
+		BUG_ON(!list_empty(&d->object.notifications));
+	}
+	kfree(d);
+}
+EXPORT_SYMBOL(dpaa2_io_down);
+
+int dpaa2_io_service_add(struct dpaa2_io *s, struct dpaa2_io *o)
+{
+	struct dpaa2_io_service *ss = &s->service;
+	struct dpaa2_io_object *oo = &o->object;
+	int res = -EINVAL;
+
+	if ((s->magic != MAGIC_SERVICE) || (o->magic != MAGIC_OBJECT))
+		return res;
+	atomic_inc(&o->refs);
+	atomic_inc(&s->refs);
+	spin_lock(&ss->lock);
+	/* 'obj' must not already be associated with a service */
+	if (!oo->service) {
+		oo->service = s;
+		list_add(&oo->node, &ss->list);
+		if (oo->dpio_desc.receives_notifications) {
+			cpumask_set_cpu(oo->dpio_desc.cpu,
+					&ss->cpus_notifications);
+			/* Update the fast-access array */
+			ss->objects_by_cpu[oo->dpio_desc.cpu] =
+				container_of(oo, struct dpaa2_io, object);
+		}
+		if (oo->dpio_desc.stash_affinity)
+			cpumask_set_cpu(oo->dpio_desc.cpu,
+					&ss->cpus_stashing);
+		else
+			ss->has_nonaffine = 1;
+		/* success */
+		res = 0;
+	}
+	spin_unlock(&ss->lock);
+	if (res) {
+		dpaa2_io_down(s);
+		dpaa2_io_down(o);
+	}
+	return res;
+}
+EXPORT_SYMBOL(dpaa2_io_service_add);
+
+int dpaa2_io_get_descriptor(struct dpaa2_io *obj, struct dpaa2_io_desc *desc)
+{
+	if (obj->magic == MAGIC_SERVICE)
+		return -EINVAL;
+	BUG_ON(obj->magic != MAGIC_OBJECT);
+	*desc = obj->object.dpio_desc;
+	return 0;
+}
+EXPORT_SYMBOL(dpaa2_io_get_descriptor);
+
+#define DPAA_POLL_MAX 32
+
+int dpaa2_io_poll(struct dpaa2_io *obj)
+{
+	const struct dpaa2_dq *dq;
+	struct qbman_swp *swp;
+	int max = 0;
+
+	if (obj->magic != MAGIC_OBJECT)
+		return -EINVAL;
+	swp = obj->object.swp;
+	dq = qbman_swp_dqrr_next(swp);
+	while (dq) {
+		if (qbman_result_is_SCN(dq)) {
+			struct dpaa2_io_notification_ctx *ctx;
+			uint64_t q64;
+
+			q64 = qbman_result_SCN_ctx(dq);
+			ctx = (void *)q64;
+			ctx->cb(ctx);
+		} else {
+			pr_crit("Unrecognised/ignored DQRR entry\n");
+		}
+		qbman_swp_dqrr_consume(swp, dq);
+		++max;
+		if (max > DPAA_POLL_MAX)
+			return 0;
+		dq = qbman_swp_dqrr_next(swp);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(dpaa2_io_poll);
+
+int dpaa2_io_irq(struct dpaa2_io *obj)
+{
+	struct qbman_swp *swp;
+	uint32_t status;
+
+	if (obj->magic != MAGIC_OBJECT)
+		return -EINVAL;
+	swp = obj->object.swp;
+	status = qbman_swp_interrupt_read_status(swp);
+	if (!status)
+		return IRQ_NONE;
+	dpaa2_io_poll(obj);
+	qbman_swp_interrupt_clear_status(swp, status);
+	qbman_swp_interrupt_set_inhibit(swp, 0);
+	return IRQ_HANDLED;
+}
+EXPORT_SYMBOL(dpaa2_io_irq);
+
+int dpaa2_io_pause_poll(struct dpaa2_io *obj)
+{
+	UNIMPLEMENTED();
+	return -EINVAL;
+}
+EXPORT_SYMBOL(dpaa2_io_pause_poll);
+
+int dpaa2_io_resume_poll(struct dpaa2_io *obj)
+{
+	UNIMPLEMENTED();
+	return -EINVAL;
+}
+EXPORT_SYMBOL(dpaa2_io_resume_poll);
+
+void dpaa2_io_service_notifications(struct dpaa2_io *s, cpumask_t *mask)
+{
+	struct dpaa2_io_service *ss = &s->service;
+
+	BUG_ON(s->magic != MAGIC_SERVICE);
+	cpumask_copy(mask, &ss->cpus_notifications);
+}
+EXPORT_SYMBOL(dpaa2_io_service_notifications);
+
+void dpaa2_io_service_stashing(struct dpaa2_io *s, cpumask_t *mask)
+{
+	struct dpaa2_io_service *ss = &s->service;
+
+	BUG_ON(s->magic != MAGIC_SERVICE);
+	cpumask_copy(mask, &ss->cpus_stashing);
+}
+EXPORT_SYMBOL(dpaa2_io_service_stashing);
+
+int dpaa2_io_service_has_nonaffine(struct dpaa2_io *s)
+{
+	struct dpaa2_io_service *ss = &s->service;
+
+	BUG_ON(s->magic != MAGIC_SERVICE);
+	return ss->has_nonaffine;
+}
+EXPORT_SYMBOL(dpaa2_io_service_has_nonaffine);
+
+int dpaa2_io_service_register(struct dpaa2_io *d,
+			     struct dpaa2_io_notification_ctx *ctx)
+{
+	unsigned long irqflags;
+
+	d = service_select_by_cpu(d, ctx->desired_cpu);
+	if (!d)
+		return -ENODEV;
+	ctx->dpio_id = d->object.dpio_desc.dpio_id;
+	ctx->qman64 = (uint64_t)ctx;
+	ctx->dpio_private = d;
+	spin_lock_irqsave(&d->object.lock_notifications, irqflags);
+	list_add(&ctx->node, &d->object.notifications);
+	spin_unlock_irqrestore(&d->object.lock_notifications, irqflags);
+	if (ctx->is_cdan)
+		/* Enable the generation of CDAN notifications */
+		qbman_swp_CDAN_set_context_enable(d->object.swp,
+						  (uint16_t)ctx->id,
+						  ctx->qman64);
+	return 0;
+}
+EXPORT_SYMBOL(dpaa2_io_service_register);
+
+int dpaa2_io_service_deregister(struct dpaa2_io *service,
+			       struct dpaa2_io_notification_ctx *ctx)
+{
+	struct dpaa2_io *d = ctx->dpio_private;
+	unsigned long irqflags;
+
+	if (!service)
+		service = &def_serv;
+	BUG_ON((service != d) && (service != d->object.service));
+	if (ctx->is_cdan)
+		qbman_swp_CDAN_disable(d->object.swp,
+				       (uint16_t)ctx->id);
+	spin_lock_irqsave(&d->object.lock_notifications, irqflags);
+	list_del(&ctx->node);
+	spin_unlock_irqrestore(&d->object.lock_notifications, irqflags);
+	return 0;
+}
+EXPORT_SYMBOL(dpaa2_io_service_deregister);
+
+int dpaa2_io_service_rearm(struct dpaa2_io *d,
+			  struct dpaa2_io_notification_ctx *ctx)
+{
+	unsigned long irqflags;
+	int err;
+
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
+	if (ctx->is_cdan)
+		err = qbman_swp_CDAN_enable(d->object.swp, (uint16_t)ctx->id);
+	else
+		err = qbman_swp_fq_schedule(d->object.swp, ctx->id);
+	spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
+	return err;
+}
+EXPORT_SYMBOL(dpaa2_io_service_rearm);
+
+int dpaa2_io_from_registration(struct dpaa2_io_notification_ctx *ctx,
+			      struct dpaa2_io **io)
+{
+	struct dpaa2_io_notification_ctx *tmp;
+	struct dpaa2_io *d = ctx->dpio_private;
+	unsigned long irqflags;
+	int ret = 0;
+
+	BUG_ON(d->magic != MAGIC_OBJECT);
+	/* Iterate the notifications associated with 'd' looking for a match.
+	 * If there is no match, we've been passed an unregistered ctx.
+	 */
+	spin_lock_irqsave(&d->object.lock_notifications, irqflags);
+	list_for_each_entry(tmp, &d->object.notifications, node)
+		if (tmp == ctx)
+			goto found;
+	ret = -EINVAL;
+found:
+	spin_unlock_irqrestore(&d->object.lock_notifications, irqflags);
+	if (!ret) {
+		atomic_inc(&d->refs);
+		*io = d;
+	}
+	return ret;
+}
+EXPORT_SYMBOL(dpaa2_io_from_registration);
+
+int dpaa2_io_service_get_persistent(struct dpaa2_io *service, int cpu,
+				   struct dpaa2_io **ret)
+{
+	if (cpu == -1)
+		*ret = service_select_any(service);
+	else
+		*ret = service_select_by_cpu(service, cpu);
+	if (*ret) {
+		atomic_inc(&(*ret)->refs);
+		return 0;
+	}
+	return -ENODEV;
+}
+EXPORT_SYMBOL(dpaa2_io_service_get_persistent);
+
+int dpaa2_io_service_pull_fq(struct dpaa2_io *d, uint32_t fqid,
+			    struct dpaa2_io_store *s)
+{
+	struct qbman_pull_desc pd;
+	int err;
+
+	qbman_pull_desc_clear(&pd);
+	qbman_pull_desc_set_storage(&pd, s->vaddr, s->paddr, 1);
+	qbman_pull_desc_set_numframes(&pd, (uint8_t)s->max);
+	qbman_pull_desc_set_fq(&pd, fqid);
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	s->swp = d->object.swp;
+	err = qbman_swp_pull(d->object.swp, &pd);
+	if (err)
+		s->swp = NULL;
+	return err;
+}
+EXPORT_SYMBOL(dpaa2_io_service_pull_fq);
+
+int dpaa2_io_service_pull_channel(struct dpaa2_io *d, uint32_t channelid,
+				 struct dpaa2_io_store *s)
+{
+	struct qbman_pull_desc pd;
+	int err;
+
+	qbman_pull_desc_clear(&pd);
+	qbman_pull_desc_set_storage(&pd, s->vaddr, s->paddr, 1);
+	qbman_pull_desc_set_numframes(&pd, (uint8_t)s->max);
+	qbman_pull_desc_set_channel(&pd, channelid, qbman_pull_type_prio);
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	s->swp = d->object.swp;
+	err = qbman_swp_pull(d->object.swp, &pd);
+	if (err)
+		s->swp = NULL;
+	return err;
+}
+EXPORT_SYMBOL(dpaa2_io_service_pull_channel);
+
+int dpaa2_io_service_enqueue_fq(struct dpaa2_io *d,
+			       uint32_t fqid,
+			       const struct dpaa2_fd *fd)
+{
+	struct qbman_eq_desc ed;
+
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	qbman_eq_desc_clear(&ed);
+	qbman_eq_desc_set_no_orp(&ed, 0);
+	qbman_eq_desc_set_fq(&ed, fqid);
+	return qbman_swp_enqueue(d->object.swp, &ed,
+				 (const struct qbman_fd *)fd);
+}
+EXPORT_SYMBOL(dpaa2_io_service_enqueue_fq);
+
+int dpaa2_io_service_enqueue_qd(struct dpaa2_io *d,
+			       uint32_t qdid, uint8_t prio, uint16_t qdbin,
+			       const struct dpaa2_fd *fd)
+{
+	struct qbman_eq_desc ed;
+
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	qbman_eq_desc_clear(&ed);
+	qbman_eq_desc_set_no_orp(&ed, 0);
+	qbman_eq_desc_set_qd(&ed, qdid, qdbin, prio);
+	return qbman_swp_enqueue(d->object.swp, &ed,
+				 (const struct qbman_fd *)fd);
+}
+EXPORT_SYMBOL(dpaa2_io_service_enqueue_qd);
+
+int dpaa2_io_service_release(struct dpaa2_io *d,
+			    uint32_t bpid,
+			    const uint64_t *buffers,
+			    unsigned int num_buffers)
+{
+	struct qbman_release_desc rd;
+
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	qbman_release_desc_clear(&rd);
+	qbman_release_desc_set_bpid(&rd, bpid);
+	return qbman_swp_release(d->object.swp, &rd, buffers, num_buffers);
+}
+EXPORT_SYMBOL(dpaa2_io_service_release);
+
+int dpaa2_io_service_acquire(struct dpaa2_io *d,
+			    uint32_t bpid,
+			    uint64_t *buffers,
+			    unsigned int num_buffers)
+{
+	unsigned long irqflags;
+	int err;
+
+	d = _service_select(d);
+	if (!d)
+		return -ENODEV;
+	spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
+	err = qbman_swp_acquire(d->object.swp, bpid, buffers, num_buffers);
+	spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
+	return err;
+}
+EXPORT_SYMBOL(dpaa2_io_service_acquire);
+
+struct dpaa2_io_store *dpaa2_io_store_create(unsigned int max_frames,
+					   struct device *dev)
+{
+	struct dpaa2_io_store *ret = kmalloc(sizeof(*ret), GFP_KERNEL);
+	size_t size;
+
+	BUG_ON(!max_frames || (max_frames > 16));
+	if (!ret)
+		return NULL;
+	ret->max = max_frames;
+	size = max_frames * sizeof(struct dpaa2_dq) + 64;
+	ret->alloced_addr = kmalloc(size, GFP_KERNEL);
+	if (!ret->alloced_addr) {
+		kfree(ret);
+		return NULL;
+	}
+	ret->vaddr = PTR_ALIGN(ret->alloced_addr, 64);
+	ret->paddr = dma_map_single(dev, ret->vaddr,
+				    sizeof(struct dpaa2_dq) * max_frames,
+				    DMA_FROM_DEVICE);
+	if (dma_mapping_error(dev, ret->paddr)) {
+		kfree(ret->alloced_addr);
+		kfree(ret);
+		return NULL;
+	}
+	ret->idx = 0;
+	ret->dev = dev;
+	return ret;
+}
+EXPORT_SYMBOL(dpaa2_io_store_create);
+
+void dpaa2_io_store_destroy(struct dpaa2_io_store *s)
+{
+	dma_unmap_single(s->dev, s->paddr, sizeof(struct dpaa2_dq) * s->max,
+			 DMA_FROM_DEVICE);
+	kfree(s->alloced_addr);
+	kfree(s);
+}
+EXPORT_SYMBOL(dpaa2_io_store_destroy);
+
+struct dpaa2_dq *dpaa2_io_store_next(struct dpaa2_io_store *s, int *is_last)
+{
+	int match;
+	struct dpaa2_dq *ret = &s->vaddr[s->idx];
+
+	match = qbman_result_has_new_result(s->swp, ret);
+	if (!match) {
+		*is_last = 0;
+		return NULL;
+	}
+	BUG_ON(!qbman_result_is_DQ(ret));
+	s->idx++;
+	if (dpaa2_dq_is_pull_complete(ret)) {
+		*is_last = 1;
+		s->idx = 0;
+		/* If we get an empty dequeue result to terminate a zero-results
+		 * vdqcr, return NULL to the caller rather than expecting the
+		 * caller to check non-NULL results every time.
+		 */
+		if (!(dpaa2_dq_flags(ret) & DPAA2_DQ_STAT_VALIDFRAME))
+			ret = NULL;
+	} else {
+		*is_last = 0;
+	}
+	return ret;
+}
+EXPORT_SYMBOL(dpaa2_io_store_next);
+
+#ifdef CONFIG_FSL_QBMAN_DEBUG
+int dpaa2_io_query_fq_count(struct dpaa2_io *d, uint32_t fqid,
+			   uint32_t *fcnt, uint32_t *bcnt)
+{
+	struct qbman_attr state;
+	struct qbman_swp *swp;
+	unsigned long irqflags;
+	int ret;
+
+	d = service_select_any(d);
+	if (!d)
+		return -ENODEV;
+
+	swp = d->object.swp;
+	spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
+	ret = qbman_fq_query_state(swp, fqid, &state);
+	spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
+	if (ret)
+		return ret;
+	*fcnt = qbman_fq_state_frame_count(&state);
+	*bcnt = qbman_fq_state_byte_count(&state);
+
+	return 0;
+}
+EXPORT_SYMBOL(dpaa2_io_query_fq_count);
+
+int dpaa2_io_query_bp_count(struct dpaa2_io *d, uint32_t bpid,
+			   uint32_t *num)
+{
+	struct qbman_attr state;
+	struct qbman_swp *swp;
+	unsigned long irqflags;
+	int ret;
+
+	d = service_select_any(d);
+	if (!d)
+		return -ENODEV;
+
+	swp = d->object.swp;
+	spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
+	ret = qbman_bp_query(swp, bpid, &state);
+	spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
+	if (ret)
+		return ret;
+	*num = qbman_bp_info_num_free_bufs(&state);
+	return 0;
+}
+EXPORT_SYMBOL(dpaa2_io_query_bp_count);
+
+#endif
+
+/* module init/exit hooks called from dpio-drv.c. These are declared in
+ * dpio-drv.h.
+ */
+int dpaa2_io_service_driver_init(void)
+{
+	service_init(&def_serv, 1);
+	return 0;
+}
+
+void dpaa2_io_service_driver_exit(void)
+{
+	if (atomic_read(&def_serv.refs) != 1)
+		pr_err("default DPIO service leaves dangling DPIO objects!\n");
+}
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/fsl_dpio.h
@@ -0,0 +1,460 @@
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __FSL_DPIO_H
+#define __FSL_DPIO_H
+
+/* Data Path I/O Portal API
+ * Contains initialization APIs and runtime control APIs for DPIO
+ */
+
+struct fsl_mc_io;
+
+/**
+ * dpio_open() - Open a control session for the specified object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @dpio_id:	DPIO unique ID
+ * @token:	Returned token; use in subsequent API calls
+ *
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpio_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_open(struct fsl_mc_io	*mc_io,
+	      uint32_t		cmd_flags,
+	      int		dpio_id,
+	      uint16_t		*token);
+
+/**
+ * dpio_close() - Close the control session of the object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_close(struct fsl_mc_io	*mc_io,
+	       uint32_t		cmd_flags,
+	       uint16_t		token);
+
+/**
+ * enum dpio_channel_mode - DPIO notification channel mode
+ * @DPIO_NO_CHANNEL: No support for notification channel
+ * @DPIO_LOCAL_CHANNEL: Notifications on data availability can be received by a
+ *	dedicated channel in the DPIO; user should point the queue's
+ *	destination in the relevant interface to this DPIO
+ */
+enum dpio_channel_mode {
+	DPIO_NO_CHANNEL = 0,
+	DPIO_LOCAL_CHANNEL = 1,
+};
+
+/**
+ * struct dpio_cfg - Structure representing DPIO configuration
+ * @channel_mode: Notification channel mode
+ * @num_priorities: Number of priorities for the notification channel (1-8);
+ *			relevant only if 'channel_mode = DPIO_LOCAL_CHANNEL'
+ */
+struct dpio_cfg {
+	enum dpio_channel_mode	channel_mode;
+	uint8_t		num_priorities;
+};
+
+/**
+ * dpio_create() - Create the DPIO object.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @cfg:	Configuration structure
+ * @token:	Returned token; use in subsequent API calls
+ *
+ * Create the DPIO object, allocate required resources and
+ * perform required initialization.
+ *
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ *
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent calls to
+ * this specific object. For objects that are created using the
+ * DPL file, call dpio_open() function to get an authentication
+ * token first.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_create(struct fsl_mc_io	*mc_io,
+		uint32_t		cmd_flags,
+		const struct dpio_cfg	*cfg,
+		uint16_t		*token);
+
+/**
+ * dpio_destroy() - Destroy the DPIO object and release all its resources.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ *
+ * Return:	'0' on Success; Error code otherwise
+ */
+int dpio_destroy(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token);
+
+/**
+ * dpio_enable() - Enable the DPIO, allow I/O portal operations.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ *
+ * Return:	'0' on Success; Error code otherwise
+ */
+int dpio_enable(struct fsl_mc_io	*mc_io,
+		uint32_t		cmd_flags,
+		uint16_t		token);
+
+/**
+ * dpio_disable() - Disable the DPIO, stop any I/O portal operation.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ *
+ * Return:	'0' on Success; Error code otherwise
+ */
+int dpio_disable(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token);
+
+/**
+ * dpio_is_enabled() - Check if the DPIO is enabled.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @en:	Returns '1' if object is enabled; '0' otherwise
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_is_enabled(struct fsl_mc_io	*mc_io,
+		    uint32_t		cmd_flags,
+		    uint16_t		token,
+		    int		*en);
+
+/**
+ * dpio_reset() - Reset the DPIO, returns the object to initial state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_reset(struct fsl_mc_io	*mc_io,
+	       uint32_t			cmd_flags,
+	       uint16_t		token);
+
+/**
+ * dpio_set_stashing_destination() - Set the stashing destination.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @sdest:	stashing destination value
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination(struct fsl_mc_io	*mc_io,
+				  uint32_t		cmd_flags,
+				  uint16_t		token,
+				  uint8_t		sdest);
+
+/**
+ * dpio_get_stashing_destination() - Get the stashing destination.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @sdest:	Returns the stashing destination value
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_stashing_destination(struct fsl_mc_io	*mc_io,
+				  uint32_t		cmd_flags,
+				  uint16_t		token,
+				  uint8_t		*sdest);
+
+/**
+ * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @dpcon_id:	DPCON object ID
+ * @channel_index: Returned channel index to be used in qbman API
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_add_static_dequeue_channel(struct fsl_mc_io	*mc_io,
+				    uint32_t		cmd_flags,
+				    uint16_t		token,
+				    int		dpcon_id,
+				    uint8_t		*channel_index);
+
+/**
+ * dpio_remove_static_dequeue_channel() - Remove a static dequeue channel.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @dpcon_id:	DPCON object ID
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_remove_static_dequeue_channel(struct fsl_mc_io	*mc_io,
+				       uint32_t		cmd_flags,
+				       uint16_t		token,
+				       int			dpcon_id);
+
+/**
+ * DPIO IRQ Index and Events
+ */
+
+/**
+ * IRQ software-portal index
+ */
+#define DPIO_IRQ_SWP_INDEX				0
+
+/**
+ * struct dpio_irq_cfg - IRQ configuration
+ * @addr:	Address that must be written to signal a message-based interrupt
+ * @val:	Value to write into irq_addr address
+ * @irq_num: A user defined number associated with this IRQ
+ */
+struct dpio_irq_cfg {
+	     uint64_t		addr;
+	     uint32_t		val;
+	     int		irq_num;
+};
+
+/**
+ * dpio_set_irq() - Set IRQ information for the DPIO to trigger an interrupt.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	Identifies the interrupt index to configure
+ * @irq_cfg:	IRQ configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_irq(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token,
+		 uint8_t		irq_index,
+		 struct dpio_irq_cfg	*irq_cfg);
+
+/**
+ * dpio_get_irq() - Get IRQ information from the DPIO.
+ *
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @type:	Interrupt type: 0 represents message interrupt
+ *		type (both irq_addr and irq_val are valid)
+ * @irq_cfg:	IRQ attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_irq(struct fsl_mc_io	*mc_io,
+		 uint32_t		cmd_flags,
+		 uint16_t		token,
+		 uint8_t		irq_index,
+		 int			*type,
+		 struct dpio_irq_cfg	*irq_cfg);
+
+/**
+ * dpio_set_irq_enable() - Set overall interrupt state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @en:		Interrupt state - enable = 1, disable = 0
+ *
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable controls the
+ * overall interrupt state: if the interrupt is disabled, no causes will
+ * result in an interrupt.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_irq_enable(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint8_t			en);
+
+/**
+ * dpio_get_irq_enable() - Get overall interrupt state
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @en:		Returned interrupt state - enable = 1, disable = 0
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_irq_enable(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint8_t			*en);
+
+/**
+ * dpio_set_irq_mask() - Set interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @mask:	event mask to trigger interrupt;
+ *			each bit:
+ *				0 = ignore event
+ *				1 = consider event for asserting IRQ
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_irq_mask(struct fsl_mc_io	*mc_io,
+		      uint32_t		cmd_flags,
+		      uint16_t		token,
+		      uint8_t		irq_index,
+		      uint32_t		mask);
+
+/**
+ * dpio_get_irq_mask() - Get interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @mask:	Returned event mask to trigger interrupt
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_irq_mask(struct fsl_mc_io	*mc_io,
+		      uint32_t		cmd_flags,
+		      uint16_t		token,
+		      uint8_t		irq_index,
+		      uint32_t		*mask);
+
+/**
+ * dpio_get_irq_status() - Get the current status of any pending interrupts.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @status:	Returned interrupts status - one bit per cause:
+ *			0 = no interrupt pending
+ *			1 = interrupt pending
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_irq_status(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			uint8_t			irq_index,
+			uint32_t		*status);
+
+/**
+ * dpio_clear_irq_status() - Clear a pending interrupt's status
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @irq_index:	The interrupt index to configure
+ * @status:	bits to clear (W1C) - one bit per cause:
+ *			0 = don't change
+ *			1 = clear status bit
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_clear_irq_status(struct fsl_mc_io	*mc_io,
+			  uint32_t		cmd_flags,
+			  uint16_t		token,
+			  uint8_t		irq_index,
+			  uint32_t		status);
+
+/**
+ * struct dpio_attr - Structure representing DPIO attributes
+ * @id: DPIO object ID
+ * @version: DPIO version
+ * @qbman_portal_ce_offset: offset of the software portal cache-enabled area
+ * @qbman_portal_ci_offset: offset of the software portal cache-inhibited area
+ * @qbman_portal_id: Software portal ID
+ * @channel_mode: Notification channel mode
+ * @num_priorities: Number of priorities for the notification channel (1-8);
+ *			relevant only if 'channel_mode = DPIO_LOCAL_CHANNEL'
+ * @qbman_version: QBMAN version
+ */
+struct dpio_attr {
+	int			id;
+	/**
+	 * struct version - DPIO version
+	 * @major: DPIO major version
+	 * @minor: DPIO minor version
+	 */
+	struct {
+		uint16_t major;
+		uint16_t minor;
+	} version;
+	uint64_t		qbman_portal_ce_offset;
+	uint64_t		qbman_portal_ci_offset;
+	uint16_t		qbman_portal_id;
+	enum dpio_channel_mode	channel_mode;
+	uint8_t			num_priorities;
+	uint32_t		qbman_version;
+};
+
+/**
+ * dpio_get_attributes() - Retrieve DPIO attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @attr:	Returned object's attributes
+ *
+ * Return:	'0' on Success; Error code otherwise
+ */
+int dpio_get_attributes(struct fsl_mc_io	*mc_io,
+			uint32_t		cmd_flags,
+			uint16_t		token,
+			struct dpio_attr	*attr);
+#endif /* __FSL_DPIO_H */
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/fsl_dpio_cmd.h
@@ -0,0 +1,184 @@
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_DPIO_CMD_H
+#define _FSL_DPIO_CMD_H
+
+/* DPIO Version */
+#define DPIO_VER_MAJOR				3
+#define DPIO_VER_MINOR				2
+
+/* Command IDs */
+#define DPIO_CMDID_CLOSE				0x800
+#define DPIO_CMDID_OPEN					0x803
+#define DPIO_CMDID_CREATE				0x903
+#define DPIO_CMDID_DESTROY				0x900
+
+#define DPIO_CMDID_ENABLE				0x002
+#define DPIO_CMDID_DISABLE				0x003
+#define DPIO_CMDID_GET_ATTR				0x004
+#define DPIO_CMDID_RESET				0x005
+#define DPIO_CMDID_IS_ENABLED				0x006
+
+#define DPIO_CMDID_SET_IRQ				0x010
+#define DPIO_CMDID_GET_IRQ				0x011
+#define DPIO_CMDID_SET_IRQ_ENABLE			0x012
+#define DPIO_CMDID_GET_IRQ_ENABLE			0x013
+#define DPIO_CMDID_SET_IRQ_MASK				0x014
+#define DPIO_CMDID_GET_IRQ_MASK				0x015
+#define DPIO_CMDID_GET_IRQ_STATUS			0x016
+#define DPIO_CMDID_CLEAR_IRQ_STATUS			0x017
+
+#define DPIO_CMDID_SET_STASHING_DEST			0x120
+#define DPIO_CMDID_GET_STASHING_DEST			0x121
+#define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL		0x122
+#define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL	0x123
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_OPEN(cmd, dpio_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,     dpio_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 2,  enum dpio_channel_mode,	\
+					   cfg->channel_mode);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t, cfg->num_priorities);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  1,  int,	    en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  irq_index);\
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
+	MC_CMD_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr);\
+	MC_CMD_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, irq_cfg->addr); \
+	MC_RSP_OP(cmd, 2, 0,  32, int,	    irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int,	    type); \
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_SET_IRQ_ENABLE(cmd, irq_index, en) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t, en); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t, irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_GET_IRQ_ENABLE(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  en)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, mask)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0,  32, uint32_t, status)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0,  32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8,  uint8_t,  irq_index);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0,  32, int,	    attr->id);\
+	MC_RSP_OP(cmd, 0, 32, 16, uint16_t, attr->qbman_portal_id);\
+	MC_RSP_OP(cmd, 0, 48, 8,  uint8_t,  attr->num_priorities);\
+	MC_RSP_OP(cmd, 0, 56, 4,  enum dpio_channel_mode, attr->channel_mode);\
+	MC_RSP_OP(cmd, 1, 0,  64, uint64_t, attr->qbman_portal_ce_offset);\
+	MC_RSP_OP(cmd, 2, 0,  64, uint64_t, attr->qbman_portal_ci_offset);\
+	MC_RSP_OP(cmd, 3, 0,  16, uint16_t, attr->version.major);\
+	MC_RSP_OP(cmd, 3, 16, 16, uint16_t, attr->version.minor);\
+	MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->qbman_version);\
+} while (0)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_SET_STASHING_DEST(cmd, sdest) \
+	MC_CMD_OP(cmd, 0, 0,  8,  uint8_t,  sdest)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_GET_STASHING_DEST(cmd, sdest) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  sdest)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_ADD_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpcon_id)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_RSP_ADD_STATIC_DEQUEUE_CHANNEL(cmd, channel_index) \
+	MC_RSP_OP(cmd, 0, 0,  8,  uint8_t,  channel_index)
+
+/*                cmd, param, offset, width, type, arg_name */
+#define DPIO_CMD_REMOVE_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id) \
+	MC_CMD_OP(cmd, 0, 0,  32, int,      dpcon_id)
+#endif /* _FSL_DPIO_CMD_H */
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/fsl_qbman_base.h
@@ -0,0 +1,123 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_QBMAN_BASE_H
+#define _FSL_QBMAN_BASE_H
+
+/**
+ * struct qbman_block_desc - qbman block descriptor structure
+ *
+ * Descriptor for a QBMan instance on the SoC. On partitions/targets that do not
+ * control this QBMan instance, these values may simply be place-holders. The
+ * idea is simply that we be able to distinguish between them, eg. so that SWP
+ * descriptors can identify which QBMan instance they belong to.
+ */
+struct qbman_block_desc {
+	void *ccsr_reg_bar; /* CCSR register map */
+	int irq_rerr;  /* Recoverable error interrupt line */
+	int irq_nrerr; /* Non-recoverable error interrupt line */
+};
+
+/**
+ * struct qbman_swp_desc - qbman software portal descriptor structure
+ *
+ * Descriptor for a QBMan software portal, expressed in terms that make sense to
+ * the user context. Ie. on MC, this information is likely to be true-physical,
+ * and instantiated statically at compile-time. On GPP, this information is
+ * likely to be obtained via "discovery" over a partition's "layerscape bus"
+ * (ie. in response to a MC portal command), and would take into account any
+ * virtualisation of the GPP user's address space and/or interrupt numbering.
+ */
+struct qbman_swp_desc {
+	const struct qbman_block_desc *block; /* The QBMan instance */
+	void *cena_bar; /* Cache-enabled portal register map */
+	void *cinh_bar; /* Cache-inhibited portal register map */
+	uint32_t qman_version;
+};
+
+/* Driver object for managing a QBMan portal */
+struct qbman_swp;
+
+/**
+ * struct qbman_fd - basic structure for the qbman frame descriptor
+ *
+ * Place-holder for FDs, we represent it via the simplest form that we need for
+ * now. Different overlays may be needed to support different options, etc. (It
+ * is impractical to define One True Struct, because the resulting encoding
+ * routines (lots of read-modify-writes) would be worst-case performance whether
+ * or not circumstances required them.)
+ *
+ * Note, as with all data-structures exchanged between software and hardware (be
+ * they located in the portal register map or DMA'd to and from main-memory),
+ * the driver ensures that the caller of the driver API sees the data-structures
+ * in host-endianness. "struct qbman_fd" is no exception. The 32-bit words
+ * contained within this structure are represented in host-endianness, even if
+ * hardware always treats them as little-endian. As such, if any of these fields
+ * are interpreted in a binary (rather than numerical) fashion by hardware
+ * blocks (eg. accelerators), then the user should be careful. We illustrate
+ * with an example;
+ *
+ * Suppose the desired behaviour of an accelerator is controlled by the "frc"
+ * field of the FDs that are sent to it. Suppose also that the behaviour desired
+ * by the user corresponds to an "frc" value which is expressed as the literal
+ * sequence of bytes 0xfe, 0xed, 0xab, and 0xba. So "frc" should be the 32-bit
+ * value in which 0xfe is the first byte and 0xba is the last byte, and as
+ * hardware is little-endian, this amounts to a 32-bit "value" of 0xbaabedfe. If
+ * the software is little-endian also, this can simply be achieved by setting
+ * frc=0xbaabedfe. On the other hand, if software is big-endian, it should set
+ * frc=0xfeedabba! The best way of avoiding trouble with this sort of thing is
+ * to treat the 32-bit words as numerical values, in which the offset of a field
+ * from the beginning of the first byte (as required or generated by hardware)
+ * is numerically encoded by a left-shift (ie. by raising the field to a
+ * corresponding power of 2).  Ie. in the current example, software could set
+ * "frc" in the following way, and it would work correctly on both little-endian
+ * and big-endian operation;
+ *    fd.frc = (0xfe << 0) | (0xed << 8) | (0xab << 16) | (0xba << 24);
+ */
+struct qbman_fd {
+	union {
+		uint32_t words[8];
+		struct qbman_fd_simple {
+			uint32_t addr_lo;
+			uint32_t addr_hi;
+			uint32_t len;
+			/* offset in the MS 16 bits, BPID in the LS 16 bits */
+			uint32_t bpid_offset;
+			uint32_t frc; /* frame context */
+			/* "err", "va", "cbmt", "asal", [...] */
+			uint32_t ctrl;
+			/* flow context */
+			uint32_t flc_lo;
+			uint32_t flc_hi;
+		} simple;
+	};
+};
+
+#endif /* !_FSL_QBMAN_BASE_H */
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/fsl_qbman_portal.h
@@ -0,0 +1,753 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _FSL_QBMAN_PORTAL_H
+#define _FSL_QBMAN_PORTAL_H
+
+#include "fsl_qbman_base.h"
+
+/**
+ * qbman_swp_init() - Create a functional object representing the given
+ * QBMan portal descriptor.
+ * @d: the given qbman swp descriptor
+ *
+ * Return qbman_swp portal object for success, NULL if the object cannot
+ * be created.
+ */
+struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
+/**
+ * qbman_swp_finish() - Destroy the functional object representing the given
+ * QBMan portal descriptor.
+ * @p: the qbman_swp object to be destroyed.
+ */
+void qbman_swp_finish(struct qbman_swp *p);
+
+/**
+ * qbman_swp_get_desc() - Get the descriptor of the given portal object.
+ * @p: the given portal object.
+ *
+ * Return the descriptor for this portal.
+ */
+const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
+
+	/**************/
+	/* Interrupts */
+	/**************/
+
+/* See the QBMan driver API documentation for details on the interrupt
+ * mechanisms. */
+#define QBMAN_SWP_INTERRUPT_EQRI ((uint32_t)0x00000001)
+#define QBMAN_SWP_INTERRUPT_EQDI ((uint32_t)0x00000002)
+#define QBMAN_SWP_INTERRUPT_DQRI ((uint32_t)0x00000004)
+#define QBMAN_SWP_INTERRUPT_RCRI ((uint32_t)0x00000008)
+#define QBMAN_SWP_INTERRUPT_RCDI ((uint32_t)0x00000010)
+#define QBMAN_SWP_INTERRUPT_VDCI ((uint32_t)0x00000020)
+
+/**
+ * qbman_swp_interrupt_get_vanish()
+ * qbman_swp_interrupt_set_vanish() - Get/Set the data in software portal
+ * interrupt status disable register.
+ * @p: the given software portal object.
+ * @mask: The mask to set in SWP_ISDR register.
+ *
+ * Return the settings in SWP_ISDR register for Get function.
+ */
+uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p);
+void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask);
+
+/**
+ * qbman_swp_interrupt_read_status()
+ * qbman_swp_interrupt_clear_status() - Get/Set the data in software portal
+ * interrupt status register.
+ * @p: the given software portal object.
+ * @mask: The mask to set in SWP_ISR register.
+ *
+ * Return the settings in SWP_ISR register for Get function.
+ *
+ */
+uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
+void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
+
+/**
+ * qbman_swp_interrupt_get_trigger()
+ * qbman_swp_interrupt_set_trigger() - Get/Set the data in software portal
+ * interrupt enable register.
+ * @p: the given software portal object.
+ * @mask: The mask to set in SWP_IER register.
+ *
+ * Return the settings in SWP_IER register for Get function.
+ */
+uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
+void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask);
+
+/**
+ * qbman_swp_interrupt_get_inhibit()
+ * qbman_swp_interrupt_set_inhibit() - Get/Set the data in software portal
+ * interrupt inhibit register.
+ * @p: the given software portal object.
+ * @mask: The mask to set in SWP_IIR register.
+ *
+ * Return the settings in SWP_IIR register for Get function.
+ */
+int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p);
+void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit);
+
+	/************/
+	/* Dequeues */
+	/************/
+
+/* See the QBMan driver API documentation for details on the dequeue
+ * mechanisms. NB: the use of a 'dpaa2_' prefix for this type is because it is
+ * primarily used by the "DPIO" layer that sits above (and hides) the QBMan
+ * driver. The structure is defined in the DPIO interface, but to avoid circular
+ * dependencies we just pre/re-declare it here opaquely. */
+struct dpaa2_dq;
+
+/* ------------------- */
+/* Push-mode dequeuing */
+/* ------------------- */
+
+/**
+ * qbman_swp_push_get() - Get the push dequeue setup.
+ * @p: the software portal object.
+ * @channel_idx: the channel index to query.
+ * @enabled: returned boolean to show whether the push dequeue is enabled for
+ * the given channel.
+ */
+void qbman_swp_push_get(struct qbman_swp *, uint8_t channel_idx, int *enabled);
+/**
+ * qbman_swp_push_set() - Enable or disable push dequeue.
+ * @p: the software portal object.
+ * @channel_idx: the channel index.
+ * @enable: enable or disable push dequeue.
+ *
+ * The user of a portal can enable and disable push-mode dequeuing of up to 16
+ * channels independently. It does not specify this toggling by channel IDs, but
+ * rather by specifying the index (from 0 to 15) that has been mapped to the
+ * desired channel.
+ */
+void qbman_swp_push_set(struct qbman_swp *, uint8_t channel_idx, int enable);
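As a usage sketch only (assuming a portal `s` already obtained from qbman_swp_init(), and that channel index 0 has been mapped to the desired channel), enabling push-mode dequeue might look like:

```c
/* Sketch: enable push dequeue on channel index 0 if not already enabled. */
int was_enabled;

qbman_swp_push_get(s, 0, &was_enabled);
if (!was_enabled)
	qbman_swp_push_set(s, 0, 1);
```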
+
+/* ------------------- */
+/* Pull-mode dequeuing */
+/* ------------------- */
+
+/**
+ * struct qbman_pull_desc - the structure for pull dequeue descriptor
+ */
+struct qbman_pull_desc {
+	uint32_t dont_manipulate_directly[6];
+};
+
+enum qbman_pull_type_e {
+	/* dequeue with priority precedence, respect intra-class scheduling */
+	qbman_pull_type_prio = 1,
+	/* dequeue with active FQ precedence, respect ICS */
+	qbman_pull_type_active,
+	/* dequeue with active FQ precedence, no ICS */
+	qbman_pull_type_active_noics
+};
+
+/**
+ * qbman_pull_desc_clear() - Clear the contents of a descriptor to
+ * default/starting state.
+ * @d: the pull dequeue descriptor to be cleared.
+ */
+void qbman_pull_desc_clear(struct qbman_pull_desc *d);
+
+/**
+ * qbman_pull_desc_set_storage()- Set the pull dequeue storage
+ * @d: the pull dequeue descriptor to be set.
+ * @storage: the pointer of the memory to store the dequeue result.
+ * @storage_phys: the physical address of the storage memory.
+ * @stash: to indicate whether write allocate is enabled.
+ *
+ * If not called, or if called with 'storage' as NULL, the resulting pull
+ * dequeues
+ * will produce results to DQRR. If 'storage' is non-NULL, then results are
+ * produced to the given memory location (using the physical/DMA address which
+ * the caller provides in 'storage_phys'), and 'stash' controls whether or not
+ * those writes to main-memory express a cache-warming attribute.
+ */
+void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
+				 struct dpaa2_dq *storage,
+				 dma_addr_t storage_phys,
+				 int stash);
+/**
+ * qbman_pull_desc_set_numframes() - Set the number of frames to be dequeued.
+ * @d: the pull dequeue descriptor to be set.
+ * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
+ */
+void qbman_pull_desc_set_numframes(struct qbman_pull_desc *, uint8_t numframes);
+
+/**
+ * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
+ * @fqid: the frame queue index of the given FQ.
+ *
+ * qbman_pull_desc_set_wq() - Set wqid from which the dequeue command dequeues.
+ * @wqid: composed of channel id and wqid within the channel.
+ * @dct: the dequeue command type.
+ *
+ * qbman_pull_desc_set_channel() - Set channelid from which the dequeue command
+ * dequeues.
+ * @chid: the channel id to be dequeued.
+ * @dct: the dequeue command type.
+ *
+ * Exactly one of the following descriptor "actions" should be set. (Calling any
+ * one of these will replace the effect of any prior call to one of these.)
+ * - pull dequeue from the given frame queue (FQ)
+ * - pull dequeue from any FQ in the given work queue (WQ)
+ * - pull dequeue from any FQ in any WQ in the given channel
+ */
+void qbman_pull_desc_set_fq(struct qbman_pull_desc *, uint32_t fqid);
+void qbman_pull_desc_set_wq(struct qbman_pull_desc *, uint32_t wqid,
+			    enum qbman_pull_type_e dct);
+void qbman_pull_desc_set_channel(struct qbman_pull_desc *, uint32_t chid,
+				 enum qbman_pull_type_e dct);
+
+/**
+ * qbman_swp_pull() - Issue the pull dequeue command
+ * @s: the software portal object.
+ * @d: the software portal descriptor which has been configured with
+ * the set of qbman_pull_desc_set_*() calls.
+ *
+ * Return 0 for success, and -EBUSY if the software portal is not ready
+ * to do pull dequeue.
+ */
+int qbman_swp_pull(struct qbman_swp *, struct qbman_pull_desc *d);
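Tying the pull-descriptor calls together, a hedged sketch (assuming an initialised portal `s` and a valid frame queue id `fqid`) of issuing a pull dequeue to DQRR might be:

```c
/* Sketch: pull up to 4 frames from FQ 'fqid' into DQRR (no user storage). */
struct qbman_pull_desc pd;
int ret;

qbman_pull_desc_clear(&pd);
qbman_pull_desc_set_storage(&pd, NULL, 0, 0);	/* results go to DQRR */
qbman_pull_desc_set_numframes(&pd, 4);
qbman_pull_desc_set_fq(&pd, fqid);

do {
	ret = qbman_swp_pull(s, &pd);
} while (ret == -EBUSY);	/* portal still busy with a previous pull */
```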
+
+/* -------------------------------- */
+/* Polling DQRR for dequeue results */
+/* -------------------------------- */
+
+/**
+ * qbman_swp_dqrr_next() - Get a valid DQRR entry.
+ * @s: the software portal object.
+ *
+ * Return NULL if there are no unconsumed DQRR entries. Return a DQRR entry
+ * only once, so repeated calls can return a sequence of DQRR entries, without
+ * requiring they be consumed immediately or in any particular order.
+ */
+const struct dpaa2_dq *qbman_swp_dqrr_next(struct qbman_swp *s);
+
+/**
+ * qbman_swp_dqrr_consume() -  Consume DQRR entries previously returned from
+ * qbman_swp_dqrr_next().
+ * @s: the software portal object.
+ * @dq: the DQRR entry to be consumed.
+ */
+void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct dpaa2_dq *dq);
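A minimal polling loop over DQRR, sketched under the assumption that `s` is an initialised portal, could look like:

```c
/* Sketch: drain the currently-available DQRR entries on portal 's'. */
const struct dpaa2_dq *dq;

while ((dq = qbman_swp_dqrr_next(s)) != NULL) {
	if (qbman_result_is_DQ(dq)) {
		/* handle the dequeued frame here */
	}
	qbman_swp_dqrr_consume(s, dq);
}
```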
+
+/* ------------------------------------------------- */
+/* Polling user-provided storage for dequeue results */
+/* ------------------------------------------------- */
+/**
+ * qbman_result_has_new_result() - Check for and get a dequeue response from
+ * the dequeue storage memory set in the pull dequeue command.
+ * @s: the software portal object.
+ * @dq: the dequeue result read from the memory.
+ *
+ * Only used for user-provided storage of dequeue results, not DQRR. For
+ * efficiency purposes, the driver will perform any required endianness
+ * conversion to ensure that the user's dequeue result storage is in host-endian
+ * format (whether or not that is the same as the little-endian format that
+ * hardware DMA'd to the user's storage). As such, once the user has called
+ * qbman_result_has_new_result() and been returned a valid dequeue result,
+ * they should not call it again on the same memory location (except of course
+ * if another dequeue command has been executed to produce a new result to that
+ * location).
+ *
+ * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
+ * dequeue result.
+ */
+int qbman_result_has_new_result(struct qbman_swp *,
+				  const struct dpaa2_dq *);
+
+/* -------------------------------------------------------- */
+/* Parsing dequeue entries (DQRR and user-provided storage) */
+/* -------------------------------------------------------- */
+
+/**
+ * qbman_result_is_DQ() - Check whether the dequeue result is a dequeue response
+ * @dq: the dequeue result to be checked.
+ *
+ * DQRR entries may contain non-dequeue results, ie. notifications
+ */
+int qbman_result_is_DQ(const struct dpaa2_dq *);
+
+/**
+ * qbman_result_is_SCN() - Check whether the dequeue result is a notification
+ * @dq: the dequeue result to be checked.
+ *
+ * All the non-dequeue results (FQDAN/CDAN/CSCN/...) are "state change
+ * notifications" of one type or another. Some APIs apply to all of them, of the
+ * form qbman_result_SCN_***().
+ */
+static inline int qbman_result_is_SCN(const struct dpaa2_dq *dq)
+{
+	return !qbman_result_is_DQ(dq);
+}
+
+/**
+ * Recognise different notification types; only required if the user allows
+ * these to occur, and cares about them when they do.
+ */
+int qbman_result_is_FQDAN(const struct dpaa2_dq *);
+				/* FQ Data Availability */
+int qbman_result_is_CDAN(const struct dpaa2_dq *);
+				/* Channel Data Availability */
+int qbman_result_is_CSCN(const struct dpaa2_dq *);
+				/* Congestion State Change */
+int qbman_result_is_BPSCN(const struct dpaa2_dq *);
+				/* Buffer Pool State Change */
+int qbman_result_is_CGCU(const struct dpaa2_dq *);
+				/* Congestion Group Count Update */
+/* Frame queue state change notifications; (FQDAN in theory counts too as it
+ * leaves a FQ parked, but it is primarily a data availability notification) */
+int qbman_result_is_FQRN(const struct dpaa2_dq *); /* Retirement */
+int qbman_result_is_FQRNI(const struct dpaa2_dq *);
+				/* Retirement Immediate */
+int qbman_result_is_FQPN(const struct dpaa2_dq *); /* Park */
+
+/* NB: for parsing dequeue results (when "is_DQ" is TRUE), use the higher-layer
+ * dpaa2_dq_*() functions. */
+
+/* State-change notifications (FQDAN/CDAN/CSCN/...). */
+/**
+ * qbman_result_SCN_state() - Get the state field in State-change notification
+ */
+uint8_t qbman_result_SCN_state(const struct dpaa2_dq *);
+/**
+ * qbman_result_SCN_rid() - Get the resource id in State-change notification
+ */
+uint32_t qbman_result_SCN_rid(const struct dpaa2_dq *);
+/**
+ * qbman_result_SCN_ctx() - Get the context data in State-change notification
+ */
+uint64_t qbman_result_SCN_ctx(const struct dpaa2_dq *);
+/**
+ * qbman_result_SCN_state_in_mem() - Get the state field in State-change
+ * notification which is written to memory instead of DQRR.
+ */
+uint8_t qbman_result_SCN_state_in_mem(const struct dpaa2_dq *);
+/**
+ * qbman_result_SCN_rid_in_mem() - Get the resource id in State-change
+ * notification which is written to memory instead of DQRR.
+ */
+uint32_t qbman_result_SCN_rid_in_mem(const struct dpaa2_dq *);
+
+/* Type-specific "resource IDs". Mainly for illustration purposes, though it
+ * also gives the appropriate type widths. */
+#define qbman_result_FQDAN_fqid(dq) qbman_result_SCN_rid(dq)
+#define qbman_result_FQRN_fqid(dq) qbman_result_SCN_rid(dq)
+#define qbman_result_FQRNI_fqid(dq) qbman_result_SCN_rid(dq)
+#define qbman_result_FQPN_fqid(dq) qbman_result_SCN_rid(dq)
+#define qbman_result_CDAN_cid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
+#define qbman_result_CSCN_cgid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
+
+/**
+ * qbman_result_bpscn_bpid() - Get the bpid from BPSCN
+ *
+ * Return the buffer pool id.
+ */
+uint16_t qbman_result_bpscn_bpid(const struct dpaa2_dq *);
+/**
+ * qbman_result_bpscn_has_free_bufs() - Check whether there are free
+ * buffers in the pool from BPSCN.
+ *
+ * Return non-zero if there are free buffers in the pool, 0 otherwise.
+ */
+int qbman_result_bpscn_has_free_bufs(const struct dpaa2_dq *);
+/**
+ * qbman_result_bpscn_is_depleted() - Check BPSCN to see whether the
+ * buffer pool is depleted.
+ *
+ * Return the status of buffer pool depletion.
+ */
+int qbman_result_bpscn_is_depleted(const struct dpaa2_dq *);
+/**
+ * qbman_result_bpscn_is_surplus() - Check BPSCN to see whether the buffer
+ * pool is surplus or not.
+ *
+ * Return the status of buffer pool surplus.
+ */
+int qbman_result_bpscn_is_surplus(const struct dpaa2_dq *);
+/**
+ * qbman_result_bpscn_ctx() - Get the BPSCN CTX from BPSCN message
+ *
+ * Return the BPSCN context.
+ */
+uint64_t qbman_result_bpscn_ctx(const struct dpaa2_dq *);
+
+/* Parsing CGCU */
+/**
+ * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid
+ *
+ * Return the CGCU resource id.
+ */
+uint16_t qbman_result_cgcu_cgid(const struct dpaa2_dq *);
+/**
+ * qbman_result_cgcu_icnt() - Get the I_CNT from CGCU
+ *
+ * Return instantaneous count in the CGCU notification.
+ */
+uint64_t qbman_result_cgcu_icnt(const struct dpaa2_dq *);
+
+	/************/
+	/* Enqueues */
+	/************/
+/**
+ * struct qbman_eq_desc - structure of enqueue descriptor
+ */
+struct qbman_eq_desc {
+	uint32_t dont_manipulate_directly[8];
+};
+
+/**
+ * struct qbman_eq_response - structure of enqueue response
+ */
+struct qbman_eq_response {
+	uint32_t dont_manipulate_directly[16];
+};
+
+/**
+ * qbman_eq_desc_clear() - Clear the contents of a descriptor to
+ * default/starting state.
+ */
+void qbman_eq_desc_clear(struct qbman_eq_desc *);
+
+/* Exactly one of the following descriptor "actions" should be set. (Calling
+ * any one of these will replace the effect of any prior call to one of these.)
+ * - enqueue without order-restoration
+ * - enqueue with order-restoration
+ * - fill a hole in the order-restoration sequence, without any enqueue
+ * - advance NESN (Next Expected Sequence Number), without any enqueue
+ * 'respond_success' indicates whether an enqueue response should be DMA'd
+ * after success (otherwise a response is DMA'd only after failure).
+ * 'incomplete' indicates that other fragments of the same 'seqnum' are yet to
+ * be enqueued.
+ */
+/**
+ * qbman_eq_desc_set_no_orp() - Set enqueue descriptor without orp
+ * @d: the enqueue descriptor.
+ * @respond_success: 1 = enqueue with response always; 0 = enqueue with
+ * rejections returned on a FQ.
+ */
+void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
+
+/**
+ * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
+ * @d: the enqueue descriptor.
+ * @respond_success: 1 = enqueue with response always; 0 = enqueue with
+ * rejections returned on a FQ.
+ * @opr_id: the order point record id.
+ * @seqnum: the order restoration sequence number.
+ * @incomplete: indicates that other fragments with the same sequence number
+ * are yet to be enqueued.
+ */
+void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
+			   uint32_t opr_id, uint32_t seqnum, int incomplete);
+
+/**
+ * qbman_eq_desc_set_orp_hole() - fill a hole in the order-restoration sequence
+ * without any enqueue
+ * @d: the enqueue descriptor.
+ * @opr_id: the order point record id.
+ * @seqnum: the order restoration sequence number.
+ */
+void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum);
+
+/**
+ * qbman_eq_desc_set_orp_nesn() -  advance NESN (Next Expected Sequence Number)
+ * without any enqueue
+ * @d: the enqueue descriptor.
+ * @opr_id: the order point record id.
+ * @seqnum: the order restoration sequence number.
+ */
+void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum);
+
+/**
+ * qbman_eq_desc_set_response() - Set the enqueue response info.
+ * @d: the enqueue descriptor
+ * @storage_phys: the physical address of the enqueue response in memory.
+ * @stash: indicates whether write allocate is enabled.
+ *
+ * In the case where an enqueue response is DMA'd, this determines where that
+ * response should go. (The physical/DMA address is given for hardware's
+ * benefit, but software should interpret it as a "struct qbman_eq_response"
+ * data structure.) 'stash' controls whether or not the write to main-memory
+ * expresses a cache-warming attribute.
+ */
+void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
+				dma_addr_t storage_phys,
+				int stash);
+/**
+ * qbman_eq_desc_set_token() - Set token for the enqueue command
+ * @d: the enqueue descriptor
+ * @token: the token to be set.
+ *
+ * token is the value that shows up in an enqueue response that can be used to
+ * detect when the results have been published. The easiest technique is to zero
+ * result "storage" before issuing an enqueue, and use any non-zero 'token'
+ * value.
+ */
+void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
+
+/**
+ * qbman_eq_desc_set_fq()
+ * qbman_eq_desc_set_qd() - Set either FQ or Queuing Destination for the enqueue
+ * command.
+ * @d: the enqueue descriptor
+ * @fqid: the id of the frame queue to be enqueued.
+ * @qdid: the id of the queuing destination to be enqueued.
+ * @qd_bin: the queuing destination bin
+ * @qd_prio: the queuing destination priority.
+ *
+ * Exactly one of the following descriptor "targets" should be set. (Calling any
+ * one of these will replace the effect of any prior call to one of these.)
+ * - enqueue to a frame queue
+ * - enqueue to a queuing destination
+ * Note that none of these will have any effect if the "action" type has been
+ * set to "orp_hole" or "orp_nesn".
+ */
+void qbman_eq_desc_set_fq(struct qbman_eq_desc *, uint32_t fqid);
+void qbman_eq_desc_set_qd(struct qbman_eq_desc *, uint32_t qdid,
+			  uint32_t qd_bin, uint32_t qd_prio);
+
+/**
+ * qbman_eq_desc_set_eqdi() - enable/disable EQDI interrupt
+ * @d: the enqueue descriptor
+ * @enable: boolean to enable/disable EQDI
+ *
+ * Determines whether or not the portal's EQDI interrupt source should be
+ * asserted after the enqueue command is completed.
+ */
+void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *, int enable);
+
+/**
+ * qbman_eq_desc_set_dca() - Set DCA mode in the enqueue command.
+ * @d: the enqueue descriptor.
+ * @enable: enable/disable DCA mode.
+ * @dqrr_idx: DCAP_CI, the DCAP consumer index.
+ * @park: whether to park the FQ once the DQRR entry is consumed.
+ *
+ * Determines whether or not a portal DQRR entry should be consumed once the
+ * enqueue command is completed.  (And if so, and the DQRR entry corresponds
+ * to a held-active (order-preserving) FQ, whether the FQ should be parked
+ * instead of being rescheduled.)
+ */
+void qbman_eq_desc_set_dca(struct qbman_eq_desc *, int enable,
+				uint32_t dqrr_idx, int park);
+
+/**
+ * qbman_swp_enqueue() - Issue an enqueue command.
+ * @s: the software portal used for enqueue.
+ * @d: the enqueue descriptor.
+ * @fd: the frame descriptor to be enqueued.
+ *
+ * Please note that 'fd' should only be NULL if the "action" of the
+ * descriptor is "orp_hole" or "orp_nesn".
+ *
+ * Return 0 for successful enqueue, -EBUSY if the EQCR is not ready.
+ */
+int qbman_swp_enqueue(struct qbman_swp *, const struct qbman_eq_desc *,
+		      const struct qbman_fd *fd);
+
+/**
+ * qbman_swp_enqueue_thresh() - Set the threshold for EQRI interrupt.
+ * @s: the software portal object.
+ * @thresh: the EQCR fill-level threshold.
+ *
+ * An EQRI interrupt can be generated when the fill-level of EQCR falls below
+ * the 'thresh' value set here. Setting thresh==0 (the default) disables.
+ */
+int qbman_swp_enqueue_thresh(struct qbman_swp *, unsigned int thresh);
+
+	/*******************/
+	/* Buffer releases */
+	/*******************/
+/**
+ * struct qbman_release_desc - The structure for buffer release descriptor
+ */
+struct qbman_release_desc {
+	uint32_t dont_manipulate_directly[1];
+};
+
+/**
+ * qbman_release_desc_clear() - Clear the contents of a descriptor to
+ * default/starting state.
+ */
+void qbman_release_desc_clear(struct qbman_release_desc *);
+
+/**
+ * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
+ */
+void qbman_release_desc_set_bpid(struct qbman_release_desc *, uint32_t bpid);
+
+/**
+ * qbman_release_desc_set_rcdi() - Determines whether or not the portal's RCDI
+ * interrupt source should be asserted after the release command is completed.
+ */
+void qbman_release_desc_set_rcdi(struct qbman_release_desc *, int enable);
+
+/**
+ * qbman_swp_release() - Issue a buffer release command.
+ * @s: the software portal object.
+ * @d: the release descriptor.
+ * @buffers: a pointer to the buffer addresses to be released.
+ * @num_buffers: number of buffers to be released,  must be less than 8.
+ *
+ * Return 0 for success, -EBUSY if the release command ring is not ready.
+ */
+int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
+		      const uint64_t *buffers, unsigned int num_buffers);
+
+	/*******************/
+	/* Buffer acquires */
+	/*******************/
+
+/**
+ * qbman_swp_acquire() - Issue a buffer acquire command.
+ * @s: the software portal object.
+ * @bpid: the buffer pool index.
+ * @buffers: a pointer to which the acquired buffer addresses are written.
+ * @num_buffers: number of buffers to be acquired, must be less than 8.
+ *
+ * Return 0 for success, or negative error code if the acquire command
+ * fails.
+ */
+int qbman_swp_acquire(struct qbman_swp *, uint32_t bpid, uint64_t *buffers,
+		      unsigned int num_buffers);
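A hedged sketch of the release/acquire pair (assuming an initialised portal `s`, a buffer pool id `bpid`, and two buffer addresses obtained elsewhere) might look like:

```c
/* Sketch: release two buffers to pool 'bpid', then acquire from it. */
struct qbman_release_desc rd;
uint64_t bufs[2] = { buf_addr0, buf_addr1 };	/* hypothetical addresses */
int ret;

qbman_release_desc_clear(&rd);
qbman_release_desc_set_bpid(&rd, bpid);
do {
	ret = qbman_swp_release(s, &rd, bufs, 2);
} while (ret == -EBUSY);	/* release command ring not ready */

ret = qbman_swp_acquire(s, bpid, bufs, 2);
/* on success, 'bufs' now holds the acquired buffer addresses */
```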
+
+	/*****************/
+	/* FQ management */
+	/*****************/
+
+/**
+ * qbman_swp_fq_schedule() - Move the fq to the scheduled state.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue to be scheduled.
+ *
+ * There are a couple of different ways that a FQ can end up in the parked
+ * state; this moves it to the scheduled state.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid);
+
+/**
+ * qbman_swp_fq_force() - Force the FQ to fully scheduled state.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue to be forced.
+ *
+ * Force eligible will force a tentatively-scheduled FQ to be fully-scheduled
+ * and thus be available for selection by any channel-dequeuing behaviour (push
+ * or pull). If the FQ is subsequently "dequeued" from the channel and is still
+ * empty at the time this happens, the resulting dq_entry will have no FD.
+ * (qbman_result_DQ_fd() will return NULL.)
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid);
+
+/**
+ * qbman_swp_fq_xon()
+ * qbman_swp_fq_xoff() - XON/XOFF the frame queue.
+ * @s: the software portal object.
+ * @fqid: the index of frame queue.
+ *
+ * These functions change the FQ flow-control stuff between XON/XOFF. (The
+ * default is XON.) This setting doesn't affect enqueues to the FQ, just
+ * dequeues. XOFF FQs will remain in the tentatively-scheduled state, even when
+ * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is
+ * changed to XOFF after it had already become truly-scheduled to a channel, and
+ * a pull dequeue of that channel occurs that selects that FQ for dequeuing,
+ * then the resulting dq_entry will have no FD. (qbman_result_DQ_fd() will
+ * return NULL.)
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid);
+int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid);
+
+	/**********************/
+	/* Channel management */
+	/**********************/
+
+/* If the user has been allocated a channel object that is going to generate
+ * CDANs to another channel, then these functions will be necessary.
+ * CDAN-enabled channels only generate a single CDAN notification, after which
+ * they need to be reenabled before they'll generate another. (The idea is
+ * that pull dequeuing will occur in reaction to the CDAN, followed by a
+ * reenable step.) Each function generates a distinct command to hardware, so a
+ * combination function is provided if the user wishes to modify the "context"
+ * (which shows up in each CDAN message) each time they reenable, as a single
+ * command to hardware. */
+/**
+ * qbman_swp_CDAN_set_context() - Set CDAN context
+ * @s: the software portal object.
+ * @channelid: the channel index.
+ * @ctx: the context to be set in CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
+				uint64_t ctx);
+
+/**
+ * qbman_swp_CDAN_enable() - Enable CDAN for the channel.
+ * @s: the software portal object.
+ * @channelid: the index of the channel to generate CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid);
+
+/**
+ * qbman_swp_CDAN_disable() - disable CDAN for the channel.
+ * @s: the software portal object.
+ * @channelid: the index of the channel to generate CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid);
+
+/**
+ * qbman_swp_CDAN_set_context_enable() - Set CDAN context and enable CDAN
+ * @s: the software portal object.
+ * @channelid: the index of the channel to generate CDAN.
+ * @ctx: the context set in CDAN.
+ *
+ * Return 0 for success, or negative error code for failure.
+ */
+int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
+				      uint64_t ctx);
+
+#endif /* !_FSL_QBMAN_PORTAL_H */
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_debug.c
@@ -0,0 +1,846 @@
+/* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qbman_portal.h"
+#include "qbman_debug.h"
+#include "fsl_qbman_portal.h"
+
+/* QBMan portal management command code */
+#define QBMAN_BP_QUERY            0x32
+#define QBMAN_FQ_QUERY            0x44
+#define QBMAN_FQ_QUERY_NP         0x45
+#define QBMAN_CGR_QUERY           0x51
+#define QBMAN_WRED_QUERY          0x54
+#define QBMAN_CGR_STAT_QUERY      0x55
+#define QBMAN_CGR_STAT_QUERY_CLR  0x56
+
+enum qbman_attr_usage_e {
+	qbman_attr_usage_fq,
+	qbman_attr_usage_bpool,
+	qbman_attr_usage_cgr,
+};
+
+struct int_qbman_attr {
+	uint32_t words[32];
+	enum qbman_attr_usage_e usage;
+};
+
+#define attr_type_set(a, e) \
+{ \
+	struct qbman_attr *__attr = a; \
+	enum qbman_attr_usage_e __usage = e; \
+	((struct int_qbman_attr *)__attr)->usage = __usage; \
+}
+
+#define ATTR32(d) (&(d)->dont_manipulate_directly[0])
+#define ATTR32_1(d) (&(d)->dont_manipulate_directly[16])
+
+static struct qb_attr_code code_bp_bpid = QB_CODE(0, 16, 16);
+static struct qb_attr_code code_bp_bdi = QB_CODE(1, 16, 1);
+static struct qb_attr_code code_bp_va = QB_CODE(1, 17, 1);
+static struct qb_attr_code code_bp_wae = QB_CODE(1, 18, 1);
+static struct qb_attr_code code_bp_swdet = QB_CODE(4, 0, 16);
+static struct qb_attr_code code_bp_swdxt = QB_CODE(4, 16, 16);
+static struct qb_attr_code code_bp_hwdet = QB_CODE(5, 0, 16);
+static struct qb_attr_code code_bp_hwdxt = QB_CODE(5, 16, 16);
+static struct qb_attr_code code_bp_swset = QB_CODE(6, 0, 16);
+static struct qb_attr_code code_bp_swsxt = QB_CODE(6, 16, 16);
+static struct qb_attr_code code_bp_vbpid = QB_CODE(7, 0, 14);
+static struct qb_attr_code code_bp_icid = QB_CODE(7, 16, 15);
+static struct qb_attr_code code_bp_pl = QB_CODE(7, 31, 1);
+static struct qb_attr_code code_bp_bpscn_addr_lo = QB_CODE(8, 0, 32);
+static struct qb_attr_code code_bp_bpscn_addr_hi = QB_CODE(9, 0, 32);
+static struct qb_attr_code code_bp_bpscn_ctx_lo = QB_CODE(10, 0, 32);
+static struct qb_attr_code code_bp_bpscn_ctx_hi = QB_CODE(11, 0, 32);
+static struct qb_attr_code code_bp_hw_targ = QB_CODE(12, 0, 16);
+static struct qb_attr_code code_bp_state = QB_CODE(1, 24, 3);
+static struct qb_attr_code code_bp_fill = QB_CODE(2, 0, 32);
+static struct qb_attr_code code_bp_hdptr = QB_CODE(3, 0, 32);
+static struct qb_attr_code code_bp_sdcnt = QB_CODE(13, 0, 8);
+static struct qb_attr_code code_bp_hdcnt = QB_CODE(13, 1, 8);
+static struct qb_attr_code code_bp_sscnt = QB_CODE(13, 2, 8);
+
+void qbman_bp_attr_clear(struct qbman_attr *a)
+{
+	memset(a, 0, sizeof(*a));
+	attr_type_set(a, qbman_attr_usage_bpool);
+}
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+		   struct qbman_attr *a)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+	uint32_t *attr = ATTR32(a);
+
+	qbman_bp_attr_clear(a);
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	qb_attr_code_encode(&code_bp_bpid, p, bpid);
+
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_BP_QUERY);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	BUG_ON(verb != QBMAN_BP_QUERY);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("Query of BPID 0x%x failed, code=0x%02x\n", bpid, rslt);
+		return -EIO;
+	}
+
+	/* For the query, word[0] of the result contains only the
+	 * verb/rslt fields, so skip word[0].
+	 */
+	word_copy(&attr[1], &p[1], 15);
+	return 0;
+}
+
+void qbman_bp_attr_get_bdi(struct qbman_attr *a, int *bdi, int *va, int *wae)
+{
+	uint32_t *p = ATTR32(a);
+
+	*bdi = !!qb_attr_code_decode(&code_bp_bdi, p);
+	*va = !!qb_attr_code_decode(&code_bp_va, p);
+	*wae = !!qb_attr_code_decode(&code_bp_wae, p);
+}
+
+static uint32_t qbman_bp_thresh_to_value(uint32_t val)
+{
+	return (val & 0xff) << ((val & 0xf00) >> 8);
+}
+
+void qbman_bp_attr_get_swdet(struct qbman_attr *a, uint32_t *swdet)
+{
+	uint32_t *p = ATTR32(a);
+
+	*swdet = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swdet,
+					  p));
+}
+void qbman_bp_attr_get_swdxt(struct qbman_attr *a, uint32_t *swdxt)
+{
+	uint32_t *p = ATTR32(a);
+
+	*swdxt = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swdxt,
+					  p));
+}
+void qbman_bp_attr_get_hwdet(struct qbman_attr *a, uint32_t *hwdet)
+{
+	uint32_t *p = ATTR32(a);
+
+	*hwdet = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_hwdet,
+					  p));
+}
+void qbman_bp_attr_get_hwdxt(struct qbman_attr *a, uint32_t *hwdxt)
+{
+	uint32_t *p = ATTR32(a);
+
+	*hwdxt = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_hwdxt,
+					  p));
+}
+
+void qbman_bp_attr_get_swset(struct qbman_attr *a, uint32_t *swset)
+{
+	uint32_t *p = ATTR32(a);
+
+	*swset = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swset,
+					  p));
+}
+
+void qbman_bp_attr_get_swsxt(struct qbman_attr *a, uint32_t *swsxt)
+{
+	uint32_t *p = ATTR32(a);
+
+	*swsxt = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swsxt,
+					  p));
+}
+
+void qbman_bp_attr_get_vbpid(struct qbman_attr *a, uint32_t *vbpid)
+{
+	uint32_t *p = ATTR32(a);
+
+	*vbpid = qb_attr_code_decode(&code_bp_vbpid, p);
+}
+
+void qbman_bp_attr_get_icid(struct qbman_attr *a, uint32_t *icid, int *pl)
+{
+	uint32_t *p = ATTR32(a);
+
+	*icid = qb_attr_code_decode(&code_bp_icid, p);
+	*pl = !!qb_attr_code_decode(&code_bp_pl, p);
+}
+
+void qbman_bp_attr_get_bpscn_addr(struct qbman_attr *a, uint64_t *bpscn_addr)
+{
+	uint32_t *p = ATTR32(a);
+
+	*bpscn_addr = ((uint64_t)qb_attr_code_decode(&code_bp_bpscn_addr_hi,
+			p) << 32) |
+			(uint64_t)qb_attr_code_decode(&code_bp_bpscn_addr_lo,
+			p);
+}
+
+void qbman_bp_attr_get_bpscn_ctx(struct qbman_attr *a, uint64_t *bpscn_ctx)
+{
+	uint32_t *p = ATTR32(a);
+
+	*bpscn_ctx = ((uint64_t)qb_attr_code_decode(&code_bp_bpscn_ctx_hi, p)
+			<< 32) |
+			(uint64_t)qb_attr_code_decode(&code_bp_bpscn_ctx_lo,
+			p);
+}
+
+void qbman_bp_attr_get_hw_targ(struct qbman_attr *a, uint32_t *hw_targ)
+{
+	uint32_t *p = ATTR32(a);
+
+	*hw_targ = qb_attr_code_decode(&code_bp_hw_targ, p);
+}
+
+int qbman_bp_info_has_free_bufs(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return !(int)(qb_attr_code_decode(&code_bp_state, p) & 0x1);
+}
+
+int qbman_bp_info_is_depleted(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return (int)(qb_attr_code_decode(&code_bp_state, p) & 0x2);
+}
+
+int qbman_bp_info_is_surplus(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return (int)(qb_attr_code_decode(&code_bp_state, p) & 0x4);
+}
+
+uint32_t qbman_bp_info_num_free_bufs(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return qb_attr_code_decode(&code_bp_fill, p);
+}
+
+uint32_t qbman_bp_info_hdptr(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return qb_attr_code_decode(&code_bp_hdptr, p);
+}
+
+uint32_t qbman_bp_info_sdcnt(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return qb_attr_code_decode(&code_bp_sdcnt, p);
+}
+
+uint32_t qbman_bp_info_hdcnt(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return qb_attr_code_decode(&code_bp_hdcnt, p);
+}
+
+uint32_t qbman_bp_info_sscnt(struct qbman_attr *a)
+{
+	uint32_t *p = ATTR32(a);
+
+	return qb_attr_code_decode(&code_bp_sscnt, p);
+}
+
+static struct qb_attr_code code_fq_fqid = QB_CODE(1, 0, 24);
+static struct qb_attr_code code_fq_cgrid = QB_CODE(2, 16, 16);
+static struct qb_attr_code code_fq_destwq = QB_CODE(3, 0, 15);
+static struct qb_attr_code code_fq_fqctrl = QB_CODE(3, 24, 8);
+static struct qb_attr_code code_fq_icscred = QB_CODE(4, 0, 15);
+static struct qb_attr_code code_fq_tdthresh = QB_CODE(4, 16, 13);
+static struct qb_attr_code code_fq_oa_len = QB_CODE(5, 0, 12);
+static struct qb_attr_code code_fq_oa_ics = QB_CODE(5, 14, 1);
+static struct qb_attr_code code_fq_oa_cgr = QB_CODE(5, 15, 1);
+static struct qb_attr_code code_fq_mctl_bdi = QB_CODE(5, 24, 1);
+static struct qb_attr_code code_fq_mctl_ff = QB_CODE(5, 25, 1);
+static struct qb_attr_code code_fq_mctl_va = QB_CODE(5, 26, 1);
+static struct qb_attr_code code_fq_mctl_ps = QB_CODE(5, 27, 1);
+static struct qb_attr_code code_fq_ctx_lower32 = QB_CODE(6, 0, 32);
+static struct qb_attr_code code_fq_ctx_upper32 = QB_CODE(7, 0, 32);
+static struct qb_attr_code code_fq_icid = QB_CODE(8, 0, 15);
+static struct qb_attr_code code_fq_pl = QB_CODE(8, 15, 1);
+static struct qb_attr_code code_fq_vfqid = QB_CODE(9, 0, 24);
+static struct qb_attr_code code_fq_erfqid = QB_CODE(10, 0, 24);
+
+void qbman_fq_attr_clear(struct qbman_attr *a)
+{
+	memset(a, 0, sizeof(*a));
+	attr_type_set(a, qbman_attr_usage_fq);
+}
+
+/* FQ query function for programmable fields */
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid, struct qbman_attr *desc)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+	uint32_t *d = ATTR32(desc);
+
+	qbman_fq_attr_clear(desc);
+
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+	qb_attr_code_encode(&code_fq_fqid, p, fqid);
+	p = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	BUG_ON(verb != QBMAN_FQ_QUERY);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("Query of FQID 0x%x failed, code=0x%02x\n",
+		       fqid, rslt);
+		return -EIO;
+	}
+	/* For the configure, word[0] of the command contains only the WE-mask.
+	 * For the query, word[0] of the result contains only the verb/rslt
+	 * fields. Skip word[0] in the latter case. */
+	word_copy(&d[1], &p[1], 15);
+	return 0;
+}
+
+void qbman_fq_attr_get_fqctrl(struct qbman_attr *d, uint32_t *fqctrl)
+{
+	uint32_t *p = ATTR32(d);
+
+	*fqctrl = qb_attr_code_decode(&code_fq_fqctrl, p);
+}
+
+void qbman_fq_attr_get_cgrid(struct qbman_attr *d, uint32_t *cgrid)
+{
+	uint32_t *p = ATTR32(d);
+
+	*cgrid = qb_attr_code_decode(&code_fq_cgrid, p);
+}
+
+void qbman_fq_attr_get_destwq(struct qbman_attr *d, uint32_t *destwq)
+{
+	uint32_t *p = ATTR32(d);
+
+	*destwq = qb_attr_code_decode(&code_fq_destwq, p);
+}
+
+void qbman_fq_attr_get_icscred(struct qbman_attr *d, uint32_t *icscred)
+{
+	uint32_t *p = ATTR32(d);
+
+	*icscred = qb_attr_code_decode(&code_fq_icscred, p);
+}
+
+static struct qb_attr_code code_tdthresh_exp = QB_CODE(0, 0, 5);
+static struct qb_attr_code code_tdthresh_mant = QB_CODE(0, 5, 8);
+static uint32_t qbman_thresh_to_value(uint32_t val)
+{
+	uint32_t m, e;
+
+	m = qb_attr_code_decode(&code_tdthresh_mant, &val);
+	e = qb_attr_code_decode(&code_tdthresh_exp, &val);
+	return m << e;
+}
+
+void qbman_fq_attr_get_tdthresh(struct qbman_attr *d, uint32_t *tdthresh)
+{
+	uint32_t *p = ATTR32(d);
+
+	*tdthresh = qbman_thresh_to_value(qb_attr_code_decode(&code_fq_tdthresh,
+					p));
+}
+
+void qbman_fq_attr_get_oa(struct qbman_attr *d,
+			  int *oa_ics, int *oa_cgr, int32_t *oa_len)
+{
+	uint32_t *p = ATTR32(d);
+
+	*oa_ics = !!qb_attr_code_decode(&code_fq_oa_ics, p);
+	*oa_cgr = !!qb_attr_code_decode(&code_fq_oa_cgr, p);
+	*oa_len = qb_attr_code_makesigned(&code_fq_oa_len,
+			qb_attr_code_decode(&code_fq_oa_len, p));
+}
+
+void qbman_fq_attr_get_mctl(struct qbman_attr *d,
+			    int *bdi, int *ff, int *va, int *ps)
+{
+	uint32_t *p = ATTR32(d);
+
+	*bdi = !!qb_attr_code_decode(&code_fq_mctl_bdi, p);
+	*ff = !!qb_attr_code_decode(&code_fq_mctl_ff, p);
+	*va = !!qb_attr_code_decode(&code_fq_mctl_va, p);
+	*ps = !!qb_attr_code_decode(&code_fq_mctl_ps, p);
+}
+
+void qbman_fq_attr_get_ctx(struct qbman_attr *d, uint32_t *hi, uint32_t *lo)
+{
+	uint32_t *p = ATTR32(d);
+
+	*hi = qb_attr_code_decode(&code_fq_ctx_upper32, p);
+	*lo = qb_attr_code_decode(&code_fq_ctx_lower32, p);
+}
+
+void qbman_fq_attr_get_icid(struct qbman_attr *d, uint32_t *icid, int *pl)
+{
+	uint32_t *p = ATTR32(d);
+
+	*icid = qb_attr_code_decode(&code_fq_icid, p);
+	*pl = !!qb_attr_code_decode(&code_fq_pl, p);
+}
+
+void qbman_fq_attr_get_vfqid(struct qbman_attr *d, uint32_t *vfqid)
+{
+	uint32_t *p = ATTR32(d);
+
+	*vfqid = qb_attr_code_decode(&code_fq_vfqid, p);
+}
+
+void qbman_fq_attr_get_erfqid(struct qbman_attr *d, uint32_t *erfqid)
+{
+	uint32_t *p = ATTR32(d);
+
+	*erfqid = qb_attr_code_decode(&code_fq_erfqid, p);
+}
+
+/* Query FQ Non-Programmable Fields */
+static struct qb_attr_code code_fq_np_state = QB_CODE(0, 16, 3);
+static struct qb_attr_code code_fq_np_fe = QB_CODE(0, 19, 1);
+static struct qb_attr_code code_fq_np_x = QB_CODE(0, 20, 1);
+static struct qb_attr_code code_fq_np_r = QB_CODE(0, 21, 1);
+static struct qb_attr_code code_fq_np_oe = QB_CODE(0, 22, 1);
+static struct qb_attr_code code_fq_np_frm_cnt = QB_CODE(6, 0, 24);
+static struct qb_attr_code code_fq_np_byte_cnt = QB_CODE(7, 0, 32);
+
+int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
+			 struct qbman_attr *state)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+	uint32_t *d = ATTR32(state);
+
+	qbman_fq_attr_clear(state);
+
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+	qb_attr_code_encode(&code_fq_fqid, p, fqid);
+	p = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	BUG_ON(verb != QBMAN_FQ_QUERY_NP);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("Query NP fields of FQID 0x%x failed, code=0x%02x\n",
+		       fqid, rslt);
+		return -EIO;
+	}
+	word_copy(&d[0], &p[0], 16);
+	return 0;
+}
+
+uint32_t qbman_fq_state_schedstate(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return qb_attr_code_decode(&code_fq_np_state, p);
+}
+
+int qbman_fq_state_force_eligible(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return !!qb_attr_code_decode(&code_fq_np_fe, p);
+}
+
+int qbman_fq_state_xoff(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return !!qb_attr_code_decode(&code_fq_np_x, p);
+}
+
+int qbman_fq_state_retirement_pending(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return !!qb_attr_code_decode(&code_fq_np_r, p);
+}
+
+int qbman_fq_state_overflow_error(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return !!qb_attr_code_decode(&code_fq_np_oe, p);
+}
+
+uint32_t qbman_fq_state_frame_count(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return qb_attr_code_decode(&code_fq_np_frm_cnt, p);
+}
+
+uint32_t qbman_fq_state_byte_count(const struct qbman_attr *state)
+{
+	const uint32_t *p = ATTR32(state);
+
+	return qb_attr_code_decode(&code_fq_np_byte_cnt, p);
+}
+
+/* Query CGR */
+static struct qb_attr_code code_cgr_cgid = QB_CODE(0, 16, 16);
+static struct qb_attr_code code_cgr_cscn_wq_en_enter = QB_CODE(2, 0, 1);
+static struct qb_attr_code code_cgr_cscn_wq_en_exit = QB_CODE(2, 1, 1);
+static struct qb_attr_code code_cgr_cscn_wq_icd = QB_CODE(2, 2, 1);
+static struct qb_attr_code code_cgr_mode = QB_CODE(3, 16, 2);
+static struct qb_attr_code code_cgr_rej_cnt_mode = QB_CODE(3, 18, 1);
+static struct qb_attr_code code_cgr_cscn_bdi = QB_CODE(3, 19, 1);
+static struct qb_attr_code code_cgr_cscn_wr_en_enter = QB_CODE(3, 24, 1);
+static struct qb_attr_code code_cgr_cscn_wr_en_exit = QB_CODE(3, 25, 1);
+static struct qb_attr_code code_cgr_cg_wr_ae = QB_CODE(3, 26, 1);
+static struct qb_attr_code code_cgr_cscn_dcp_en = QB_CODE(3, 27, 1);
+static struct qb_attr_code code_cgr_cg_wr_va = QB_CODE(3, 28, 1);
+static struct qb_attr_code code_cgr_i_cnt_wr_en = QB_CODE(4, 0, 1);
+static struct qb_attr_code code_cgr_i_cnt_wr_bnd = QB_CODE(4, 1, 5);
+static struct qb_attr_code code_cgr_td_en = QB_CODE(4, 8, 1);
+static struct qb_attr_code code_cgr_cs_thres = QB_CODE(4, 16, 13);
+static struct qb_attr_code code_cgr_cs_thres_x = QB_CODE(5, 0, 13);
+static struct qb_attr_code code_cgr_td_thres = QB_CODE(5, 16, 13);
+static struct qb_attr_code code_cgr_cscn_tdcp = QB_CODE(6, 0, 16);
+static struct qb_attr_code code_cgr_cscn_wqid = QB_CODE(6, 16, 16);
+static struct qb_attr_code code_cgr_cscn_vcgid = QB_CODE(7, 0, 16);
+static struct qb_attr_code code_cgr_cg_icid = QB_CODE(7, 16, 15);
+static struct qb_attr_code code_cgr_cg_pl = QB_CODE(7, 31, 1);
+static struct qb_attr_code code_cgr_cg_wr_addr_lo = QB_CODE(8, 0, 32);
+static struct qb_attr_code code_cgr_cg_wr_addr_hi = QB_CODE(9, 0, 32);
+static struct qb_attr_code code_cgr_cscn_ctx_lo = QB_CODE(10, 0, 32);
+static struct qb_attr_code code_cgr_cscn_ctx_hi = QB_CODE(11, 0, 32);
+
+void qbman_cgr_attr_clear(struct qbman_attr *a)
+{
+	memset(a, 0, sizeof(*a));
+	attr_type_set(a, qbman_attr_usage_cgr);
+}
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid, struct qbman_attr *attr)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+	uint32_t *d[2];
+	int i;
+	uint32_t query_verb;
+
+	d[0] = ATTR32(attr);
+	d[1] = ATTR32_1(attr);
+
+	qbman_cgr_attr_clear(attr);
+
+	for (i = 0; i < 2; i++) {
+		p = qbman_swp_mc_start(s);
+		if (!p)
+			return -EBUSY;
+		query_verb = i ? QBMAN_WRED_QUERY : QBMAN_CGR_QUERY;
+
+		qb_attr_code_encode(&code_cgr_cgid, p, cgid);
+		p = qbman_swp_mc_complete(s, p, p[0] | query_verb);
+
+		/* Decode the outcome */
+		verb = qb_attr_code_decode(&code_generic_verb, p);
+		rslt = qb_attr_code_decode(&code_generic_rslt, p);
+		BUG_ON(verb != query_verb);
+
+		/* Determine success or failure */
+		if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+			pr_err("Query CGID 0x%x failed, verb=0x%02x, code=0x%02x\n",
+			       cgid, verb, rslt);
+			return -EIO;
+		}
+		/* For the configure, word[0] of the command contains only the
+		 * verb/cgid. For the query, word[0] of the result contains
+		 * only the verb/rslt fields. Skip word[0] in the latter case.
+		 */
+		word_copy(&d[i][1], &p[1], 15);
+	}
+	return 0;
+}
+
+void qbman_cgr_attr_get_ctl1(struct qbman_attr *d, int *cscn_wq_en_enter,
+			     int *cscn_wq_en_exit, int *cscn_wq_icd)
+{
+	uint32_t *p = ATTR32(d);
+
+	*cscn_wq_en_enter = !!qb_attr_code_decode(&code_cgr_cscn_wq_en_enter,
+						  p);
+	*cscn_wq_en_exit = !!qb_attr_code_decode(&code_cgr_cscn_wq_en_exit, p);
+	*cscn_wq_icd = !!qb_attr_code_decode(&code_cgr_cscn_wq_icd, p);
+}
+
+void qbman_cgr_attr_get_mode(struct qbman_attr *d, uint32_t *mode,
+			     int *rej_cnt_mode, int *cscn_bdi)
+{
+	uint32_t *p = ATTR32(d);
+	*mode = qb_attr_code_decode(&code_cgr_mode, p);
+	*rej_cnt_mode = !!qb_attr_code_decode(&code_cgr_rej_cnt_mode, p);
+	*cscn_bdi = !!qb_attr_code_decode(&code_cgr_cscn_bdi, p);
+}
+
+void qbman_cgr_attr_get_ctl2(struct qbman_attr *d, int *cscn_wr_en_enter,
+			     int *cscn_wr_en_exit, int *cg_wr_ae,
+			     int *cscn_dcp_en, int *cg_wr_va)
+{
+	uint32_t *p = ATTR32(d);
+	*cscn_wr_en_enter = !!qb_attr_code_decode(&code_cgr_cscn_wr_en_enter,
+									p);
+	*cscn_wr_en_exit = !!qb_attr_code_decode(&code_cgr_cscn_wr_en_exit, p);
+	*cg_wr_ae = !!qb_attr_code_decode(&code_cgr_cg_wr_ae, p);
+	*cscn_dcp_en = !!qb_attr_code_decode(&code_cgr_cscn_dcp_en, p);
+	*cg_wr_va = !!qb_attr_code_decode(&code_cgr_cg_wr_va, p);
+}
+
+void qbman_cgr_attr_get_iwc(struct qbman_attr *d, int *i_cnt_wr_en,
+			    uint32_t *i_cnt_wr_bnd)
+{
+	uint32_t *p = ATTR32(d);
+	*i_cnt_wr_en = !!qb_attr_code_decode(&code_cgr_i_cnt_wr_en, p);
+	*i_cnt_wr_bnd = qb_attr_code_decode(&code_cgr_i_cnt_wr_bnd, p);
+}
+
+void qbman_cgr_attr_get_tdc(struct qbman_attr *d, int *td_en)
+{
+	uint32_t *p = ATTR32(d);
+	*td_en = !!qb_attr_code_decode(&code_cgr_td_en, p);
+}
+
+void qbman_cgr_attr_get_cs_thres(struct qbman_attr *d, uint32_t *cs_thres)
+{
+	uint32_t *p = ATTR32(d);
+	*cs_thres = qbman_thresh_to_value(qb_attr_code_decode(
+						&code_cgr_cs_thres, p));
+}
+
+void qbman_cgr_attr_get_cs_thres_x(struct qbman_attr *d,
+				   uint32_t *cs_thres_x)
+{
+	uint32_t *p = ATTR32(d);
+	*cs_thres_x = qbman_thresh_to_value(qb_attr_code_decode(
+					    &code_cgr_cs_thres_x, p));
+}
+
+void qbman_cgr_attr_get_td_thres(struct qbman_attr *d, uint32_t *td_thres)
+{
+	uint32_t *p = ATTR32(d);
+	*td_thres = qbman_thresh_to_value(qb_attr_code_decode(
+					  &code_cgr_td_thres, p));
+}
+
+void qbman_cgr_attr_get_cscn_tdcp(struct qbman_attr *d, uint32_t *cscn_tdcp)
+{
+	uint32_t *p = ATTR32(d);
+	*cscn_tdcp = qb_attr_code_decode(&code_cgr_cscn_tdcp, p);
+}
+
+void qbman_cgr_attr_get_cscn_wqid(struct qbman_attr *d, uint32_t *cscn_wqid)
+{
+	uint32_t *p = ATTR32(d);
+	*cscn_wqid = qb_attr_code_decode(&code_cgr_cscn_wqid, p);
+}
+
+void qbman_cgr_attr_get_cscn_vcgid(struct qbman_attr *d,
+				   uint32_t *cscn_vcgid)
+{
+	uint32_t *p = ATTR32(d);
+	*cscn_vcgid = qb_attr_code_decode(&code_cgr_cscn_vcgid, p);
+}
+
+void qbman_cgr_attr_get_cg_icid(struct qbman_attr *d, uint32_t *icid,
+				int *pl)
+{
+	uint32_t *p = ATTR32(d);
+	*icid = qb_attr_code_decode(&code_cgr_cg_icid, p);
+	*pl = !!qb_attr_code_decode(&code_cgr_cg_pl, p);
+}
+
+void qbman_cgr_attr_get_cg_wr_addr(struct qbman_attr *d,
+				   uint64_t *cg_wr_addr)
+{
+	uint32_t *p = ATTR32(d);
+	*cg_wr_addr = ((uint64_t)qb_attr_code_decode(&code_cgr_cg_wr_addr_hi,
+			p) << 32) |
+			(uint64_t)qb_attr_code_decode(&code_cgr_cg_wr_addr_lo,
+			p);
+}
+
+void qbman_cgr_attr_get_cscn_ctx(struct qbman_attr *d, uint64_t *cscn_ctx)
+{
+	uint32_t *p = ATTR32(d);
+	*cscn_ctx = ((uint64_t)qb_attr_code_decode(&code_cgr_cscn_ctx_hi, p)
+			<< 32) |
+			(uint64_t)qb_attr_code_decode(&code_cgr_cscn_ctx_lo, p);
+}
+
+#define WRED_EDP_WORD(n) (18 + (n) / 4)
+#define WRED_EDP_OFFSET(n) (8 * ((n) % 4))
+#define WRED_PARM_DP_WORD(n) ((n) + 20)
+#define WRED_WE_EDP(n) (16 + (n) * 2)
+#define WRED_WE_PARM_DP(n) (17 + (n) * 2)
+void qbman_cgr_attr_wred_get_edp(struct qbman_attr *d, uint32_t idx,
+				 int *edp)
+{
+	uint32_t *p = ATTR32(d);
+	struct qb_attr_code code_wred_edp = QB_CODE(WRED_EDP_WORD(idx),
+						WRED_EDP_OFFSET(idx), 8);
+	*edp = (int)qb_attr_code_decode(&code_wred_edp, p);
+}
+
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+				      uint64_t *maxth, uint8_t *maxp)
+{
+	uint8_t ma, mn, step_i, step_s, pn;
+
+	ma = (uint8_t)(dp >> 24);
+	mn = (uint8_t)(dp >> 19) & 0x1f;
+	step_i = (uint8_t)(dp >> 11);
+	step_s = (uint8_t)(dp >> 6) & 0x1f;
+	pn = (uint8_t)dp & 0x3f;
+
+	*maxp = ((pn<<2) * 100)/256;
+
+	if (mn == 0)
+		*maxth = ma;
+	else
+		*maxth = ((ma+256) * (1<<(mn-1)));
+
+	if (step_s == 0)
+		*minth = *maxth - step_i;
+	else
+		*minth = *maxth - (256 + step_i) * (1<<(step_s - 1));
+}
+
+void qbman_cgr_attr_wred_get_parm_dp(struct qbman_attr *d, uint32_t idx,
+				     uint32_t *dp)
+{
+	uint32_t *p = ATTR32(d);
+	struct qb_attr_code code_wred_parm_dp = QB_CODE(WRED_PARM_DP_WORD(idx),
+						0, 8);
+	*dp = qb_attr_code_decode(&code_wred_parm_dp, p);
+}
+
+/* Query CGR/CCGR/CQ statistics */
+static struct qb_attr_code code_cgr_stat_ct = QB_CODE(4, 0, 32);
+static struct qb_attr_code code_cgr_stat_frame_cnt_lo = QB_CODE(4, 0, 32);
+static struct qb_attr_code code_cgr_stat_frame_cnt_hi = QB_CODE(5, 0, 8);
+static struct qb_attr_code code_cgr_stat_byte_cnt_lo = QB_CODE(6, 0, 32);
+static struct qb_attr_code code_cgr_stat_byte_cnt_hi = QB_CODE(7, 0, 16);
+static int qbman_cgr_statistics_query(struct qbman_swp *s, uint32_t cgid,
+				      int clear, uint32_t command_type,
+				      uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+	uint32_t query_verb;
+	uint32_t hi, lo;
+
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+
+	qb_attr_code_encode(&code_cgr_cgid, p, cgid);
+	if (command_type < 2)
+		qb_attr_code_encode(&code_cgr_stat_ct, p, command_type);
+	query_verb = clear ?
+			QBMAN_CGR_STAT_QUERY_CLR : QBMAN_CGR_STAT_QUERY;
+	p = qbman_swp_mc_complete(s, p, p[0] | query_verb);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	BUG_ON(verb != query_verb);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("Query statistics of CGID 0x%x failed, verb=0x%02x code=0x%02x\n",
+		       cgid, verb, rslt);
+		return -EIO;
+	}
+
+	if (frame_cnt) {
+		hi = qb_attr_code_decode(&code_cgr_stat_frame_cnt_hi, p);
+		lo = qb_attr_code_decode(&code_cgr_stat_frame_cnt_lo, p);
+		*frame_cnt = ((uint64_t)hi << 32) | (uint64_t)lo;
+	}
+	if (byte_cnt) {
+		hi = qb_attr_code_decode(&code_cgr_stat_byte_cnt_hi, p);
+		lo = qb_attr_code_decode(&code_cgr_stat_byte_cnt_lo, p);
+		*byte_cnt = ((uint64_t)hi << 32) | (uint64_t)lo;
+	}
+
+	return 0;
+}
+
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+				uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+	return qbman_cgr_statistics_query(s, cgid, clear, 0xff,
+					  frame_cnt, byte_cnt);
+}
+
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+				 uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+	return qbman_cgr_statistics_query(s, cgid, clear, 1,
+					  frame_cnt, byte_cnt);
+}
+
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+				uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+	return qbman_cgr_statistics_query(s, cgid, clear, 0,
+					  frame_cnt, byte_cnt);
+}
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_debug.h
@@ -0,0 +1,136 @@
+/* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+struct qbman_attr {
+	uint32_t dont_manipulate_directly[40];
+};
+
+/* Buffer pool query commands */
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+		   struct qbman_attr *a);
+void qbman_bp_attr_get_bdi(struct qbman_attr *a, int *bdi, int *va, int *wae);
+void qbman_bp_attr_get_swdet(struct qbman_attr *a, uint32_t *swdet);
+void qbman_bp_attr_get_swdxt(struct qbman_attr *a, uint32_t *swdxt);
+void qbman_bp_attr_get_hwdet(struct qbman_attr *a, uint32_t *hwdet);
+void qbman_bp_attr_get_hwdxt(struct qbman_attr *a, uint32_t *hwdxt);
+void qbman_bp_attr_get_swset(struct qbman_attr *a, uint32_t *swset);
+void qbman_bp_attr_get_swsxt(struct qbman_attr *a, uint32_t *swsxt);
+void qbman_bp_attr_get_vbpid(struct qbman_attr *a, uint32_t *vbpid);
+void qbman_bp_attr_get_icid(struct qbman_attr *a, uint32_t *icid, int *pl);
+void qbman_bp_attr_get_bpscn_addr(struct qbman_attr *a, uint64_t *bpscn_addr);
+void qbman_bp_attr_get_bpscn_ctx(struct qbman_attr *a, uint64_t *bpscn_ctx);
+void qbman_bp_attr_get_hw_targ(struct qbman_attr *a, uint32_t *hw_targ);
+int qbman_bp_info_has_free_bufs(struct qbman_attr *a);
+int qbman_bp_info_is_depleted(struct qbman_attr *a);
+int qbman_bp_info_is_surplus(struct qbman_attr *a);
+uint32_t qbman_bp_info_num_free_bufs(struct qbman_attr *a);
+uint32_t qbman_bp_info_hdptr(struct qbman_attr *a);
+uint32_t qbman_bp_info_sdcnt(struct qbman_attr *a);
+uint32_t qbman_bp_info_hdcnt(struct qbman_attr *a);
+uint32_t qbman_bp_info_sscnt(struct qbman_attr *a);
+
+/* FQ query function for programmable fields */
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+		   struct qbman_attr *desc);
+void qbman_fq_attr_get_fqctrl(struct qbman_attr *d, uint32_t *fqctrl);
+void qbman_fq_attr_get_cgrid(struct qbman_attr *d, uint32_t *cgrid);
+void qbman_fq_attr_get_destwq(struct qbman_attr *d, uint32_t *destwq);
+void qbman_fq_attr_get_icscred(struct qbman_attr *d, uint32_t *icscred);
+void qbman_fq_attr_get_tdthresh(struct qbman_attr *d, uint32_t *tdthresh);
+void qbman_fq_attr_get_oa(struct qbman_attr *d,
+			  int *oa_ics, int *oa_cgr, int32_t *oa_len);
+void qbman_fq_attr_get_mctl(struct qbman_attr *d,
+			    int *bdi, int *ff, int *va, int *ps);
+void qbman_fq_attr_get_ctx(struct qbman_attr *d, uint32_t *hi, uint32_t *lo);
+void qbman_fq_attr_get_icid(struct qbman_attr *d, uint32_t *icid, int *pl);
+void qbman_fq_attr_get_vfqid(struct qbman_attr *d, uint32_t *vfqid);
+void qbman_fq_attr_get_erfqid(struct qbman_attr *d, uint32_t *erfqid);
+
+/* FQ query command for non-programmable fields */
+enum qbman_fq_schedstate_e {
+	qbman_fq_schedstate_oos = 0,
+	qbman_fq_schedstate_retired,
+	qbman_fq_schedstate_tentatively_scheduled,
+	qbman_fq_schedstate_truly_scheduled,
+	qbman_fq_schedstate_parked,
+	qbman_fq_schedstate_held_active,
+};
+
+int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
+			 struct qbman_attr *state);
+uint32_t qbman_fq_state_schedstate(const struct qbman_attr *state);
+int qbman_fq_state_force_eligible(const struct qbman_attr *state);
+int qbman_fq_state_xoff(const struct qbman_attr *state);
+int qbman_fq_state_retirement_pending(const struct qbman_attr *state);
+int qbman_fq_state_overflow_error(const struct qbman_attr *state);
+uint32_t qbman_fq_state_frame_count(const struct qbman_attr *state);
+uint32_t qbman_fq_state_byte_count(const struct qbman_attr *state);
+
+/* CGR query */
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+		    struct qbman_attr *attr);
+void qbman_cgr_attr_get_ctl1(struct qbman_attr *d, int *cscn_wq_en_enter,
+			     int *cscn_wq_en_exit, int *cscn_wq_icd);
+void qbman_cgr_attr_get_mode(struct qbman_attr *d, uint32_t *mode,
+			     int *rej_cnt_mode, int *cscn_bdi);
+void qbman_cgr_attr_get_ctl2(struct qbman_attr *d, int *cscn_wr_en_enter,
+			     int *cscn_wr_en_exit, int *cg_wr_ae,
+			     int *cscn_dcp_en, int *cg_wr_va);
+void qbman_cgr_attr_get_iwc(struct qbman_attr *d, int *i_cnt_wr_en,
+			    uint32_t *i_cnt_wr_bnd);
+void qbman_cgr_attr_get_tdc(struct qbman_attr *d, int *td_en);
+void qbman_cgr_attr_get_cs_thres(struct qbman_attr *d, uint32_t *cs_thres);
+void qbman_cgr_attr_get_cs_thres_x(struct qbman_attr *d,
+				   uint32_t *cs_thres_x);
+void qbman_cgr_attr_get_td_thres(struct qbman_attr *d, uint32_t *td_thres);
+void qbman_cgr_attr_get_cscn_tdcp(struct qbman_attr *d, uint32_t *cscn_tdcp);
+void qbman_cgr_attr_get_cscn_wqid(struct qbman_attr *d, uint32_t *cscn_wqid);
+void qbman_cgr_attr_get_cscn_vcgid(struct qbman_attr *d,
+				   uint32_t *cscn_vcgid);
+void qbman_cgr_attr_get_cg_icid(struct qbman_attr *d, uint32_t *icid,
+				int *pl);
+void qbman_cgr_attr_get_cg_wr_addr(struct qbman_attr *d,
+				   uint64_t *cg_wr_addr);
+void qbman_cgr_attr_get_cscn_ctx(struct qbman_attr *d, uint64_t *cscn_ctx);
+void qbman_cgr_attr_wred_get_edp(struct qbman_attr *d, uint32_t idx,
+				 int *edp);
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+				      uint64_t *maxth, uint8_t *maxp);
+void qbman_cgr_attr_wred_get_parm_dp(struct qbman_attr *d, uint32_t idx,
+				     uint32_t *dp);
+
+/* CGR/CCGR/CQ statistics query */
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+				uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+				 uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+				uint64_t *frame_cnt, uint64_t *byte_cnt);
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_portal.c
@@ -0,0 +1,1212 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qbman_portal.h"
+
+/* QBMan portal management command codes */
+#define QBMAN_MC_ACQUIRE       0x30
+#define QBMAN_WQCHAN_CONFIGURE 0x46
+
+/* CINH register offsets */
+#define QBMAN_CINH_SWP_EQAR    0x8c0
+#define QBMAN_CINH_SWP_DQPI    0xa00
+#define QBMAN_CINH_SWP_DCAP    0xac0
+#define QBMAN_CINH_SWP_SDQCR   0xb00
+#define QBMAN_CINH_SWP_RAR     0xcc0
+#define QBMAN_CINH_SWP_ISR     0xe00
+#define QBMAN_CINH_SWP_IER     0xe40
+#define QBMAN_CINH_SWP_ISDR    0xe80
+#define QBMAN_CINH_SWP_IIR     0xec0
+
+/* CENA register offsets */
+#define QBMAN_CENA_SWP_EQCR(n) (0x000 + ((uint32_t)(n) << 6))
+#define QBMAN_CENA_SWP_DQRR(n) (0x200 + ((uint32_t)(n) << 6))
+#define QBMAN_CENA_SWP_RCR(n)  (0x400 + ((uint32_t)(n) << 6))
+#define QBMAN_CENA_SWP_CR      0x600
+#define QBMAN_CENA_SWP_RR(vb)  (0x700 + ((uint32_t)(vb) >> 1))
+#define QBMAN_CENA_SWP_VDQCR   0x780
+
+/* Reverse mapping of QBMAN_CENA_SWP_DQRR() */
+#define QBMAN_IDX_FROM_DQRR(p) (((unsigned long)p & 0x1ff) >> 6)
+
+/* QBMan FQ management command codes */
+#define QBMAN_FQ_SCHEDULE	0x48
+#define QBMAN_FQ_FORCE		0x49
+#define QBMAN_FQ_XON		0x4d
+#define QBMAN_FQ_XOFF		0x4e
+
+/*******************************/
+/* Pre-defined attribute codes */
+/*******************************/
+
+struct qb_attr_code code_generic_verb = QB_CODE(0, 0, 7);
+struct qb_attr_code code_generic_rslt = QB_CODE(0, 8, 8);
+
+/*************************/
+/* SDQCR attribute codes */
+/*************************/
+
+/* we put these here because at least some of them are required by
+ * qbman_swp_init() */
+struct qb_attr_code code_sdqcr_dct = QB_CODE(0, 24, 2);
+struct qb_attr_code code_sdqcr_fc = QB_CODE(0, 29, 1);
+struct qb_attr_code code_sdqcr_tok = QB_CODE(0, 16, 8);
+#define CODE_SDQCR_DQSRC(n) QB_CODE(0, n, 1)
+enum qbman_sdqcr_dct {
+	qbman_sdqcr_dct_null = 0,
+	qbman_sdqcr_dct_prio_ics,
+	qbman_sdqcr_dct_active_ics,
+	qbman_sdqcr_dct_active
+};
+enum qbman_sdqcr_fc {
+	qbman_sdqcr_fc_one = 0,
+	qbman_sdqcr_fc_up_to_3 = 1
+};
+struct qb_attr_code code_sdqcr_dqsrc = QB_CODE(0, 0, 16);
+
+/*********************************/
+/* Portal constructor/destructor */
+/*********************************/
+
+/* Software portals should always be in the power-on state when we initialise,
+ * due to the CCSR-based portal reset functionality that MC has.
+ *
+ * Erk! Turns out that QMan versions prior to 4.1 do not correctly reset DQRR
+ * valid-bits, so we need to support a workaround where we don't trust
+ * valid-bits when detecting new entries until any stale ring entries have been
+ * overwritten at least once. The idea is that we read PI for the first few
+ * entries, then switch to valid-bit after that. The trick is to clear the
+ * bug-work-around boolean once the PI wraps around the ring for the first time.
+ *
+ * Note: this still carries a slight additional cost once the decrementer hits
+ * zero, so ideally the workaround should only be compiled in if the compiled
+ * image needs to support affected chips. We use WORKAROUND_DQRR_RESET_BUG for
+ * this.
+ */
+struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
+{
+	int ret;
+	struct qbman_swp *p = kmalloc(sizeof(*p), GFP_KERNEL);
+
+	if (!p)
+		return NULL;
+	p->desc = d;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	p->mc.valid_bit = QB_VALID_BIT;
+	p->sdq = 0;
+	qb_attr_code_encode(&code_sdqcr_dct, &p->sdq, qbman_sdqcr_dct_prio_ics);
+	qb_attr_code_encode(&code_sdqcr_fc, &p->sdq, qbman_sdqcr_fc_up_to_3);
+	qb_attr_code_encode(&code_sdqcr_tok, &p->sdq, 0xbb);
+	atomic_set(&p->vdq.busy, 1);
+	p->vdq.valid_bit = QB_VALID_BIT;
+	p->dqrr.next_idx = 0;
+	p->dqrr.valid_bit = QB_VALID_BIT;
+	/* TODO: should also read PI/CI type registers and check that they're on
+	 * PoR values. If we're asked to initialise portals that aren't in reset
+	 * state, bad things will follow. */
+#ifdef WORKAROUND_DQRR_RESET_BUG
+	p->dqrr.reset_bug = 1;
+#endif
+	if ((p->desc->qman_version & 0xFFFF0000) < QMAN_REV_4100)
+		p->dqrr.dqrr_size = 4;
+	else
+		p->dqrr.dqrr_size = 8;
+	ret = qbman_swp_sys_init(&p->sys, d, p->dqrr.dqrr_size);
+	if (ret) {
+		kfree(p);
+		pr_err("qbman_swp_sys_init() failed %d\n", ret);
+		return NULL;
+	}
+	/* SDQCR needs to be initialized to 0 when no channels are
+	   being dequeued from or else the QMan HW will indicate an
+	   error.  The values that were calculated above will be
+	   applied when dequeues from a specific channel are enabled */
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_SDQCR, 0);
+	return p;
+}
+
+void qbman_swp_finish(struct qbman_swp *p)
+{
+#ifdef QBMAN_CHECKING
+	BUG_ON(p->mc.check != swp_mc_can_start);
+#endif
+	qbman_swp_sys_finish(&p->sys);
+	kfree(p);
+}
+
+const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p)
+{
+	return p->desc;
+}
+
+/**************/
+/* Interrupts */
+/**************/
+
+uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISDR);
+}
+
+void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISDR, mask);
+}
+
+uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
+}
+
+void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
+}
+
+uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IER);
+}
+
+void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IER, mask);
+}
+
+int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p)
+{
+	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IIR);
+}
+
+void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit)
+{
+	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IIR, inhibit ? 0xffffffff : 0);
+}
+
+/***********************/
+/* Management commands */
+/***********************/
+
+/*
+ * Internal code common to all types of management commands.
+ */
+
+void *qbman_swp_mc_start(struct qbman_swp *p)
+{
+	void *ret;
+#ifdef QBMAN_CHECKING
+	BUG_ON(p->mc.check != swp_mc_can_start);
+#endif
+	ret = qbman_cena_write_start(&p->sys, QBMAN_CENA_SWP_CR);
+#ifdef QBMAN_CHECKING
+	if (ret)
+		p->mc.check = swp_mc_can_submit;
+#endif
+	return ret;
+}
+
+void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint32_t cmd_verb)
+{
+	uint32_t *v = cmd;
+#ifdef QBMAN_CHECKING
+	BUG_ON(p->mc.check != swp_mc_can_submit);
+#endif
+	/* TBD: "|=" is going to hurt performance. Need to move as many fields
+	 * out of word zero, and for those that remain, the "OR" needs to occur
+	 * at the caller side. This debug check helps to catch cases where the
+	 * caller wants to OR but has forgotten to do so. */
+	BUG_ON((*v & cmd_verb) != *v);
+	*v = cmd_verb | p->mc.valid_bit;
+	qbman_cena_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_poll;
+#endif
+}
+
+void *qbman_swp_mc_result(struct qbman_swp *p)
+{
+	uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+	BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+	qbman_cena_invalidate_prefetch(&p->sys,
+				 QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	ret = qbman_cena_read(&p->sys, QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	/* Remove the valid-bit - command completed iff the rest is non-zero */
+	verb = ret[0] & ~QB_VALID_BIT;
+	if (!verb)
+		return NULL;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	p->mc.valid_bit ^= QB_VALID_BIT;
+	return ret;
+}
+
+/***********/
+/* Enqueue */
+/***********/
+
+/* These should be const, eventually */
+static struct qb_attr_code code_eq_cmd = QB_CODE(0, 0, 2);
+static struct qb_attr_code code_eq_eqdi = QB_CODE(0, 3, 1);
+static struct qb_attr_code code_eq_dca_en = QB_CODE(0, 15, 1);
+static struct qb_attr_code code_eq_dca_pk = QB_CODE(0, 14, 1);
+static struct qb_attr_code code_eq_dca_idx = QB_CODE(0, 8, 2);
+static struct qb_attr_code code_eq_orp_en = QB_CODE(0, 2, 1);
+static struct qb_attr_code code_eq_orp_is_nesn = QB_CODE(0, 31, 1);
+static struct qb_attr_code code_eq_orp_nlis = QB_CODE(0, 30, 1);
+static struct qb_attr_code code_eq_orp_seqnum = QB_CODE(0, 16, 14);
+static struct qb_attr_code code_eq_opr_id = QB_CODE(1, 0, 16);
+static struct qb_attr_code code_eq_tgt_id = QB_CODE(2, 0, 24);
+/* static struct qb_attr_code code_eq_tag = QB_CODE(3, 0, 32); */
+static struct qb_attr_code code_eq_qd_en = QB_CODE(0, 4, 1);
+static struct qb_attr_code code_eq_qd_bin = QB_CODE(4, 0, 16);
+static struct qb_attr_code code_eq_qd_pri = QB_CODE(4, 16, 4);
+static struct qb_attr_code code_eq_rsp_stash = QB_CODE(5, 16, 1);
+static struct qb_attr_code code_eq_rsp_id = QB_CODE(5, 24, 8);
+static struct qb_attr_code code_eq_rsp_lo = QB_CODE(6, 0, 32);
+
+enum qbman_eq_cmd_e {
+	/* No enqueue, primarily for plugging ORP gaps for dropped frames */
+	qbman_eq_cmd_empty,
+	/* DMA an enqueue response once complete */
+	qbman_eq_cmd_respond,
+	/* DMA an enqueue response only if the enqueue fails */
+	qbman_eq_cmd_respond_reject
+};
+
+void qbman_eq_desc_clear(struct qbman_eq_desc *d)
+{
+	memset(d, 0, sizeof(*d));
+}
+
+void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 0);
+	qb_attr_code_encode(&code_eq_cmd, cl,
+			    respond_success ? qbman_eq_cmd_respond :
+					      qbman_eq_cmd_respond_reject);
+}
+
+void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
+			   uint32_t opr_id, uint32_t seqnum, int incomplete)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 1);
+	qb_attr_code_encode(&code_eq_cmd, cl,
+			    respond_success ? qbman_eq_cmd_respond :
+					      qbman_eq_cmd_respond_reject);
+	qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
+	qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
+	qb_attr_code_encode(&code_eq_orp_nlis, cl, !!incomplete);
+}
+
+void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 1);
+	qb_attr_code_encode(&code_eq_cmd, cl, qbman_eq_cmd_empty);
+	qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
+	qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
+	qb_attr_code_encode(&code_eq_orp_nlis, cl, 0);
+	qb_attr_code_encode(&code_eq_orp_is_nesn, cl, 0);
+}
+
+void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint32_t opr_id,
+				uint32_t seqnum)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_orp_en, cl, 1);
+	qb_attr_code_encode(&code_eq_cmd, cl, qbman_eq_cmd_empty);
+	qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
+	qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
+	qb_attr_code_encode(&code_eq_orp_nlis, cl, 0);
+	qb_attr_code_encode(&code_eq_orp_is_nesn, cl, 1);
+}
+
+void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
+				dma_addr_t storage_phys,
+				int stash)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode_64(&code_eq_rsp_lo, (uint64_t *)cl, storage_phys);
+	qb_attr_code_encode(&code_eq_rsp_stash, cl, !!stash);
+}
+
+void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_rsp_id, cl, (uint32_t)token);
+}
+
+void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_qd_en, cl, 0);
+	qb_attr_code_encode(&code_eq_tgt_id, cl, fqid);
+}
+
+void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
+			  uint32_t qd_bin, uint32_t qd_prio)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_qd_en, cl, 1);
+	qb_attr_code_encode(&code_eq_tgt_id, cl, qdid);
+	qb_attr_code_encode(&code_eq_qd_bin, cl, qd_bin);
+	qb_attr_code_encode(&code_eq_qd_pri, cl, qd_prio);
+}
+
+void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_eqdi, cl, !!enable);
+}
+
+void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
+				uint32_t dqrr_idx, int park)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_eq_dca_en, cl, !!enable);
+	if (enable) {
+		qb_attr_code_encode(&code_eq_dca_pk, cl, !!park);
+		qb_attr_code_encode(&code_eq_dca_idx, cl, dqrr_idx);
+	}
+}
+
+#define EQAR_IDX(eqar)     ((eqar) & 0x7)
+#define EQAR_VB(eqar)      ((eqar) & 0x80)
+#define EQAR_SUCCESS(eqar) ((eqar) & 0x100)
+
+int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
+		      const struct qbman_fd *fd)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_EQAR);
+
+	pr_debug("EQAR=%08x\n", eqar);
+	if (!EQAR_SUCCESS(eqar))
+		return -EBUSY;
+	p = qbman_cena_write_start(&s->sys,
+				   QBMAN_CENA_SWP_EQCR(EQAR_IDX(eqar)));
+	word_copy(&p[1], &cl[1], 7);
+	word_copy(&p[8], fd, sizeof(*fd) >> 2);
+	/* Set the verb byte, have to substitute in the valid-bit */
+	p[0] = cl[0] | EQAR_VB(eqar);
+	qbman_cena_write_complete(&s->sys,
+				  QBMAN_CENA_SWP_EQCR(EQAR_IDX(eqar)),
+				  p);
+	return 0;
+}
+
+/*************************/
+/* Static (push) dequeue */
+/*************************/
+
+void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
+{
+	struct qb_attr_code code = CODE_SDQCR_DQSRC(channel_idx);
+
+	BUG_ON(channel_idx > 15);
+	*enabled = (int)qb_attr_code_decode(&code, &s->sdq);
+}
+
+void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
+{
+	uint16_t dqsrc;
+	struct qb_attr_code code = CODE_SDQCR_DQSRC(channel_idx);
+
+	BUG_ON(channel_idx > 15);
+	qb_attr_code_encode(&code, &s->sdq, !!enable);
+	/* Read back the complete source map.  If no channels are enabled,
+	   the SDQCR must be 0 or else QMan will assert errors */
+	dqsrc = (uint16_t)qb_attr_code_decode(&code_sdqcr_dqsrc, &s->sdq);
+	if (dqsrc != 0)
+		qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_SDQCR, s->sdq);
+	else
+		qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_SDQCR, 0);
+}
+
+/***************************/
+/* Volatile (pull) dequeue */
+/***************************/
+
+/* These should be const, eventually */
+static struct qb_attr_code code_pull_dct = QB_CODE(0, 0, 2);
+static struct qb_attr_code code_pull_dt = QB_CODE(0, 2, 2);
+static struct qb_attr_code code_pull_rls = QB_CODE(0, 4, 1);
+static struct qb_attr_code code_pull_stash = QB_CODE(0, 5, 1);
+static struct qb_attr_code code_pull_numframes = QB_CODE(0, 8, 4);
+static struct qb_attr_code code_pull_token = QB_CODE(0, 16, 8);
+static struct qb_attr_code code_pull_dqsource = QB_CODE(1, 0, 24);
+static struct qb_attr_code code_pull_rsp_lo = QB_CODE(2, 0, 32);
+
+enum qb_pull_dt_e {
+	qb_pull_dt_channel,
+	qb_pull_dt_workqueue,
+	qb_pull_dt_framequeue
+};
+
+void qbman_pull_desc_clear(struct qbman_pull_desc *d)
+{
+	memset(d, 0, sizeof(*d));
+}
+
+void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
+				 struct dpaa2_dq *storage,
+				 dma_addr_t storage_phys,
+				 int stash)
+{
+	uint32_t *cl = qb_cl(d);
+
+	/* Squiggle the pointer 'storage' into the extra 2 words of the
+	 * descriptor (which aren't copied to the hw command) */
+	*(void **)&cl[4] = storage;
+	if (!storage) {
+		qb_attr_code_encode(&code_pull_rls, cl, 0);
+		return;
+	}
+	qb_attr_code_encode(&code_pull_rls, cl, 1);
+	qb_attr_code_encode(&code_pull_stash, cl, !!stash);
+	qb_attr_code_encode_64(&code_pull_rsp_lo, (uint64_t *)cl, storage_phys);
+}
+
+void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, uint8_t numframes)
+{
+	uint32_t *cl = qb_cl(d);
+
+	BUG_ON(!numframes || (numframes > 16));
+	qb_attr_code_encode(&code_pull_numframes, cl,
+				(uint32_t)(numframes - 1));
+}
+
+void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_token, cl, token);
+}
+
+void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_dct, cl, 1);
+	qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_framequeue);
+	qb_attr_code_encode(&code_pull_dqsource, cl, fqid);
+}
+
+void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
+			    enum qbman_pull_type_e dct)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_dct, cl, dct);
+	qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_workqueue);
+	qb_attr_code_encode(&code_pull_dqsource, cl, wqid);
+}
+
+void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
+				 enum qbman_pull_type_e dct)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_pull_dct, cl, dct);
+	qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_channel);
+	qb_attr_code_encode(&code_pull_dqsource, cl, chid);
+}
+
+int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
+{
+	uint32_t *p;
+	uint32_t *cl = qb_cl(d);
+
+	if (!atomic_dec_and_test(&s->vdq.busy)) {
+		atomic_inc(&s->vdq.busy);
+		return -EBUSY;
+	}
+	s->vdq.storage = *(void **)&cl[4];
+	qb_attr_code_encode(&code_pull_token, cl, 1);
+	p = qbman_cena_write_start(&s->sys, QBMAN_CENA_SWP_VDQCR);
+	word_copy(&p[1], &cl[1], 3);
+	/* Set the verb byte, have to substitute in the valid-bit */
+	p[0] = cl[0] | s->vdq.valid_bit;
+	s->vdq.valid_bit ^= QB_VALID_BIT;
+	qbman_cena_write_complete(&s->sys, QBMAN_CENA_SWP_VDQCR, p);
+	return 0;
+}
+
+/****************/
+/* Polling DQRR */
+/****************/
+
+static struct qb_attr_code code_dqrr_verb = QB_CODE(0, 0, 8);
+static struct qb_attr_code code_dqrr_response = QB_CODE(0, 0, 7);
+static struct qb_attr_code code_dqrr_stat = QB_CODE(0, 8, 8);
+static struct qb_attr_code code_dqrr_seqnum = QB_CODE(0, 16, 14);
+static struct qb_attr_code code_dqrr_odpid = QB_CODE(1, 0, 16);
+/* static struct qb_attr_code code_dqrr_tok = QB_CODE(1, 24, 8); */
+static struct qb_attr_code code_dqrr_fqid = QB_CODE(2, 0, 24);
+static struct qb_attr_code code_dqrr_byte_count = QB_CODE(4, 0, 32);
+static struct qb_attr_code code_dqrr_frame_count = QB_CODE(5, 0, 24);
+static struct qb_attr_code code_dqrr_ctx_lo = QB_CODE(6, 0, 32);
+
+#define QBMAN_RESULT_DQ        0x60
+#define QBMAN_RESULT_FQRN      0x21
+#define QBMAN_RESULT_FQRNI     0x22
+#define QBMAN_RESULT_FQPN      0x24
+#define QBMAN_RESULT_FQDAN     0x25
+#define QBMAN_RESULT_CDAN      0x26
+#define QBMAN_RESULT_CSCN_MEM  0x27
+#define QBMAN_RESULT_CGCU      0x28
+#define QBMAN_RESULT_BPSCN     0x29
+#define QBMAN_RESULT_CSCN_WQ   0x2a
+
+static struct qb_attr_code code_dqpi_pi = QB_CODE(0, 0, 4);
+
+/* NULL return if there are no unconsumed DQRR entries. Returns a DQRR entry
+ * only once, so repeated calls can return a sequence of DQRR entries, without
+ * requiring that they be consumed immediately or in any particular order. */
+const struct dpaa2_dq *qbman_swp_dqrr_next(struct qbman_swp *s)
+{
+	uint32_t verb;
+	uint32_t response_verb;
+	uint32_t flags;
+	const struct dpaa2_dq *dq;
+	const uint32_t *p;
+
+	/* Before using valid-bit to detect if something is there, we have to
+	 * handle the case of the DQRR reset bug... */
+#ifdef WORKAROUND_DQRR_RESET_BUG
+	if (unlikely(s->dqrr.reset_bug)) {
+		/* We pick up new entries by cache-inhibited producer index,
+		 * which means that a non-coherent mapping would require us to
+		 * invalidate and read *only* once that PI has indicated that
+		 * there's an entry here. The first trip around the DQRR ring
+		 * will be much less efficient than all subsequent trips around
+		 * it...
+		 */
+		uint32_t dqpi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI);
+		uint32_t pi = qb_attr_code_decode(&code_dqpi_pi, &dqpi);
+		/* there are new entries iff pi != next_idx */
+		if (pi == s->dqrr.next_idx)
+			return NULL;
+		/* if next_idx is/was the last ring index, and 'pi' is
+		 * different, we can disable the workaround as all the ring
+		 * entries have now been DMA'd to so valid-bit checking is
+		 * repaired. Note: this logic needs to be based on next_idx
+		 * (which increments one at a time), rather than on pi (which
+		 * can burst and wrap-around between our snapshots of it).
+		 */
+		if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1)) {
+			pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
+				s->dqrr.next_idx, pi);
+			s->dqrr.reset_bug = 0;
+		}
+		qbman_cena_invalidate_prefetch(&s->sys,
+					QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+	}
+#endif
+
+	dq = qbman_cena_read(&s->sys, QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+	p = qb_cl(dq);
+	verb = qb_attr_code_decode(&code_dqrr_verb, p);
+
+	/* If the valid-bit isn't of the expected polarity, nothing there. Note,
+	 * in the DQRR reset bug workaround, we shouldn't need to skip this
+	 * check, because we've already determined that a new entry is available
+	 * and we've invalidated the cacheline before reading it, so the
+	 * valid-bit behaviour is repaired and should tell us what we already
+	 * knew from reading PI.
+	 */
+	if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit) {
+		qbman_cena_invalidate_prefetch(&s->sys,
+					QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+		return NULL;
+	}
+	/* There's something there. Advance "next_idx" to the next ring
+	 * entry (and prefetch it) before returning what we found. */
+	s->dqrr.next_idx++;
+	s->dqrr.next_idx &= s->dqrr.dqrr_size - 1; /* Wrap around */
+	/* TODO: it's possible to do all this without conditionals, optimise it
+	 * later. */
+	if (!s->dqrr.next_idx)
+		s->dqrr.valid_bit ^= QB_VALID_BIT;
+
+	/* If this is the final response to a volatile dequeue command
+	   indicate that the vdq is no longer busy */
+	flags = dpaa2_dq_flags(dq);
+	response_verb = qb_attr_code_decode(&code_dqrr_response, &verb);
+	if ((response_verb == QBMAN_RESULT_DQ) &&
+	    (flags & DPAA2_DQ_STAT_VOLATILE) &&
+	    (flags & DPAA2_DQ_STAT_EXPIRED))
+		atomic_inc(&s->vdq.busy);
+
+	qbman_cena_invalidate_prefetch(&s->sys,
+				       QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+	return dq;
+}
+
+/* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
+void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct dpaa2_dq *dq)
+{
+	qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_DCAP, QBMAN_IDX_FROM_DQRR(dq));
+}
+
+/*********************************/
+/* Polling user-provided storage */
+/*********************************/
+
+int qbman_result_has_new_result(struct qbman_swp *s,
+				  const struct dpaa2_dq *dq)
+{
+	/* To avoid converting the little-endian DQ entry to host-endian prior
+	 * to us knowing whether there is a valid entry or not (and run the
+	 * risk of corrupting the incoming hardware LE write), we detect in
+	 * hardware endianness rather than host. This means we need a different
+	 * "code" depending on whether we are BE or LE in software, which is
+	 * where DQRR_TOK_OFFSET comes in... */
+	static struct qb_attr_code code_dqrr_tok_detect =
+					QB_CODE(0, DQRR_TOK_OFFSET, 8);
+	/* The user trying to poll for a result treats "dq" as const. It is
+	 * however the same address that was provided to us non-const in the
+	 * first place, for directing hardware DMA to. So we can cast away the
+	 * const because it is mutable from our perspective. */
+	uint32_t *p = qb_cl((struct dpaa2_dq *)dq);
+	uint32_t token;
+
+	token = qb_attr_code_decode(&code_dqrr_tok_detect, &p[1]);
+	if (token != 1)
+		return 0;
+	qb_attr_code_encode(&code_dqrr_tok_detect, &p[1], 0);
+
+	/* Only now do we convert from hardware to host endianness. Also, as we
+	 * are returning success, the user has promised not to call us again, so
+	 * there's no risk of us converting the endianness twice... */
+	make_le32_n(p, 16);
+
+	/* VDQCR "no longer busy" hook - not quite the same as DQRR, because the
+	 * fact "VDQCR" shows busy doesn't mean that the result we're looking at
+	 * is from the same command. Eg. we may be looking at our 10th dequeue
+	 * result from our first VDQCR command, yet the second dequeue command
+	 * could have been kicked off already, after seeing the 1st result. Ie.
+	 * the result we're looking at is not necessarily proof that we can
+	 * reset "busy".  We instead base the decision on whether the current
+	 * result is sitting at the first 'storage' location of the busy
+	 * command. */
+	if (s->vdq.storage == dq) {
+		s->vdq.storage = NULL;
+		atomic_inc(&s->vdq.busy);
+	}
+	return 1;
+}
+
+/********************************/
+/* Categorising qbman_result */
+/********************************/
+
+static struct qb_attr_code code_result_in_mem =
+			QB_CODE(0, QBMAN_RESULT_VERB_OFFSET_IN_MEM, 7);
+
+static inline int __qbman_result_is_x(const struct dpaa2_dq *dq, uint32_t x)
+{
+	const uint32_t *p = qb_cl(dq);
+	uint32_t response_verb = qb_attr_code_decode(&code_dqrr_response, p);
+
+	return response_verb == x;
+}
+
+static inline int __qbman_result_is_x_in_mem(const struct dpaa2_dq *dq,
+					     uint32_t x)
+{
+	const uint32_t *p = qb_cl(dq);
+	uint32_t response_verb = qb_attr_code_decode(&code_result_in_mem, p);
+
+	return (response_verb == x);
+}
+
+int qbman_result_is_DQ(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_DQ);
+}
+
+int qbman_result_is_FQDAN(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_FQDAN);
+}
+
+int qbman_result_is_CDAN(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_CDAN);
+}
+
+int qbman_result_is_CSCN(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_CSCN_MEM) ||
+		__qbman_result_is_x(dq, QBMAN_RESULT_CSCN_WQ);
+}
+
+int qbman_result_is_BPSCN(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_BPSCN);
+}
+
+int qbman_result_is_CGCU(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_CGCU);
+}
+
+int qbman_result_is_FQRN(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_FQRN);
+}
+
+int qbman_result_is_FQRNI(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_FQRNI);
+}
+
+int qbman_result_is_FQPN(const struct dpaa2_dq *dq)
+{
+	return __qbman_result_is_x(dq, QBMAN_RESULT_FQPN);
+}
+
+/*********************************/
+/* Parsing frame dequeue results */
+/*********************************/
+
+/* These APIs assume qbman_result_is_DQ() is TRUE */
+
+uint32_t dpaa2_dq_flags(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_stat, p);
+}
+
+uint16_t dpaa2_dq_seqnum(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return (uint16_t)qb_attr_code_decode(&code_dqrr_seqnum, p);
+}
+
+uint16_t dpaa2_dq_odpid(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return (uint16_t)qb_attr_code_decode(&code_dqrr_odpid, p);
+}
+
+uint32_t dpaa2_dq_fqid(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_fqid, p);
+}
+
+uint32_t dpaa2_dq_byte_count(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_byte_count, p);
+}
+
+uint32_t dpaa2_dq_frame_count(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return qb_attr_code_decode(&code_dqrr_frame_count, p);
+}
+
+uint64_t dpaa2_dq_fqd_ctx(const struct dpaa2_dq *dq)
+{
+	const uint64_t *p = (uint64_t *)qb_cl(dq);
+
+	return qb_attr_code_decode_64(&code_dqrr_ctx_lo, p);
+}
+EXPORT_SYMBOL(dpaa2_dq_fqd_ctx);
+
+const struct dpaa2_fd *dpaa2_dq_fd(const struct dpaa2_dq *dq)
+{
+	const uint32_t *p = qb_cl(dq);
+
+	return (const struct dpaa2_fd *)&p[8];
+}
+EXPORT_SYMBOL(dpaa2_dq_fd);
+
+/**************************************/
+/* Parsing state-change notifications */
+/**************************************/
+
+static struct qb_attr_code code_scn_state = QB_CODE(0, 16, 8);
+static struct qb_attr_code code_scn_rid = QB_CODE(1, 0, 24);
+static struct qb_attr_code code_scn_state_in_mem =
+			QB_CODE(0, SCN_STATE_OFFSET_IN_MEM, 8);
+static struct qb_attr_code code_scn_rid_in_mem =
+			QB_CODE(1, SCN_RID_OFFSET_IN_MEM, 24);
+static struct qb_attr_code code_scn_ctx_lo = QB_CODE(2, 0, 32);
+
+uint8_t qbman_result_SCN_state(const struct dpaa2_dq *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+
+	return (uint8_t)qb_attr_code_decode(&code_scn_state, p);
+}
+
+uint32_t qbman_result_SCN_rid(const struct dpaa2_dq *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+
+	return qb_attr_code_decode(&code_scn_rid, p);
+}
+
+uint64_t qbman_result_SCN_ctx(const struct dpaa2_dq *scn)
+{
+	const uint64_t *p = (uint64_t *)qb_cl(scn);
+
+	return qb_attr_code_decode_64(&code_scn_ctx_lo, p);
+}
+
+uint8_t qbman_result_SCN_state_in_mem(const struct dpaa2_dq *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+
+	return (uint8_t)qb_attr_code_decode(&code_scn_state_in_mem, p);
+}
+
+uint32_t qbman_result_SCN_rid_in_mem(const struct dpaa2_dq *scn)
+{
+	const uint32_t *p = qb_cl(scn);
+	uint32_t result_rid;
+
+	result_rid = qb_attr_code_decode(&code_scn_rid_in_mem, p);
+	return make_le24(result_rid);
+}
+
+/*****************/
+/* Parsing BPSCN */
+/*****************/
+uint16_t qbman_result_bpscn_bpid(const struct dpaa2_dq *scn)
+{
+	return (uint16_t)qbman_result_SCN_rid_in_mem(scn) & 0x3FFF;
+}
+
+int qbman_result_bpscn_has_free_bufs(const struct dpaa2_dq *scn)
+{
+	return !(int)(qbman_result_SCN_state_in_mem(scn) & 0x1);
+}
+
+int qbman_result_bpscn_is_depleted(const struct dpaa2_dq *scn)
+{
+	return (int)(qbman_result_SCN_state_in_mem(scn) & 0x2);
+}
+
+int qbman_result_bpscn_is_surplus(const struct dpaa2_dq *scn)
+{
+	return (int)(qbman_result_SCN_state_in_mem(scn) & 0x4);
+}
+
+uint64_t qbman_result_bpscn_ctx(const struct dpaa2_dq *scn)
+{
+	return qbman_result_SCN_ctx(scn);
+}
+
+/*****************/
+/* Parsing CGCU  */
+/*****************/
+uint16_t qbman_result_cgcu_cgid(const struct dpaa2_dq *scn)
+{
+	return (uint16_t)qbman_result_SCN_rid_in_mem(scn) & 0xFFFF;
+}
+
+uint64_t qbman_result_cgcu_icnt(const struct dpaa2_dq *scn)
+{
+	return qbman_result_SCN_ctx(scn) & 0xFFFFFFFFFF;
+}
+
+/******************/
+/* Buffer release */
+/******************/
+
+/* These should be const, eventually */
+/* static struct qb_attr_code code_release_num = QB_CODE(0, 0, 3); */
+static struct qb_attr_code code_release_set_me = QB_CODE(0, 5, 1);
+static struct qb_attr_code code_release_rcdi = QB_CODE(0, 6, 1);
+static struct qb_attr_code code_release_bpid = QB_CODE(0, 16, 16);
+
+void qbman_release_desc_clear(struct qbman_release_desc *d)
+{
+	uint32_t *cl;
+
+	memset(d, 0, sizeof(*d));
+	cl = qb_cl(d);
+	qb_attr_code_encode(&code_release_set_me, cl, 1);
+}
+
+void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint32_t bpid)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_release_bpid, cl, bpid);
+}
+
+void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable)
+{
+	uint32_t *cl = qb_cl(d);
+
+	qb_attr_code_encode(&code_release_rcdi, cl, !!enable);
+}
+
+#define RAR_IDX(rar)     ((rar) & 0x7)
+#define RAR_VB(rar)      ((rar) & 0x80)
+#define RAR_SUCCESS(rar) ((rar) & 0x100)
+
+int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
+		      const uint64_t *buffers, unsigned int num_buffers)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
+
+	pr_debug("RAR=%08x\n", rar);
+	if (!RAR_SUCCESS(rar))
+		return -EBUSY;
+	BUG_ON(!num_buffers || (num_buffers > 7));
+	/* Start the release command */
+	p = qbman_cena_write_start(&s->sys,
+				   QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+	/* Copy the caller's buffer pointers to the command */
+	u64_to_le32_copy(&p[2], buffers, num_buffers);
+	/* Set the verb byte, substituting in the valid-bit and the number
+	 * of buffers. */
+	p[0] = cl[0] | RAR_VB(rar) | num_buffers;
+	qbman_cena_write_complete(&s->sys,
+				  QBMAN_CENA_SWP_RCR(RAR_IDX(rar)),
+				  p);
+	return 0;
+}
+
+/*******************/
+/* Buffer acquires */
+/*******************/
+
+/* These should be const, eventually */
+static struct qb_attr_code code_acquire_bpid = QB_CODE(0, 16, 16);
+static struct qb_attr_code code_acquire_num = QB_CODE(1, 0, 3);
+static struct qb_attr_code code_acquire_r_num = QB_CODE(1, 0, 3);
+
+int qbman_swp_acquire(struct qbman_swp *s, uint32_t bpid, uint64_t *buffers,
+		      unsigned int num_buffers)
+{
+	uint32_t *p;
+	uint32_t verb, rslt, num;
+
+	BUG_ON(!num_buffers || (num_buffers > 7));
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	qb_attr_code_encode(&code_acquire_bpid, p, bpid);
+	qb_attr_code_encode(&code_acquire_num, p, num_buffers);
+
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_MC_ACQUIRE);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	num = qb_attr_code_decode(&code_acquire_r_num, p);
+	BUG_ON(verb != QBMAN_MC_ACQUIRE);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
+								bpid, rslt);
+		return -EIO;
+	}
+	BUG_ON(num > num_buffers);
+	/* Copy the acquired buffers to the caller's array */
+	u64_from_le32_copy(buffers, &p[2], num);
+	return (int)num;
+}
+
+/*****************/
+/* FQ management */
+/*****************/
+
+static struct qb_attr_code code_fqalt_fqid = QB_CODE(1, 0, 32);
+
+static int qbman_swp_alt_fq_state(struct qbman_swp *s, uint32_t fqid,
+				 uint8_t alt_fq_verb)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+
+	qb_attr_code_encode(&code_fqalt_fqid, p, fqid);
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | alt_fq_verb);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	BUG_ON(verb != alt_fq_verb);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("ALT FQID %d failed: verb = 0x%08x, code = 0x%02x\n",
+		       fqid, alt_fq_verb, rslt);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_SCHEDULE);
+}
+
+int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_FORCE);
+}
+
+int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XON);
+}
+
+int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid)
+{
+	return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XOFF);
+}
+
+/**********************/
+/* Channel management */
+/**********************/
+
+static struct qb_attr_code code_cdan_cid = QB_CODE(0, 16, 12);
+static struct qb_attr_code code_cdan_we = QB_CODE(1, 0, 8);
+static struct qb_attr_code code_cdan_en = QB_CODE(1, 8, 1);
+static struct qb_attr_code code_cdan_ctx_lo = QB_CODE(2, 0, 32);
+
+/* Hide "ICD" for now as we don't use it, don't set it, and don't test it, so it
+ * would be irresponsible to expose it. */
+#define CODE_CDAN_WE_EN    0x1
+#define CODE_CDAN_WE_CTX   0x4
+
+static int qbman_swp_CDAN_set(struct qbman_swp *s, uint16_t channelid,
+			      uint8_t we_mask, uint8_t cdan_en,
+			      uint64_t ctx)
+{
+	uint32_t *p;
+	uint32_t verb, rslt;
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	qb_attr_code_encode(&code_cdan_cid, p, channelid);
+	qb_attr_code_encode(&code_cdan_we, p, we_mask);
+	qb_attr_code_encode(&code_cdan_en, p, cdan_en);
+	qb_attr_code_encode_64(&code_cdan_ctx_lo, (uint64_t *)p, ctx);
+	/* Complete the management command */
+	p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_WQCHAN_CONFIGURE);
+
+	/* Decode the outcome */
+	verb = qb_attr_code_decode(&code_generic_verb, p);
+	rslt = qb_attr_code_decode(&code_generic_rslt, p);
+	BUG_ON(verb != QBMAN_WQCHAN_CONFIGURE);
+
+	/* Determine success or failure */
+	if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
+		pr_err("CDAN cQID %d failed: code = 0x%02x\n",
+		       channelid, rslt);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
+			       uint64_t ctx)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_CTX,
+				  0, ctx);
+}
+
+int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_EN,
+				  1, 0);
+}
+int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_EN,
+				  0, 0);
+}
+int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
+				      uint64_t ctx)
+{
+	return qbman_swp_CDAN_set(s, channelid,
+				  CODE_CDAN_WE_EN | CODE_CDAN_WE_CTX,
+				  1, ctx);
+}
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_portal.h
@@ -0,0 +1,261 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "qbman_private.h"
+#include "fsl_qbman_portal.h"
+#include "../../include/fsl_dpaa2_fd.h"
+
+/* All QBMan command and result structures use this "valid bit" encoding */
+#define QB_VALID_BIT ((uint32_t)0x80)
+
+/* Management command result codes */
+#define QBMAN_MC_RSLT_OK      0xf0
+
+/* TBD: as of QBMan 4.1, DQRR will be 8 rather than 4! */
+#define QBMAN_DQRR_SIZE 4
+
+/* DQRR valid-bit reset bug. See qbman_portal.c::qbman_swp_init(). */
+#define WORKAROUND_DQRR_RESET_BUG
+
+/* --------------------- */
+/* portal data structure */
+/* --------------------- */
+
+struct qbman_swp {
+	const struct qbman_swp_desc *desc;
+	/* The qbman_sys (ie. arch/OS-specific) support code can put anything it
+	 * needs in here. */
+	struct qbman_swp_sys sys;
+	/* Management commands */
+	struct {
+#ifdef QBMAN_CHECKING
+		enum swp_mc_check {
+			swp_mc_can_start, /* call __qbman_swp_mc_start() */
+			swp_mc_can_submit, /* call __qbman_swp_mc_submit() */
+			swp_mc_can_poll, /* call __qbman_swp_mc_result() */
+		} check;
+#endif
+		uint32_t valid_bit; /* 0x00 or 0x80 */
+	} mc;
+	/* Push dequeues */
+	uint32_t sdq;
+	/* Volatile dequeues */
+	struct {
+		/* VDQCR supports a "1 deep pipeline", meaning that if you know
+		 * the last-submitted command is already executing in the
+		 * hardware (as evidenced by at least 1 valid dequeue result),
+		 * you can write another dequeue command to the register; the
+		 * hardware will start executing it as soon as the
+		 * already-executing command terminates. (This minimises latency
+		 * and stalls.) With that in mind, this "busy" variable refers
+		 * to whether or not a command can be submitted, not whether or
+		 * not a previously-submitted command is still executing. In
+		 * other words, once proof is seen that the previously-submitted
+		 * command is executing, "vdq" is no longer "busy".
+		 */
+		atomic_t busy;
+		uint32_t valid_bit; /* 0x00 or 0x80 */
+		/* We need to determine when vdq is no longer busy. This depends
+		 * on whether the "busy" (last-submitted) dequeue command is
+		 * targeting DQRR or main-memory, and detection is based on the
+		 * presence of the dequeue command's "token" showing up in
+		 * dequeue entries in DQRR or main-memory (respectively). */
+		struct dpaa2_dq *storage; /* NULL if DQRR */
+	} vdq;
+	/* DQRR */
+	struct {
+		uint32_t next_idx;
+		uint32_t valid_bit;
+		uint8_t dqrr_size;
+#ifdef WORKAROUND_DQRR_RESET_BUG
+		int reset_bug;
+#endif
+	} dqrr;
+};
+
+/* -------------------------- */
+/* portal management commands */
+/* -------------------------- */
+
+/* Different management commands all use this common base layer of code to issue
+ * commands and poll for results. The first function returns a pointer to where
+ * the caller should fill in their MC command (though they should ignore the
+ * verb byte), the second function merges in the caller-supplied command
+ * verb (which should not include the valid-bit) and submits the command to
+ * hardware, and the third function checks for a completed response (returns
+ * non-NULL only if the response is complete). */
+void *qbman_swp_mc_start(struct qbman_swp *p);
+void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint32_t cmd_verb);
+void *qbman_swp_mc_result(struct qbman_swp *p);
+
+/* Wraps up submit + poll-for-result */
+static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
+					  uint32_t cmd_verb)
+{
+	int loopvar;
+
+	qbman_swp_mc_submit(swp, cmd, cmd_verb);
+	DBG_POLL_START(loopvar);
+	do {
+		DBG_POLL_CHECK(loopvar);
+		cmd = qbman_swp_mc_result(swp);
+	} while (!cmd);
+	return cmd;
+}
+
+/* ------------ */
+/* qb_attr_code */
+/* ------------ */
+
+/* This struct locates a sub-field within a QBMan portal (CENA) cacheline which
+ * is either serving as a configuration command or a query result. The
+ * representation is inherently little-endian, as the indexing of the words is
+ * itself little-endian in nature and layerscape is little endian for anything
+ * that crosses a word boundary too (64-bit fields are the obvious examples).
+ */
+struct qb_attr_code {
+	unsigned int word; /* which uint32_t[] array member encodes the field */
+	unsigned int lsoffset; /* encoding offset from ls-bit */
+	unsigned int width; /* encoding width. (bool must be 1.) */
+};
+
+/* Some pre-defined codes */
+extern struct qb_attr_code code_generic_verb;
+extern struct qb_attr_code code_generic_rslt;
+
+/* Macros to define codes */
+#define QB_CODE(a, b, c) { a, b, c }
+#define QB_CODE_NULL \
+	QB_CODE((unsigned int)-1, (unsigned int)-1, (unsigned int)-1)
+
+/* Rotate a code "ms", meaning that it moves from less-significant bytes to
+ * more-significant, from less-significant words to more-significant, etc. The
+ * "ls" version does the inverse, from more-significant towards
+ * less-significant.
+ */
+static inline void qb_attr_code_rotate_ms(struct qb_attr_code *code,
+					  unsigned int bits)
+{
+	code->lsoffset += bits;
+	while (code->lsoffset > 31) {
+		code->word++;
+		code->lsoffset -= 32;
+	}
+}
+static inline void qb_attr_code_rotate_ls(struct qb_attr_code *code,
+					  unsigned int bits)
+{
+	/* Don't be fooled, this trick should work because the types are
+	 * unsigned. So the case that interests the while loop (the rotate has
+	 * gone too far and the word count needs to compensate for it), is
+	 * manifested when lsoffset is negative. But that equates to a really
+	 * large unsigned value, starting with lots of "F"s. As such, we can
+	 * continue adding 32 back to it until it wraps back round above zero,
+	 * to a value of 31 or less...
+	 */
+	code->lsoffset -= bits;
+	while (code->lsoffset > 31) {
+		code->word--;
+		code->lsoffset += 32;
+	}
+}
+/* Implement a loop of code rotations until 'expr' evaluates to FALSE (0). */
+#define qb_attr_code_for_ms(code, bits, expr) \
+		for (; expr; qb_attr_code_rotate_ms(code, bits))
+#define qb_attr_code_for_ls(code, bits, expr) \
+		for (; expr; qb_attr_code_rotate_ls(code, bits))
+
+/* decode a field from a cacheline */
+static inline uint32_t qb_attr_code_decode(const struct qb_attr_code *code,
+				      const uint32_t *cacheline)
+{
+	return d32_uint32_t(code->lsoffset, code->width, cacheline[code->word]);
+}
+static inline uint64_t qb_attr_code_decode_64(const struct qb_attr_code *code,
+				      const uint64_t *cacheline)
+{
+	uint64_t res;
+
+	u64_from_le32_copy(&res, &cacheline[code->word/2], 1);
+	return res;
+}
+
+/* encode a field to a cacheline */
+static inline void qb_attr_code_encode(const struct qb_attr_code *code,
+				       uint32_t *cacheline, uint32_t val)
+{
+	cacheline[code->word] =
+		r32_uint32_t(code->lsoffset, code->width, cacheline[code->word])
+		| e32_uint32_t(code->lsoffset, code->width, val);
+}
+static inline void qb_attr_code_encode_64(const struct qb_attr_code *code,
+				       uint64_t *cacheline, uint64_t val)
+{
+	u64_to_le32_copy(&cacheline[code->word/2], &val, 1);
+}
+
+/* Small-width signed values (two's-complement) will decode into medium-width
+ * positives. (Eg. for an 8-bit signed field, which stores values from -128 to
+ * +127, a setting of -7 would appear to decode to the 32-bit unsigned value
+ * 249. Likewise -120 would decode as 136.) This function allows the caller to
+ * "re-sign" such fields to 32-bit signed. (Eg. -7, which was 249 with an 8-bit
+ * encoding, will become 0xfffffff9 if you cast the return value to uint32_t).
+ */
+static inline int32_t qb_attr_code_makesigned(const struct qb_attr_code *code,
+					  uint32_t val)
+{
+	BUG_ON(val >= (1 << code->width));
+	/* If the high bit was set, it was encoding a negative */
+	if (val >= (1 << (code->width - 1)))
+		return (int32_t)0 - (int32_t)(((uint32_t)1 << code->width) -
+			val);
+	/* Otherwise, it was encoding a positive */
+	return (int32_t)val;
+}
+
+/* ---------------------- */
+/* Descriptors/cachelines */
+/* ---------------------- */
+
+/* To avoid needless dynamic allocation, the driver API often gives the caller
+ * a "descriptor" type that the caller can instantiate however they like.
+ * Ultimately though, it is just a cacheline of binary storage (or something
+ * smaller when it is known that the descriptor doesn't need all 64 bytes) for
+ * holding pre-formatted pieces of hardware commands. The performance-critical
+ * code can then copy these descriptors directly into hardware command
+ * registers more efficiently than trying to construct/format commands
+ * on-the-fly. The API user sees the descriptor as an array of 32-bit words in
+ * order for the compiler to know its size, but the internal details are not
+ * exposed. The following macro is used within the driver for converting *any*
+ * descriptor pointer to a usable array pointer. The use of a macro (instead of
+ * an inline) is necessary to work with different descriptor types and to work
+ * correctly with const and non-const inputs (and similarly-qualified outputs).
+ */
+#define qb_cl(d) (&(d)->dont_manipulate_directly[0])
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_private.h
@@ -0,0 +1,173 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* Perform extra checking */
+#define QBMAN_CHECKING
+
+/* To maximise the amount of logic that is common between the Linux driver and
+ * other targets (such as the embedded MC firmware), we pivot here between the
+ * inclusion of two platform-specific headers.
+ *
+ * The first, qbman_sys_decl.h, includes any and all required system headers as
+ * well as providing any definitions for the purposes of compatibility. The
+ * second, qbman_sys.h, is where platform-specific routines go.
+ *
+ * The point of the split is that the platform-independent code (including this
+ * header) may depend on platform-specific declarations, yet other
+ * platform-specific routines may depend on platform-independent definitions.
+ */
+
+#include "qbman_sys_decl.h"
+
+#define QMAN_REV_4000   0x04000000
+#define QMAN_REV_4100   0x04010000
+#define QMAN_REV_4101   0x04010001
+
+/* When things go wrong, it is a convenient trick to insert a few FOO()
+ * statements in the code to trace progress. TODO: remove this once we are
+ * hacking the code less actively.
+ */
+#define FOO() fsl_os_print("FOO: %s:%d\n", __FILE__, __LINE__)
+
+/* Any time there is a register interface which we poll on, this provides a
+ * "break after x iterations" scheme for it. It's handy for debugging, eg.
+ * where you don't want millions of lines of log output from a polling loop
+ * that won't terminate, because such things tend to drown out the earlier
+ * log output that might explain what caused the problem. (NB: put ";" after
+ * each macro!)
+ * TODO: we should probably remove this once we're done sanitising the
+ * simulator...
+ */
+#define DBG_POLL_START(loopvar) (loopvar = 10)
+#define DBG_POLL_CHECK(loopvar) \
+	do {if (!(loopvar--)) BUG_ON(1); } while (0)
+
+/* For CCSR or portal-CINH registers that contain fields at arbitrary offsets
+ * and widths, these macro-generated encode/decode/isolate/remove inlines can
+ * be used.
+ *
+ * Eg. to "d"ecode a 14-bit field out of a register (into a "uint16_t" type),
+ * where the field is located 3 bits "up" from the least-significant bit of the
+ * register (ie. the field location within the 32-bit register corresponds to a
+ * mask of 0x0001fff8), you would do;
+ *                uint16_t field = d32_uint16_t(3, 14, reg_value);
+ *
+ * Or to "e"ncode a 1-bit boolean value (input type is "int", zero is FALSE,
+ * non-zero is TRUE, so must convert all non-zero inputs to 1, hence the "!!"
+ * operator) into a register at bit location 0x00080000 (19 bits "in" from the
+ * LS bit), do;
+ *                reg_value |= e32_int(19, 1, !!field);
+ *
+ * If you wish to read-modify-write a register, such that you leave the 14-bit
+ * field as-is but have all other fields set to zero, then "i"solate the 14-bit
+ * value using;
+ *                reg_value = i32_uint16_t(3, 14, reg_value);
+ *
+ * Alternatively, you could "r"emove the 1-bit boolean field (setting it to
+ * zero) but leaving all other fields as-is;
+ *                reg_val = r32_int(19, 1, reg_value);
+ *
+ */
+#define MAKE_MASK32(width) (width == 32 ? 0xffffffff : \
+				 (uint32_t)((1 << width) - 1))
+#define DECLARE_CODEC32(t) \
+static inline uint32_t e32_##t(uint32_t lsoffset, uint32_t width, t val) \
+{ \
+	BUG_ON(width > (sizeof(t) * 8)); \
+	return ((uint32_t)val & MAKE_MASK32(width)) << lsoffset; \
+} \
+static inline t d32_##t(uint32_t lsoffset, uint32_t width, uint32_t val) \
+{ \
+	BUG_ON(width > (sizeof(t) * 8)); \
+	return (t)((val >> lsoffset) & MAKE_MASK32(width)); \
+} \
+static inline uint32_t i32_##t(uint32_t lsoffset, uint32_t width, \
+				uint32_t val) \
+{ \
+	BUG_ON(width > (sizeof(t) * 8)); \
+	return e32_##t(lsoffset, width, d32_##t(lsoffset, width, val)); \
+} \
+static inline uint32_t r32_##t(uint32_t lsoffset, uint32_t width, \
+				uint32_t val) \
+{ \
+	BUG_ON(width > (sizeof(t) * 8)); \
+	return ~(MAKE_MASK32(width) << lsoffset) & val; \
+}
+DECLARE_CODEC32(uint32_t)
+DECLARE_CODEC32(uint16_t)
+DECLARE_CODEC32(uint8_t)
+DECLARE_CODEC32(int)
+
+	/*********************/
+	/* Debugging assists */
+	/*********************/
+
+static inline void __hexdump(unsigned long start, unsigned long end,
+			unsigned long p, size_t sz, const unsigned char *c)
+{
+	while (start < end) {
+		unsigned int pos = 0;
+		char buf[64];
+		int nl = 0;
+
+		pos += sprintf(buf + pos, "%08lx: ", start);
+		do {
+			if ((start < p) || (start >= (p + sz)))
+				pos += sprintf(buf + pos, "..");
+			else
+				pos += sprintf(buf + pos, "%02x", *(c++));
+			if (!(++start & 15)) {
+				buf[pos++] = '\n';
+				nl = 1;
+			} else {
+				nl = 0;
+				if (!(start & 1))
+					buf[pos++] = ' ';
+				if (!(start & 3))
+					buf[pos++] = ' ';
+			}
+		} while (start & 15);
+		if (!nl)
+			buf[pos++] = '\n';
+		buf[pos] = '\0';
+		pr_info("%s", buf);
+	}
+}
+static inline void hexdump(const void *ptr, size_t sz)
+{
+	unsigned long p = (unsigned long)ptr;
+	unsigned long start = p & ~(unsigned long)15;
+	unsigned long end = (p + sz + 15) & ~(unsigned long)15;
+	const unsigned char *c = ptr;
+
+	__hexdump(start, end, p, sz, c);
+}
+
+#include "qbman_sys.h"
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_sys.h
@@ -0,0 +1,307 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+/* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
+ * driver. They are only included via qbman_private.h, which is itself a
+ * platform-independent file and is included by all the other driver source.
+ *
+ * qbman_sys_decl.h is included prior to all other declarations and logic, and
+ * it exists to provide compatibility with any linux interfaces our
+ * single-source driver code is dependent on (eg. kmalloc). Ie. this file
+ * provides linux compatibility.
+ *
+ * This qbman_sys.h header, on the other hand, is included *after* any common
+ * and platform-neutral declarations and logic in qbman_private.h, and exists to
+ * implement any platform-specific logic of the qbman driver itself. Ie. it is
+ * *not* to provide linux compatibility.
+ */
+
+/* Trace the 3 different classes of read/write access to QBMan. #undef as
+ * required. */
+#undef QBMAN_CCSR_TRACE
+#undef QBMAN_CINH_TRACE
+#undef QBMAN_CENA_TRACE
+
+static inline void word_copy(void *d, const void *s, unsigned int cnt)
+{
+	uint32_t *dd = d;
+	const uint32_t *ss = s;
+
+	while (cnt--)
+		*(dd++) = *(ss++);
+}
+
+/* Currently, the CENA support code expects each 32-bit word to be written in
+ * host order, and these are converted to hardware (little-endian) order on
+ * command submission. However, 64-bit quantities must be written (and read)
+ * as two 32-bit words with the least-significant word first, irrespective of
+ * host endianness. */
+static inline void u64_to_le32_copy(void *d, const uint64_t *s,
+					unsigned int cnt)
+{
+	uint32_t *dd = d;
+	const uint32_t *ss = (const uint32_t *)s;
+
+	while (cnt--) {
+		/* TBD: the toolchain was choking on the use of 64-bit types up
+		 * until recently so this works entirely with 32-bit variables.
+		 * When 64-bit types become usable again, investigate better
+		 * ways of doing this. */
+#if defined(__BIG_ENDIAN)
+		*(dd++) = ss[1];
+		*(dd++) = ss[0];
+		ss += 2;
+#else
+		*(dd++) = *(ss++);
+		*(dd++) = *(ss++);
+#endif
+	}
+}
+static inline void u64_from_le32_copy(uint64_t *d, const void *s,
+					unsigned int cnt)
+{
+	const uint32_t *ss = s;
+	uint32_t *dd = (uint32_t *)d;
+
+	while (cnt--) {
+#if defined(__BIG_ENDIAN)
+		dd[1] = *(ss++);
+		dd[0] = *(ss++);
+		dd += 2;
+#else
+		*(dd++) = *(ss++);
+		*(dd++) = *(ss++);
+#endif
+	}
+}
+
+/* Convert a host-native 32bit value into little endian */
+#if defined(__BIG_ENDIAN)
+static inline uint32_t make_le32(uint32_t val)
+{
+	return ((val & 0xff) << 24) | ((val & 0xff00) << 8) |
+		((val & 0xff0000) >> 8) | ((val & 0xff000000) >> 24);
+}
+static inline uint32_t make_le24(uint32_t val)
+{
+	return (((val & 0xff) << 16) | (val & 0xff00) |
+		((val & 0xff0000) >> 16));
+}
+#else
+#define make_le32(val) (val)
+#define make_le24(val) (val)
+#endif
+static inline void make_le32_n(uint32_t *val, unsigned int num)
+{
+	while (num--) {
+		*val = make_le32(*val);
+		val++;
+	}
+}
+
+	/******************/
+	/* Portal access  */
+	/******************/
+struct qbman_swp_sys {
+	/* On GPP, the sys support for qbman_swp is here. The CENA region is
+	 * not an mmap() of the real portal registers, but an allocated
+	 * place-holder, because the actual writes/reads to/from the portal are
+	 * marshalled from these allocated areas using QBMan's "MC access
+	 * registers". CINH accesses are atomic so there's no need for a
+	 * place-holder. */
+	void *cena;
+	void __iomem *addr_cena;
+	void __iomem *addr_cinh;
+};
+
+/* P_OFFSET is (ACCESS_CMD,0,12) - offset within the portal
+ * C is (ACCESS_CMD,12,1) - is inhibited? (0==CENA, 1==CINH)
+ * SWP_IDX is (ACCESS_CMD,16,10) - Software portal index
+ * P is (ACCESS_CMD,28,1) - (0==special portal, 1==any portal)
+ * T is (ACCESS_CMD,29,1) - Command type (0==READ, 1==WRITE)
+ * E is (ACCESS_CMD,31,1) - Command execute (1 to issue, poll for 0==complete)
+ */
+
+static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
+				    uint32_t val)
+{
+
+	writel_relaxed(val, s->addr_cinh + offset);
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_write(%p:0x%03x) 0x%08x\n",
+		s->addr_cinh, offset, val);
+#endif
+}
+
+static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
+{
+	uint32_t reg = readl_relaxed(s->addr_cinh + offset);
+
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_read(%p:0x%03x) 0x%08x\n",
+		s->addr_cinh, offset, reg);
+#endif
+	return reg;
+}
+
+static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
+						uint32_t offset)
+{
+	void *shadow = s->cena + offset;
+
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_start(%p:0x%03x) %p\n",
+		s->addr_cena, offset, shadow);
+#endif
+	BUG_ON(offset & 63);
+	dcbz(shadow);
+	return shadow;
+}
+
+static inline void qbman_cena_write_complete(struct qbman_swp_sys *s,
+						uint32_t offset, void *cmd)
+{
+	const uint32_t *shadow = cmd;
+	int loop;
+
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_write_complete(%p:0x%03x) %p\n",
+		s->addr_cena, offset, shadow);
+	hexdump(cmd, 64);
+#endif
+	for (loop = 15; loop >= 1; loop--)
+		writel_relaxed(shadow[loop], s->addr_cena +
+					 offset + loop * 4);
+	lwsync();
+	writel_relaxed(shadow[0], s->addr_cena + offset);
+	dcbf(s->addr_cena + offset);
+}
+
+static inline void *qbman_cena_read(struct qbman_swp_sys *s, uint32_t offset)
+{
+	uint32_t *shadow = s->cena + offset;
+	unsigned int loop;
+
+#ifdef QBMAN_CENA_TRACE
+	pr_info("qbman_cena_read(%p:0x%03x) %p\n",
+		s->addr_cena, offset, shadow);
+#endif
+
+	for (loop = 0; loop < 16; loop++)
+		shadow[loop] = readl_relaxed(s->addr_cena + offset
+					+ loop * 4);
+#ifdef QBMAN_CENA_TRACE
+	hexdump(shadow, 64);
+#endif
+	return shadow;
+}
+
+static inline void qbman_cena_invalidate_prefetch(struct qbman_swp_sys *s,
+						  uint32_t offset)
+{
+	dcivac(s->addr_cena + offset);
+	prefetch_for_load(s->addr_cena + offset);
+}
+
+	/******************/
+	/* Portal support */
+	/******************/
+
+/* The SWP_CFG portal register is special, in that it is used by the
+ * platform-specific code rather than the platform-independent code in
+ * qbman_portal.c. So use of it is declared locally here. */
+#define QBMAN_CINH_SWP_CFG   0xd00
+
+/* For MC portal use, we always configure with
+ * DQRR_MF is (SWP_CFG,20,3) - DQRR max fill (<- 0x4)
+ * EST is (SWP_CFG,16,3) - EQCR_CI stashing threshold (<- 0x0)
+ * RPM is (SWP_CFG,12,2) - RCR production notification mode (<- 0x3)
+ * DCM is (SWP_CFG,10,2) - DQRR consumption notification mode (<- 0x2)
+ * EPM is (SWP_CFG,8,2) - EQCR production notification mode (<- 0x3)
+ * SD is (SWP_CFG,5,1) - memory stashing drop enable (<- FALSE)
+ * SP is (SWP_CFG,4,1) - memory stashing priority (<- TRUE)
+ * SE is (SWP_CFG,3,1) - memory stashing enable (<- 0x0)
+ * DP is (SWP_CFG,2,1) - dequeue stashing priority (<- TRUE)
+ * DE is (SWP_CFG,1,1) - dequeue stashing enable (<- 0x0)
+ * EP is (SWP_CFG,0,1) - EQCR_CI stashing priority (<- FALSE)
+ */
+static inline uint32_t qbman_set_swp_cfg(uint8_t max_fill, uint8_t wn,
+					uint8_t est, uint8_t rpm, uint8_t dcm,
+					uint8_t epm, int sd, int sp, int se,
+					int dp, int de, int ep)
+{
+	uint32_t reg;
+
+	reg = e32_uint8_t(20, (uint32_t)(3 + (max_fill >> 3)), max_fill) |
+		e32_uint8_t(16, 3, est) | e32_uint8_t(12, 2, rpm) |
+		e32_uint8_t(10, 2, dcm) | e32_uint8_t(8, 2, epm) |
+		e32_int(5, 1, sd) | e32_int(4, 1, sp) | e32_int(3, 1, se) |
+		e32_int(2, 1, dp) | e32_int(1, 1, de) | e32_int(0, 1, ep) |
+		e32_uint8_t(14, 1, wn);
+	return reg;
+}
+
+static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
+				     const struct qbman_swp_desc *d,
+				     uint8_t dqrr_size)
+{
+	uint32_t reg;
+
+	s->addr_cena = d->cena_bar;
+	s->addr_cinh = d->cinh_bar;
+	s->cena = (void *)get_zeroed_page(GFP_KERNEL);
+	if (!s->cena) {
+		pr_err("Could not allocate page for cena shadow\n");
+		return -ENOMEM;
+	}
+
+#ifdef QBMAN_CHECKING
+	/* We should never be asked to initialise for a portal that isn't in
+	 * the power-on state. (i.e. don't forget to reset portals when they are
+	 * decommissioned!)
+	 */
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	BUG_ON(reg);
+#endif
+	reg = qbman_set_swp_cfg(dqrr_size, 0, 0, 3, 2, 3, 0, 1, 0, 1, 0, 0);
+	qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	if (!reg) {
+		pr_err("The portal is not enabled!\n");
+		free_page((unsigned long)s->cena);
+		return -EIO;
+	}
+	return 0;
+}
+
+static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
+{
+	free_page((unsigned long)s->cena);
+}
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_sys_decl.h
@@ -0,0 +1,86 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/io.h>
+#include <linux/dma-mapping.h>
+#include <linux/bootmem.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/memblock.h>
+#include <linux/completion.h>
+#include <linux/log2.h>
+#include <linux/types.h>
+#include <linux/ioctl.h>
+#include <linux/device.h>
+#include <linux/smp.h>
+#include <linux/vmalloc.h>
+#include "fsl_qbman_base.h"
+
+/* The platform-independent code shouldn't need to know about endianness,
+ * except for weird/fast-path cases like qbman_result_has_token(), which needs
+ * to perform a passive, endianness-specific test on a read-only data
+ * structure very quickly. It's an exception, and these symbols are used for
+ * that case. */
+#if defined(__BIG_ENDIAN)
+#define DQRR_TOK_OFFSET 0
+#define QBMAN_RESULT_VERB_OFFSET_IN_MEM 24
+#define SCN_STATE_OFFSET_IN_MEM 8
+#define SCN_RID_OFFSET_IN_MEM 8
+#else
+#define DQRR_TOK_OFFSET 24
+#define QBMAN_RESULT_VERB_OFFSET_IN_MEM 0
+#define SCN_STATE_OFFSET_IN_MEM 16
+#define SCN_RID_OFFSET_IN_MEM 0
+#endif
+
+/* Similarly-named functions */
+#define upper32(a) upper_32_bits(a)
+#define lower32(a) lower_32_bits(a)
+
+	/****************/
+	/* arch assists */
+	/****************/
+
+#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
+#define lwsync() { asm volatile("dmb st" : : : "memory"); }
+#define dcbf(p) { asm volatile("dc cvac, %0;" : : "r" (p) : "memory"); }
+#define dcivac(p) { asm volatile("dc ivac, %0" : : "r"(p) : "memory"); }
+static inline void prefetch_for_load(void *p)
+{
+	asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));
+}
+static inline void prefetch_for_store(void *p)
+{
+	asm volatile("prfm pstl1keep, [%0, #64]" : : "r" (p));
+}
--- /dev/null
+++ b/drivers/staging/fsl-mc/bus/dpio/qbman_test.c
@@ -0,0 +1,664 @@
+/* Copyright (C) 2014 Freescale Semiconductor, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/io.h>
+#include <linux/module.h>
+
+#include "qbman_private.h"
+#include "fsl_qbman_portal.h"
+#include "qbman_debug.h"
+#include "../../include/fsl_dpaa2_fd.h"
+
+#define QBMAN_SWP_CENA_BASE 0x818000000
+#define QBMAN_SWP_CINH_BASE 0x81c000000
+
+#define QBMAN_PORTAL_IDX 2
+#define QBMAN_TEST_FQID 19
+#define QBMAN_TEST_BPID 23
+#define QBMAN_USE_QD
+#ifdef QBMAN_USE_QD
+#define QBMAN_TEST_QDID 1
+#endif
+#define QBMAN_TEST_LFQID 0xf00010
+
+#define NUM_EQ_FRAME 10
+#define NUM_DQ_FRAME 10
+#define NUM_DQ_IN_DQRR 5
+#define NUM_DQ_IN_MEM   (NUM_DQ_FRAME - NUM_DQ_IN_DQRR)
+
+static struct qbman_swp *swp;
+static struct qbman_eq_desc eqdesc;
+static struct qbman_pull_desc pulldesc;
+static struct qbman_release_desc releasedesc;
+static struct qbman_eq_response eq_storage[1];
+static struct dpaa2_dq dq_storage[NUM_DQ_IN_MEM] __aligned(64);
+static dma_addr_t eq_storage_phys;
+static dma_addr_t dq_storage_phys;
+
+/* FQ ctx attribute values for the test code. */
+#define FQCTX_HI 0xabbaf00d
+#define FQCTX_LO 0x98765432
+#define FQ_VFQID 0x123456
+
+/* Sample frame descriptor */
+static struct qbman_fd_simple fd = {
+	.addr_lo = 0xbabaf33d,
+	.addr_hi = 0x01234567,
+	.len = 0x7777,
+	.frc = 0xdeadbeef,
+	.flc_lo = 0xcafecafe,
+	.flc_hi = 0xbeadabba
+};
+
+static void fd_inc(struct qbman_fd_simple *_fd)
+{
+	_fd->addr_lo += _fd->len;
+	_fd->flc_lo += 0x100;
+	_fd->frc += 0x10;
+}
+
+static int fd_cmp(struct qbman_fd *fda, struct qbman_fd *fdb)
+{
+	int i;
+
+	for (i = 0; i < 8; i++)
+		if (fda->words[i] - fdb->words[i])
+			return 1;
+	return 0;
+}
+
+struct qbman_fd fd_eq[NUM_EQ_FRAME];
+struct qbman_fd fd_dq[NUM_DQ_FRAME];
+
+/* "Buffers" to be released (and storage for buffers to be acquired) */
+static uint64_t rbufs[320];
+static uint64_t abufs[320];
+
+static void do_enqueue(struct qbman_swp *swp)
+{
+	int i, j, ret;
+
+#ifdef QBMAN_USE_QD
+	pr_info("*****QBMan_test: Enqueue %d frames to QD %d\n",
+					NUM_EQ_FRAME, QBMAN_TEST_QDID);
+#else
+	pr_info("*****QBMan_test: Enqueue %d frames to FQ %d\n",
+					NUM_EQ_FRAME, QBMAN_TEST_FQID);
+#endif
+	for (i = 0; i < NUM_EQ_FRAME; i++) {
+		/*********************************/
+		/* Prepare an enqueue descriptor */
+		/*********************************/
+		memset(eq_storage, 0, sizeof(eq_storage));
+		eq_storage_phys = virt_to_phys(eq_storage);
+		qbman_eq_desc_clear(&eqdesc);
+		qbman_eq_desc_set_no_orp(&eqdesc, 0);
+		qbman_eq_desc_set_response(&eqdesc, eq_storage_phys, 0);
+		qbman_eq_desc_set_token(&eqdesc, 0x99);
+#ifdef QBMAN_USE_QD
+		/**********************************/
+		/* Prepare a Queueing Destination */
+		/**********************************/
+		qbman_eq_desc_set_qd(&eqdesc, QBMAN_TEST_QDID, 0, 3);
+#else
+		qbman_eq_desc_set_fq(&eqdesc, QBMAN_TEST_FQID);
+#endif
+
+		/******************/
+		/* Try an enqueue */
+		/******************/
+		ret = qbman_swp_enqueue(swp, &eqdesc,
+					(const struct qbman_fd *)&fd);
+		BUG_ON(ret);
+		for (j = 0; j < 8; j++)
+			fd_eq[i].words[j] = *((uint32_t *)&fd + j);
+		fd_inc(&fd);
+	}
+}
+
+static void do_push_dequeue(struct qbman_swp *swp)
+{
+	int i, j;
+	const struct dpaa2_dq *dq_storage1;
+	const struct qbman_fd *__fd;
+	int loopvar;
+
+	pr_info("*****QBMan_test: Start push dequeue\n");
+	for (i = 0; i < NUM_DQ_FRAME; i++) {
+		DBG_POLL_START(loopvar);
+		do {
+			DBG_POLL_CHECK(loopvar);
+			dq_storage1 = qbman_swp_dqrr_next(swp);
+		} while (!dq_storage1);
+		if (dq_storage1) {
+			__fd = (const struct qbman_fd *)
+					dpaa2_dq_fd(dq_storage1);
+			for (j = 0; j < 8; j++)
+				fd_dq[i].words[j] = __fd->words[j];
+			if (fd_cmp(&fd_eq[i], &fd_dq[i])) {
+				pr_info("enqueue FD is\n");
+				hexdump(&fd_eq[i], 32);
+				pr_info("dequeue FD is\n");
+				hexdump(&fd_dq[i], 32);
+			}
+			qbman_swp_dqrr_consume(swp, dq_storage1);
+		} else {
+			pr_info("Push dequeue failed\n");
+		}
+	}
+}
+
+static void do_pull_dequeue(struct qbman_swp *swp)
+{
+	int i, j, ret;
+	const struct dpaa2_dq *dq_storage1;
+	const struct qbman_fd *__fd;
+	int loopvar;
+
+	pr_info("*****QBMan_test: Dequeue %d frames with dq entry in DQRR\n",
+							NUM_DQ_IN_DQRR);
+	for (i = 0; i < NUM_DQ_IN_DQRR; i++) {
+		qbman_pull_desc_clear(&pulldesc);
+		qbman_pull_desc_set_storage(&pulldesc, NULL, 0, 0);
+		qbman_pull_desc_set_numframes(&pulldesc, 1);
+		qbman_pull_desc_set_fq(&pulldesc, QBMAN_TEST_FQID);
+
+		ret = qbman_swp_pull(swp, &pulldesc);
+		BUG_ON(ret);
+		DBG_POLL_START(loopvar);
+		do {
+			DBG_POLL_CHECK(loopvar);
+			dq_storage1 = qbman_swp_dqrr_next(swp);
+		} while (!dq_storage1);
+
+		if (dq_storage1) {
+			__fd = (const struct qbman_fd *)
+					dpaa2_dq_fd(dq_storage1);
+			for (j = 0; j < 8; j++)
+				fd_dq[i].words[j] = __fd->words[j];
+			if (fd_cmp(&fd_eq[i], &fd_dq[i])) {
+				pr_info("enqueue FD is\n");
+				hexdump(&fd_eq[i], 32);
+				pr_info("dequeue FD is\n");
+				hexdump(&fd_dq[i], 32);
+			}
+			qbman_swp_dqrr_consume(swp, dq_storage1);
+		} else {
+			pr_info("Dequeue with dq entry in DQRR failed\n");
+		}
+	}
+
+	pr_info("*****QBMan_test: Dequeue %d frames with dq entry in memory\n",
+								NUM_DQ_IN_MEM);
+	for (i = 0; i < NUM_DQ_IN_MEM; i++) {
+		dq_storage_phys = virt_to_phys(&dq_storage[i]);
+		qbman_pull_desc_clear(&pulldesc);
+		qbman_pull_desc_set_storage(&pulldesc, &dq_storage[i],
+						dq_storage_phys, 1);
+		qbman_pull_desc_set_numframes(&pulldesc, 1);
+		qbman_pull_desc_set_fq(&pulldesc, QBMAN_TEST_FQID);
+		ret = qbman_swp_pull(swp, &pulldesc);
+		BUG_ON(ret);
+
+		DBG_POLL_START(loopvar);
+		do {
+			DBG_POLL_CHECK(loopvar);
+			ret = qbman_result_has_new_result(swp,
+							    &dq_storage[i]);
+		} while (!ret);
+
+		if (ret) {
+			for (j = 0; j < 8; j++)
+				fd_dq[i + NUM_DQ_IN_DQRR].words[j] =
+				dq_storage[i].dont_manipulate_directly[j + 8];
+			j = i + NUM_DQ_IN_DQRR;
+			if (fd_cmp(&fd_eq[j], &fd_dq[j])) {
+				pr_info("enqueue FD is\n");
+				hexdump(&fd_eq[i + NUM_DQ_IN_DQRR], 32);
+				pr_info("dequeue FD is\n");
+				hexdump(&fd_dq[i + NUM_DQ_IN_DQRR], 32);
+				hexdump(&dq_storage[i], 64);
+			}
+		} else {
+			pr_info("Dequeue with dq entry in memory failed\n");
+		}
+	}
+}
+
+static void release_buffer(struct qbman_swp *swp, unsigned int num)
+{
+	int ret;
+	unsigned int i, j;
+
+	qbman_release_desc_clear(&releasedesc);
+	qbman_release_desc_set_bpid(&releasedesc, QBMAN_TEST_BPID);
+	pr_info("*****QBMan_test: Release %d buffers to BP %d\n",
+					num, QBMAN_TEST_BPID);
+	for (i = 0; i < DIV_ROUND_UP(num, 7); i++) {
+		j = min(7U, num - i * 7);
+		ret = qbman_swp_release(swp, &releasedesc, &rbufs[i * 7], j);
+		BUG_ON(ret);
+	}
+}
+
+static void acquire_buffer(struct qbman_swp *swp, unsigned int num)
+{
+	int ret;
+	unsigned int i, j;
+
+	pr_info("*****QBMan_test: Acquire %d buffers from BP %d\n",
+					num, QBMAN_TEST_BPID);
+
+	for (i = 0; i < DIV_ROUND_UP(num, 7); i++) {
+		j = min(7U, num - i * 7);
+		ret = qbman_swp_acquire(swp, QBMAN_TEST_BPID, &abufs[i * 7], j);
+		BUG_ON(ret != j);
+	}
+}
+
+static void buffer_pool_test(struct qbman_swp *swp)
+{
+	struct qbman_attr info;
+	struct dpaa2_dq *bpscn_message;
+	dma_addr_t bpscn_phys;
+	uint64_t bpscn_ctx;
+	uint64_t ctx = 0xbbccddaadeadbeefull;
+	int i, ret;
+	uint32_t hw_targ;
+
+	pr_info("*****QBMan_test: test buffer pool management\n");
+	ret = qbman_bp_query(swp, QBMAN_TEST_BPID, &info);
+	qbman_bp_attr_get_bpscn_addr(&info, &bpscn_phys);
+	pr_info("The bpscn is %llx, info_phys is %llx\n",
+		(unsigned long long)bpscn_phys,
+		(unsigned long long)virt_to_phys(&info));
+	bpscn_message = phys_to_virt(bpscn_phys);
+
+	for (i = 0; i < 320; i++)
+		rbufs[i] = 0xf00dabba01234567ull + i * 0x40;
+
+	release_buffer(swp, 320);
+
+	pr_info("QBMan_test: query the buffer pool\n");
+	qbman_bp_query(swp, QBMAN_TEST_BPID, &info);
+	hexdump(&info, 64);
+	qbman_bp_attr_get_hw_targ(&info, &hw_targ);
+	pr_info("hw_targ is %d\n", hw_targ);
+
+	/* Acquire buffers to trigger BPSCN */
+	acquire_buffer(swp, 300);
+	/* BPSCN should be written to the memory */
+	qbman_bp_query(swp, QBMAN_TEST_BPID, &info);
+	hexdump(&info, 64);
+	hexdump(bpscn_message, 64);
+	BUG_ON(!qbman_result_is_BPSCN(bpscn_message));
+	/* There should be free buffers in the pool */
+	BUG_ON(!(qbman_result_bpscn_has_free_bufs(bpscn_message)));
+	/* Buffer pool is depleted */
+	BUG_ON(!qbman_result_bpscn_is_depleted(bpscn_message));
+	/* The ctx should match */
+	bpscn_ctx = qbman_result_bpscn_ctx(bpscn_message);
+	pr_info("BPSCN test: ctx %llx, bpscn_ctx %llx\n", ctx, bpscn_ctx);
+	BUG_ON(ctx != bpscn_ctx);
+	memset(bpscn_message, 0, sizeof(struct dpaa2_dq));
+
+	/* Re-seed the buffer pool to trigger BPSCN */
+	release_buffer(swp, 240);
+	/* BPSCN should be written to the memory */
+	BUG_ON(!qbman_result_is_BPSCN(bpscn_message));
+	/* There should be free buffers in the pool */
+	BUG_ON(!(qbman_result_bpscn_has_free_bufs(bpscn_message)));
+	/* Buffer pool is not depleted */
+	BUG_ON(qbman_result_bpscn_is_depleted(bpscn_message));
+	memset(bpscn_message, 0, sizeof(struct dpaa2_dq));
+
+	acquire_buffer(swp, 260);
+	/* BPSCN should be written to the memory */
+	BUG_ON(!qbman_result_is_BPSCN(bpscn_message));
+	/* There should be free buffers in the pool when the BPSCN is
+	 * generated */
+	BUG_ON(!(qbman_result_bpscn_has_free_bufs(bpscn_message)));
+	/* Buffer pool is depleted */
+	BUG_ON(!qbman_result_bpscn_is_depleted(bpscn_message));
+}
+
+static void ceetm_test(struct qbman_swp *swp)
+{
+	int i, j, ret;
+
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, 0);
+	qbman_eq_desc_set_fq(&eqdesc, QBMAN_TEST_LFQID);
+	pr_info("*****QBMan_test: Enqueue to LFQID %x\n",
+						QBMAN_TEST_LFQID);
+	for (i = 0; i < NUM_EQ_FRAME; i++) {
+		ret = qbman_swp_enqueue(swp, &eqdesc,
+					(const struct qbman_fd *)&fd);
+		BUG_ON(ret);
+		for (j = 0; j < 8; j++)
+			fd_eq[i].words[j] = *((uint32_t *)&fd + j);
+		fd_inc(&fd);
+	}
+}
+
+int qbman_test(void)
+{
+	struct qbman_swp_desc pd;
+	uint32_t reg;
+
+	pd.cena_bar = ioremap_cache_ns(QBMAN_SWP_CENA_BASE +
+				QBMAN_PORTAL_IDX * 0x10000, 0x10000);
+	pd.cinh_bar = ioremap(QBMAN_SWP_CINH_BASE +
+				QBMAN_PORTAL_IDX * 0x10000, 0x10000);
+
+	/* Detect whether the mc image is the test image with GPP setup */
+	reg = readl_relaxed(pd.cena_bar + 0x4);
+	if (reg != 0xdeadbeef) {
+		pr_err("The MC image doesn't have GPP test setup, stop!\n");
+		iounmap(pd.cena_bar);
+		iounmap(pd.cinh_bar);
+		return -1;
+	}
+
+	pr_info("*****QBMan_test: Init QBMan SWP %d\n", QBMAN_PORTAL_IDX);
+	swp = qbman_swp_init(&pd);
+	if (!swp) {
+		iounmap(pd.cena_bar);
+		iounmap(pd.cinh_bar);
+		return -1;
+	}
+
+	/*******************/
+	/* Enqueue frames  */
+	/*******************/
+	do_enqueue(swp);
+
+	/*******************/
+	/* Do pull dequeue */
+	/*******************/
+	do_pull_dequeue(swp);
+
+	/*******************/
+	/* Enqueue frames  */
+	/*******************/
+	qbman_swp_push_set(swp, 0, 1);
+	qbman_swp_fq_schedule(swp, QBMAN_TEST_FQID);
+	do_enqueue(swp);
+
+	/*******************/
+	/* Do push dequeue */
+	/*******************/
+	do_push_dequeue(swp);
+
+	/**************************/
+	/* Test buffer pool funcs */
+	/**************************/
+	buffer_pool_test(swp);
+
+	/******************/
+	/* CEETM test     */
+	/******************/
+	ceetm_test(swp);
+
+	qbman_swp_finish(swp);
+	pr_info("*****QBMan_test: Kernel test Passed\n");
+	return 0;
+}
+
+/* user-space test-case, definitions:
+ *
+ * 1 portal only, using portal index 3.
+ */
+
+#include <linux/uaccess.h>
+#include <linux/ioctl.h>
+#include <linux/miscdevice.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+
+#define QBMAN_TEST_US_SWP 3 /* portal index for user space */
+
+#define QBMAN_TEST_MAGIC 'q'
+struct qbman_test_swp_ioctl {
+	unsigned long portal1_cinh;
+	unsigned long portal1_cena;
+};
+struct qbman_test_dma_ioctl {
+	unsigned long ptr;
+	uint64_t phys_addr;
+};
+
+struct qbman_test_priv {
+	int has_swp_map;
+	int has_dma_map;
+	unsigned long pgoff;
+};
+
+#define QBMAN_TEST_SWP_MAP \
+	_IOR(QBMAN_TEST_MAGIC, 0x01, struct qbman_test_swp_ioctl)
+#define QBMAN_TEST_SWP_UNMAP \
+	_IOR(QBMAN_TEST_MAGIC, 0x02, struct qbman_test_swp_ioctl)
+#define QBMAN_TEST_DMA_MAP \
+	_IOR(QBMAN_TEST_MAGIC, 0x03, struct qbman_test_dma_ioctl)
+#define QBMAN_TEST_DMA_UNMAP \
+	_IOR(QBMAN_TEST_MAGIC, 0x04, struct qbman_test_dma_ioctl)
+
+#define TEST_PORTAL1_CENA_PGOFF ((QBMAN_SWP_CENA_BASE + QBMAN_TEST_US_SWP * \
+						0x10000) >> PAGE_SHIFT)
+#define TEST_PORTAL1_CINH_PGOFF ((QBMAN_SWP_CINH_BASE + QBMAN_TEST_US_SWP * \
+						0x10000) >> PAGE_SHIFT)
+
+static int qbman_test_open(struct inode *inode, struct file *filp)
+{
+	struct qbman_test_priv *priv;
+
+	priv = kmalloc(sizeof(struct qbman_test_priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	filp->private_data = priv;
+	priv->has_swp_map = 0;
+	priv->has_dma_map = 0;
+	priv->pgoff = 0;
+	return 0;
+}
+
+static int qbman_test_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	int ret;
+	struct qbman_test_priv *priv = filp->private_data;
+
+	BUG_ON(!priv);
+
+	if (vma->vm_pgoff == TEST_PORTAL1_CINH_PGOFF)
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+	else if (vma->vm_pgoff == TEST_PORTAL1_CENA_PGOFF)
+		vma->vm_page_prot = pgprot_cached_ns(vma->vm_page_prot);
+	else if (vma->vm_pgoff == priv->pgoff)
+		vma->vm_page_prot = pgprot_cached(vma->vm_page_prot);
+	else {
+		pr_err("Unrecognised pg_off in mmap request\n");
+		return -EINVAL;
+	}
+	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+				      vma->vm_end - vma->vm_start,
+				      vma->vm_page_prot);
+	return ret;
+}
+
+static long qbman_test_ioctl(struct file *fp, unsigned int cmd,
+				unsigned long arg)
+{
+	void __user *a = (void __user *)arg;
+	unsigned long longret, populate;
+	int ret = 0;
+	struct qbman_test_priv *priv = fp->private_data;
+
+	BUG_ON(!priv);
+
+	switch (cmd) {
+	case QBMAN_TEST_SWP_MAP:
+	{
+		struct qbman_test_swp_ioctl params;
+
+		if (priv->has_swp_map)
+			return -EINVAL;
+		down_write(&current->mm->mmap_sem);
+		/* Map portal1 CINH */
+		longret = do_mmap_pgoff(fp, PAGE_SIZE, 0x10000,
+				PROT_READ | PROT_WRITE, MAP_SHARED,
+				TEST_PORTAL1_CINH_PGOFF, &populate);
+		if (longret & ~PAGE_MASK) {
+			ret = (int)longret;
+			goto out;
+		}
+		params.portal1_cinh = longret;
+		/* Map portal1 CENA */
+		longret = do_mmap_pgoff(fp, PAGE_SIZE, 0x10000,
+				PROT_READ | PROT_WRITE, MAP_SHARED,
+				TEST_PORTAL1_CENA_PGOFF, &populate);
+		if (longret & ~PAGE_MASK) {
+			ret = (int)longret;
+			goto out;
+		}
+		params.portal1_cena = longret;
+		priv->has_swp_map = 1;
+out:
+		up_write(&current->mm->mmap_sem);
+		if (!ret && copy_to_user(a, &params, sizeof(params)))
+			return -EFAULT;
+		return ret;
+	}
+	case QBMAN_TEST_SWP_UNMAP:
+	{
+		struct qbman_test_swp_ioctl params;
+
+		if (!priv->has_swp_map)
+			return -EINVAL;
+
+		if (copy_from_user(&params, a, sizeof(params)))
+			return -EFAULT;
+		down_write(&current->mm->mmap_sem);
+		do_munmap(current->mm, params.portal1_cena, 0x10000);
+		do_munmap(current->mm, params.portal1_cinh, 0x10000);
+		up_write(&current->mm->mmap_sem);
+		priv->has_swp_map = 0;
+		return 0;
+	}
+	case QBMAN_TEST_DMA_MAP:
+	{
+		struct qbman_test_dma_ioctl params;
+		void *vaddr;
+
+		if (priv->has_dma_map)
+			return -EINVAL;
+		vaddr = (void *)get_zeroed_page(GFP_KERNEL);
+		if (!vaddr)
+			return -ENOMEM;
+		params.phys_addr = virt_to_phys(vaddr);
+		priv->pgoff = (unsigned long)params.phys_addr >> PAGE_SHIFT;
+		down_write(&current->mm->mmap_sem);
+		longret = do_mmap_pgoff(fp, PAGE_SIZE, PAGE_SIZE,
+				PROT_READ | PROT_WRITE, MAP_SHARED,
+				priv->pgoff, &populate);
+		if (longret & ~PAGE_MASK) {
+			ret = (int)longret;
+			up_write(&current->mm->mmap_sem);
+			return ret;
+		}
+		params.ptr = longret;
+		priv->has_dma_map = 1;
+		up_write(&current->mm->mmap_sem);
+		if (copy_to_user(a, &params, sizeof(params)))
+			return -EFAULT;
+		return 0;
+	}
+	case QBMAN_TEST_DMA_UNMAP:
+	{
+		struct qbman_test_dma_ioctl params;
+
+		if (!priv->has_dma_map)
+			return -EINVAL;
+		if (copy_from_user(&params, a, sizeof(params)))
+			return -EFAULT;
+		down_write(&current->mm->mmap_sem);
+		do_munmap(current->mm, params.ptr, PAGE_SIZE);
+		up_write(&current->mm->mmap_sem);
+		free_page((unsigned long)phys_to_virt(params.phys_addr));
+		priv->has_dma_map = 0;
+		return 0;
+	}
+	default:
+		pr_err("Bad ioctl cmd!\n");
+	}
+	return -EINVAL;
+}
+
+static const struct file_operations qbman_fops = {
+	.open		   = qbman_test_open,
+	.mmap		   = qbman_test_mmap,
+	.unlocked_ioctl	   = qbman_test_ioctl
+};
+
+static struct miscdevice qbman_miscdev = {
+	.name = "qbman-test",
+	.fops = &qbman_fops,
+	.minor = MISC_DYNAMIC_MINOR,
+};
+
+static int qbman_miscdev_init;
+
+static int test_init(void)
+{
+	int ret = qbman_test();
+
+	if (!ret) {
+		/* MC image supports the test cases, so instantiate the
+		 * character device that the user-space test case will use to
+		 * do its memory mappings. */
+		ret = misc_register(&qbman_miscdev);
+		if (ret) {
+			pr_err("qbman-test: failed to register misc device\n");
+			return ret;
+		}
+		pr_info("qbman-test: misc device registered!\n");
+		qbman_miscdev_init = 1;
+	}
+	return 0;
+}
+
+static void test_exit(void)
+{
+	if (qbman_miscdev_init) {
+		misc_deregister(&qbman_miscdev);
+		qbman_miscdev_init = 0;
+	}
+}
+
+module_init(test_init);
+module_exit(test_exit);
--- /dev/null
+++ b/drivers/staging/fsl-mc/include/fsl_dpaa2_fd.h
@@ -0,0 +1,774 @@
+/* Copyright 2014 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __FSL_DPAA2_FD_H
+#define __FSL_DPAA2_FD_H
+
+/**
+ * DOC: DPAA2 FD - Frame Descriptor APIs for DPAA2
+ *
+ * Frame Descriptors (FDs) are used to describe frame data in the DPAA2.
+ * Frames can be enqueued to and dequeued from Frame Queues, which are
+ * consumed by the various DPAA2 accelerators (WRIOP, SEC, PME, DCE).
+ *
+ * There are three types of frames: single frames, scatter-gather frames
+ * and frame lists.
+ *
+ * The set of APIs in this file must be used to create, manipulate and
+ * query Frame Descriptors.
+ *
+ */
+
+/**
+ * struct dpaa2_fd - Place-holder for FDs.
+ * @words: for easier/faster copying of the whole FD structure.
+ * @addr_lo: the lower 32 bits of the address in FD.
+ * @addr_hi: the upper 32 bits of the address in FD.
+ * @len: the length field in FD.
+ * @bpid_offset: represents the bpid and offset fields in FD.
+ * @frc: frame context.
+ * @ctrl: the 32-bit control bits including dd, sc, ... va, err.
+ * @flc_lo: the lower 32 bits of flow context.
+ * @flc_hi: the upper 32 bits of flow context.
+ *
+ * This structure represents the basic Frame Descriptor used in the system.
+ * We represent it via the simplest form that we need for now. Different
+ * overlays may be needed to support different options, etc. (It is impractical
+ * to define One True Struct, because the resulting encoding routines (lots of
+ * read-modify-writes) would be worst-case performance whether or not
+ * circumstances required them.)
+ */
+struct dpaa2_fd {
+	union {
+		u32 words[8];
+		struct dpaa2_fd_simple {
+			u32 addr_lo;
+			u32 addr_hi;
+			u32 len;
+			/* offset in the MS 16 bits, BPID in the LS 16 bits */
+			u32 bpid_offset;
+			u32 frc; /* frame context */
+			/* "err", "va", "cbmt", "asal", [...] */
+			u32 ctrl;
+			/* flow context */
+			u32 flc_lo;
+			u32 flc_hi;
+		} simple;
+	};
+};
+
+enum dpaa2_fd_format {
+	dpaa2_fd_single = 0,
+	dpaa2_fd_list,
+	dpaa2_fd_sg
+};
+
+/* Accessors for SG entry fields
+ *
+ * These setters and getters assume little endian format. To convert
+ * between LE and cpu endianness, the specific conversion functions must be
+ * called before the SGE contents are accessed by the core (on Rx) and
+ * before the SG table is sent to hardware (on Tx).
+ */
+
+/**
+ * dpaa2_fd_get_addr() - get the addr field of frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the address in the frame descriptor.
+ */
+static inline dma_addr_t dpaa2_fd_get_addr(const struct dpaa2_fd *fd)
+{
+	return (dma_addr_t)((((uint64_t)fd->simple.addr_hi) << 32)
+				+ fd->simple.addr_lo);
+}
+
+/**
+ * dpaa2_fd_set_addr() - Set the addr field of frame descriptor
+ * @fd: the given frame descriptor.
+ * @addr: the address needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_addr(struct dpaa2_fd *fd, dma_addr_t addr)
+{
+	fd->simple.addr_hi = upper_32_bits(addr);
+	fd->simple.addr_lo = lower_32_bits(addr);
+}
+
+/**
+ * dpaa2_fd_get_frc() - Get the frame context in the frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the frame context field in the frame descriptor.
+ */
+static inline u32 dpaa2_fd_get_frc(const struct dpaa2_fd *fd)
+{
+	return fd->simple.frc;
+}
+
+/**
+ * dpaa2_fd_set_frc() - Set the frame context in the frame descriptor
+ * @fd: the given frame descriptor.
+ * @frc: the frame context needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_frc(struct dpaa2_fd *fd, u32 frc)
+{
+	fd->simple.frc = frc;
+}
+
+/**
+ * dpaa2_fd_get_flc() - Get the flow context in the frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the flow context in the frame descriptor.
+ */
+static inline dma_addr_t dpaa2_fd_get_flc(const struct dpaa2_fd *fd)
+{
+	return (dma_addr_t)((((uint64_t)fd->simple.flc_hi) << 32) +
+			    fd->simple.flc_lo);
+}
+
+/**
+ * dpaa2_fd_set_flc() - Set the flow context field of frame descriptor
+ * @fd: the given frame descriptor.
+ * @flc_addr: the flow context needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_flc(struct dpaa2_fd *fd,  dma_addr_t flc_addr)
+{
+	fd->simple.flc_hi = upper_32_bits(flc_addr);
+	fd->simple.flc_lo = lower_32_bits(flc_addr);
+}
+
+/**
+ * dpaa2_fd_get_len() - Get the length in the frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the length field in the frame descriptor.
+ */
+static inline u32 dpaa2_fd_get_len(const struct dpaa2_fd *fd)
+{
+	return fd->simple.len;
+}
+
+/**
+ * dpaa2_fd_set_len() - Set the length field of frame descriptor
+ * @fd: the given frame descriptor.
+ * @len: the length needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_len(struct dpaa2_fd *fd, u32 len)
+{
+	fd->simple.len = len;
+}
+
+/**
+ * dpaa2_fd_get_offset() - Get the offset field in the frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the offset.
+ */
+static inline uint16_t dpaa2_fd_get_offset(const struct dpaa2_fd *fd)
+{
+	return (uint16_t)(fd->simple.bpid_offset >> 16) & 0x0FFF;
+}
+
+/**
+ * dpaa2_fd_set_offset() - Set the offset field of frame descriptor
+ * @fd: the given frame descriptor.
+ * @offset: the offset needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_offset(struct dpaa2_fd *fd, uint16_t offset)
+{
+	fd->simple.bpid_offset &= 0xF000FFFF;
+	fd->simple.bpid_offset |= (u32)offset << 16;
+}
+
+/**
+ * dpaa2_fd_get_format() - Get the format field in the frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the format.
+ */
+static inline enum dpaa2_fd_format dpaa2_fd_get_format(
+						const struct dpaa2_fd *fd)
+{
+	return (enum dpaa2_fd_format)((fd->simple.bpid_offset >> 28) & 0x3);
+}
+
+/**
+ * dpaa2_fd_set_format() - Set the format field of frame descriptor
+ * @fd: the given frame descriptor.
+ * @format: the format needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_format(struct dpaa2_fd *fd,
+				       enum dpaa2_fd_format format)
+{
+	fd->simple.bpid_offset &= 0xCFFFFFFF;
+	fd->simple.bpid_offset |= (u32)format << 28;
+}
+
+/**
+ * dpaa2_fd_get_bpid() - Get the bpid field in the frame descriptor
+ * @fd: the given frame descriptor.
+ *
+ * Return the bpid.
+ */
+static inline uint16_t dpaa2_fd_get_bpid(const struct dpaa2_fd *fd)
+{
+	return (uint16_t)(fd->simple.bpid_offset & 0xFFFF);
+}
+
+/**
+ * dpaa2_fd_set_bpid() - Set the bpid field of frame descriptor
+ * @fd: the given frame descriptor.
+ * @bpid: the bpid needs to be set in frame descriptor.
+ */
+static inline void dpaa2_fd_set_bpid(struct dpaa2_fd *fd, uint16_t bpid)
+{
+	fd->simple.bpid_offset &= 0xFFFF0000;
+	fd->simple.bpid_offset |= (u32)bpid;
+}
+
+/**
+ * struct dpaa2_sg_entry - the scatter-gathering structure
+ * @addr_lo: the lower 32 bits of address.
+ * @addr_hi: the upper 32 bits of address.
+ * @len: the length in this sg entry.
+ * @bpid_offset: offset in the MS 16 bits, BPID in the LS 16 bits.
+ */
+struct dpaa2_sg_entry {
+	u32 addr_lo;
+	u32 addr_hi;
+	u32 len;
+	u32 bpid_offset;
+};
+
+enum dpaa2_sg_format {
+	dpaa2_sg_single = 0,
+	dpaa2_sg_frame_data,
+	dpaa2_sg_sgt_ext
+};
+
+/**
+ * dpaa2_sg_get_addr() - Get the address from SG entry
+ * @sg: the given scatter-gathering object.
+ *
+ * Return the address.
+ */
+static inline dma_addr_t dpaa2_sg_get_addr(const struct dpaa2_sg_entry *sg)
+{
+	return (dma_addr_t)((((u64)sg->addr_hi) << 32) + sg->addr_lo);
+}
+
+/**
+ * dpaa2_sg_set_addr() - Set the address in SG entry
+ * @sg: the given scatter-gathering object.
+ * @addr: the address to be set.
+ */
+static inline void dpaa2_sg_set_addr(struct dpaa2_sg_entry *sg, dma_addr_t addr)
+{
+	sg->addr_hi = upper_32_bits(addr);
+	sg->addr_lo = lower_32_bits(addr);
+}
+
+/**
+ * dpaa2_sg_short_len() - Check whether the SG entry uses the short length
+ * format
+ * @sg: the given scatter-gathering object.
+ *
+ * Return true if the 17-bit short length format is in use.
+ */
+static inline bool dpaa2_sg_short_len(const struct dpaa2_sg_entry *sg)
+{
+	return (sg->bpid_offset >> 30) & 0x1;
+}
+
+/**
+ * dpaa2_sg_get_len() - Get the length in SG entry
+ * @sg: the given scatter-gathering object.
+ *
+ * Return the length.
+ */
+static inline u32 dpaa2_sg_get_len(const struct dpaa2_sg_entry *sg)
+{
+	if (dpaa2_sg_short_len(sg))
+		return sg->len & 0x1FFFF;
+	return sg->len;
+}
+
+/**
+ * dpaa2_sg_set_len() - Set the length in SG entry
+ * @sg: the given scatter-gathering object.
+ * @len: the length to be set.
+ */
+static inline void dpaa2_sg_set_len(struct dpaa2_sg_entry *sg, u32 len)
+{
+	sg->len = len;
+}
+
+/**
+ * dpaa2_sg_get_offset() - Get the offset in SG entry
+ * @sg: the given scatter-gathering object.
+ *
+ * Return the offset.
+ */
+static inline u16 dpaa2_sg_get_offset(const struct dpaa2_sg_entry *sg)
+{
+	return (u16)(sg->bpid_offset >> 16) & 0x0FFF;
+}
+
+/**
+ * dpaa2_sg_set_offset() - Set the offset in SG entry
+ * @sg: the given scatter-gathering object.
+ * @offset: the offset to be set.
+ */
+static inline void dpaa2_sg_set_offset(struct dpaa2_sg_entry *sg,
+				       u16 offset)
+{
+	sg->bpid_offset &= 0xF000FFFF;
+	sg->bpid_offset |= (u32)offset << 16;
+}
+
+/**
+ * dpaa2_sg_get_format() - Get the SG format in SG entry
+ * @sg: the given scatter-gathering object.
+ *
+ * Return the format.
+ */
+static inline enum dpaa2_sg_format
+	dpaa2_sg_get_format(const struct dpaa2_sg_entry *sg)
+{
+	return (enum dpaa2_sg_format)((sg->bpid_offset >> 28) & 0x3);
+}
+
+/**
+ * dpaa2_sg_set_format() - Set the SG format in SG entry
+ * @sg: the given scatter-gathering object.
+ * @format: the format to be set.
+ */
+static inline void dpaa2_sg_set_format(struct dpaa2_sg_entry *sg,
+				       enum dpaa2_sg_format format)
+{
+	sg->bpid_offset &= 0xCFFFFFFF;
+	sg->bpid_offset |= (u32)format << 28;
+}
+
+/**
+ * dpaa2_sg_get_bpid() - Get the buffer pool id in SG entry
+ * @sg: the given scatter-gathering object.
+ *
+ * Return the bpid.
+ */
+static inline u16 dpaa2_sg_get_bpid(const struct dpaa2_sg_entry *sg)
+{
+	return (u16)(sg->bpid_offset & 0x3FFF);
+}
+
+/**
+ * dpaa2_sg_set_bpid() - Set the buffer pool id in SG entry
+ * @sg: the given scatter-gathering object.
+ * @bpid: the bpid to be set.
+ */
+static inline void dpaa2_sg_set_bpid(struct dpaa2_sg_entry *sg, u16 bpid)
+{
+	sg->bpid_offset &= 0xFFFFC000;
+	sg->bpid_offset |= (u32)bpid;
+}
+
+/**
+ * dpaa2_sg_is_final() - Check final bit in SG entry
+ * @sg: the given scatter-gathering object.
+ *
+ * Return bool.
+ */
+static inline bool dpaa2_sg_is_final(const struct dpaa2_sg_entry *sg)
+{
+	return !!(sg->bpid_offset >> 31);
+}
+
+/**
+ * dpaa2_sg_set_final() - Set the final bit in SG entry
+ * @sg: the given scatter-gathering object.
+ * @final: the final boolean to be set.
+ */
+static inline void dpaa2_sg_set_final(struct dpaa2_sg_entry *sg, bool final)
+{
+	sg->bpid_offset &= 0x7FFFFFFF;
+	sg->bpid_offset |= (u32)final << 31;
+}
+
+/* Endianness conversion helper functions
+ * The accelerator drivers which construct / read scatter gather entries
+ * need to call these in order to account for endianness mismatches between
+ * hardware and cpu
+ */
+#ifdef __BIG_ENDIAN
+/**
+ * dpaa2_sg_cpu_to_le() - convert scatter gather entry from native cpu
+ * format to little endian format.
+ * @sg: the given scatter gather entry.
+ */
+static inline void dpaa2_sg_cpu_to_le(struct dpaa2_sg_entry *sg)
+{
+	uint32_t *p = (uint32_t *)sg;
+	int i;
+
+	for (i = 0; i < sizeof(*sg) / sizeof(u32); i++)
+		cpu_to_le32s(p++);
+}
+
+/**
+ * dpaa2_sg_le_to_cpu() - convert scatter gather entry from little endian
+ * format to native cpu format.
+ * @sg: the given scatter gather entry.
+ */
+static inline void dpaa2_sg_le_to_cpu(struct dpaa2_sg_entry *sg)
+{
+	uint32_t *p = (uint32_t *)sg;
+	int i;
+
+	for (i = 0; i < sizeof(*sg) / sizeof(u32); i++)
+		le32_to_cpus(p++);
+}
+#else
+#define dpaa2_sg_cpu_to_le(sg)
+#define dpaa2_sg_le_to_cpu(sg)
+#endif /* __BIG_ENDIAN */
+
+/**
+ * struct dpaa2_fl_entry - structure for frame list entry.
+ * @addr_lo: the lower 32 bits of address.
+ * @addr_hi: the upper 32 bits of address.
+ * @len: the length in this frame list entry.
+ * @bpid_offset: offset in the MS 16 bits, BPID in the LS 16 bits.
+ * @frc: frame context.
+ * @ctrl: the 32-bit control bits including dd, sc, ... va, err.
+ * @flc_lo: the lower 32 bits of flow context.
+ * @flc_hi: the upper 32 bits of flow context.
+ *
+ * Frame List Entry (FLE)
+ * Identical to dpaa2_fd.simple layout, but some bits are different
+ */
+struct dpaa2_fl_entry {
+	u32 addr_lo;
+	u32 addr_hi;
+	u32 len;
+	u32 bpid_offset;
+	u32 frc;
+	u32 ctrl;
+	u32 flc_lo;
+	u32 flc_hi;
+};
+
+enum dpaa2_fl_format {
+	dpaa2_fl_single = 0,
+	dpaa2_fl_res,
+	dpaa2_fl_sg
+};
+
+/**
+ * dpaa2_fl_get_addr() - Get address in the frame list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the address.
+ */
+static inline dma_addr_t dpaa2_fl_get_addr(const struct dpaa2_fl_entry *fle)
+{
+	return (dma_addr_t)((((uint64_t)fle->addr_hi) << 32) + fle->addr_lo);
+}
+
+/**
+ * dpaa2_fl_set_addr() - Set the address in the frame list entry
+ * @fle: the given frame list entry.
+ * @addr: the address needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_addr(struct dpaa2_fl_entry *fle,
+				     dma_addr_t addr)
+{
+	fle->addr_hi = upper_32_bits(addr);
+	fle->addr_lo = lower_32_bits(addr);
+}
+
+/**
+ * dpaa2_fl_get_flc() - Get the flow context in the frame list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the flow context.
+ */
+static inline dma_addr_t dpaa2_fl_get_flc(const struct dpaa2_fl_entry *fle)
+{
+	return (dma_addr_t)((((uint64_t)fle->flc_hi) << 32) + fle->flc_lo);
+}
+
+/**
+ * dpaa2_fl_set_flc() - Set the flow context in the frame list entry
+ * @fle: the given frame list entry.
+ * @flc_addr: the flow context address needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_flc(struct dpaa2_fl_entry *fle,
+				    dma_addr_t flc_addr)
+{
+	fle->flc_hi = upper_32_bits(flc_addr);
+	fle->flc_lo = lower_32_bits(flc_addr);
+}
+
+/**
+ * dpaa2_fl_get_len() - Get the length in the frame list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the length.
+ */
+static inline u32 dpaa2_fl_get_len(const struct dpaa2_fl_entry *fle)
+{
+	return fle->len;
+}
+
+/**
+ * dpaa2_fl_set_len() - Set the length in the frame list entry
+ * @fle: the given frame list entry.
+ * @len: the length needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_len(struct dpaa2_fl_entry *fle, u32 len)
+{
+	fle->len = len;
+}
+
+/**
+ * dpaa2_fl_get_offset() - Get the offset in the frame list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the offset.
+ */
+static inline uint16_t dpaa2_fl_get_offset(const struct dpaa2_fl_entry *fle)
+{
+	return (uint16_t)(fle->bpid_offset >> 16) & 0x0FFF;
+}
+
+/**
+ * dpaa2_fl_set_offset() - Set the offset in the frame list entry
+ * @fle: the given frame list entry.
+ * @offset: the offset needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_offset(struct dpaa2_fl_entry *fle,
+				       uint16_t offset)
+{
+	fle->bpid_offset &= 0xF000FFFF;
+	fle->bpid_offset |= (u32)(offset & 0x0FFF) << 16;
+}
+
+/**
+ * dpaa2_fl_get_format() - Get the format in the frame list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the frame list format.
+ */
+static inline enum dpaa2_fl_format dpaa2_fl_get_format(
+	const struct dpaa2_fl_entry *fle)
+{
+	return (enum dpaa2_fl_format)((fle->bpid_offset >> 28) & 0x3);
+}
+
+/**
+ * dpaa2_fl_set_format() - Set the format in the frame list entry
+ * @fle: the given frame list entry.
+ * @format: the frame list format needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_format(struct dpaa2_fl_entry *fle,
+				       enum dpaa2_fl_format format)
+{
+	fle->bpid_offset &= 0xCFFFFFFF;
+	fle->bpid_offset |= (u32)(format & 0x3) << 28;
+}
+
+/**
+ * dpaa2_fl_get_bpid() - Get the buffer pool id in the frame list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the bpid.
+ */
+static inline uint16_t dpaa2_fl_get_bpid(const struct dpaa2_fl_entry *fle)
+{
+	return (uint16_t)(fle->bpid_offset & 0x3FFF);
+}
+
+/**
+ * dpaa2_fl_set_bpid() - Set the buffer pool id in the frame list entry
+ * @fle: the given frame list entry.
+ * @bpid: the buffer pool id needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_bpid(struct dpaa2_fl_entry *fle, uint16_t bpid)
+{
+	fle->bpid_offset &= 0xFFFFC000;
+	fle->bpid_offset |= (u32)bpid;
+}
+
+/**
+ * dpaa2_fl_is_final() - Check whether the final bit is set in the frame
+ * list entry
+ * @fle: the given frame list entry.
+ *
+ * Return the final bit setting.
+ */
+static inline bool dpaa2_fl_is_final(const struct dpaa2_fl_entry *fle)
+{
+	return !!(fle->bpid_offset >> 31);
+}
+
+/**
+ * dpaa2_fl_set_final() - Set the final bit in the frame list entry
+ * @fle: the given frame list entry.
+ * @final: the final bit needs to be set.
+ *
+ */
+static inline void dpaa2_fl_set_final(struct dpaa2_fl_entry *fle, bool final)
+{
+	fle->bpid_offset &= 0x7FFFFFFF;
+	fle->bpid_offset |= (u32)final << 31;
+}
+
+/**
+ * struct dpaa2_dq - the qman result structure
+ * @dont_manipulate_directly: the 16 32-bit words needed to represent the
+ * whole possible qman dequeue result.
+ *
+ * When frames are dequeued, the FDs show up inside "dequeue" result structures
+ * (if at all, not all dequeue results contain valid FDs). This structure type
+ * is intentionally defined without internal detail, and the only reason it
+ * isn't declared opaquely (without size) is to allow the user to provide
+ * suitably-sized (and aligned) memory for these entries.
+ */
+struct dpaa2_dq {
+	uint32_t dont_manipulate_directly[16];
+};
+
+/* Parsing frame dequeue results */
+/* FQ empty */
+#define DPAA2_DQ_STAT_FQEMPTY       0x80
+/* FQ held active */
+#define DPAA2_DQ_STAT_HELDACTIVE    0x40
+/* FQ force eligible */
+#define DPAA2_DQ_STAT_FORCEELIGIBLE 0x20
+/* Valid frame */
+#define DPAA2_DQ_STAT_VALIDFRAME    0x10
+/* FQ ODP enable */
+#define DPAA2_DQ_STAT_ODPVALID      0x04
+/* Volatile dequeue */
+#define DPAA2_DQ_STAT_VOLATILE      0x02
+/* volatile dequeue command is expired */
+#define DPAA2_DQ_STAT_EXPIRED       0x01
+
+/**
+ * dpaa2_dq_flags() - Get the stat field of dequeue response
+ * @dq: the dequeue result.
+ */
+uint32_t dpaa2_dq_flags(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_is_pull() - Check whether the dq response is from a pull
+ * command.
+ * @dq: the dequeue result.
+ *
+ * Return 1 for volatile (pull) dequeue, 0 for static dequeue.
+ */
+static inline int dpaa2_dq_is_pull(const struct dpaa2_dq *dq)
+{
+	return (int)(dpaa2_dq_flags(dq) & DPAA2_DQ_STAT_VOLATILE);
+}
+
+/**
+ * dpaa2_dq_is_pull_complete() - Check whether the pull command is completed.
+ * @dq: the dequeue result.
+ *
+ * Return boolean.
+ */
+static inline int dpaa2_dq_is_pull_complete(
+					const struct dpaa2_dq *dq)
+{
+	return (int)(dpaa2_dq_flags(dq) & DPAA2_DQ_STAT_EXPIRED);
+}
+
+/**
+ * dpaa2_dq_seqnum() - Get the seqnum field in dequeue response
+ * @dq: the dequeue result.
+ *
+ * seqnum is valid only if the VALIDFRAME flag is TRUE.
+ *
+ * Return seqnum.
+ */
+uint16_t dpaa2_dq_seqnum(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_odpid() - Get the odpid field in dequeue response
+ * @dq: the dequeue result.
+ *
+ * odpid is valid only if the ODPVALID flag is TRUE.
+ *
+ * Return odpid.
+ */
+uint16_t dpaa2_dq_odpid(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_fqid() - Get the fqid in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return fqid.
+ */
+uint32_t dpaa2_dq_fqid(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_byte_count() - Get the byte count in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the byte count remaining in the FQ.
+ */
+uint32_t dpaa2_dq_byte_count(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_frame_count() - Get the frame count in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the frame count remaining in the FQ.
+ */
+uint32_t dpaa2_dq_frame_count(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_fqd_ctx() - Get the frame queue context in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the frame queue context.
+ */
+uint64_t dpaa2_dq_fqd_ctx(const struct dpaa2_dq *dq);
+
+/**
+ * dpaa2_dq_fd() - Get the frame descriptor in dequeue response
+ * @dq: the dequeue result.
+ *
+ * Return the frame descriptor.
+ */
+const struct dpaa2_fd *dpaa2_dq_fd(const struct dpaa2_dq *dq);
+
+#endif /* __FSL_DPAA2_FD_H */
--- /dev/null
+++ b/drivers/staging/fsl-mc/include/fsl_dpaa2_io.h
@@ -0,0 +1,619 @@
+/* Copyright 2014 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef __FSL_DPAA2_IO_H
+#define __FSL_DPAA2_IO_H
+
+#include "fsl_dpaa2_fd.h"
+
+struct dpaa2_io;
+struct dpaa2_io_store;
+
+/**
+ * DOC: DPIO Service Management
+ *
+ * The DPIO service provides APIs for users to interact with the datapath
+ * by enqueueing and dequeuing frame descriptors.
+ *
+ * The following set of APIs can be used to enqueue and dequeue frames
+ * as well as producing notification callbacks when data is available
+ * for dequeue.
+ */
+
+/**
+ * struct dpaa2_io_desc - The DPIO descriptor.
+ * @receives_notifications: Use notification mode.
+ * @has_irq: use irq-based processing.
+ * @will_poll: use poll processing.
+ * @has_8prio: set for channel with 8 priority WQs.
+ * @cpu: the cpu index that at least interrupt handlers will execute on.
+ * @stash_affinity: the stash affinity for this portal, favouring 'cpu'.
+ * @regs_cena: the cache enabled regs.
+ * @regs_cinh: the cache inhibited regs.
+ * @dpio_id: The dpio index.
+ * @qman_version: the qman version.
+ *
+ * Describe the attributes and features of the DPIO object.
+ */
+struct dpaa2_io_desc {
+	/* non-zero iff the DPIO has a channel */
+	int receives_notifications;
+	/* non-zero if the DPIO portal interrupt is handled. If so, the
+	 * caller/OS handles the interrupt and calls dpaa2_io_service_irq(). */
+	int has_irq;
+	/* non-zero if the caller/OS is prepared to call the
+	 * dpaa2_io_service_poll() routine as part of its run-to-completion (or
+	 * scheduling) loop. If so, the DPIO service may dynamically switch some
+	 * of its processing between polling-based and irq-based. It is an
+	 * illegal combination to have (!has_irq && !will_poll). */
+	int will_poll;
+	/* ignored unless 'receives_notifications'. Non-zero iff the channel has
+	 * 8 priority WQs, otherwise the channel has 2. */
+	int has_8prio;
+	/* the cpu index that at least interrupt handlers will execute on. And
+	 * if 'stash_affinity' is non-zero, the cache targeted by stash
+	 * transactions is affine to this cpu. */
+	int cpu;
+	/* non-zero if stash transactions for this portal favour 'cpu' over
+	 * other CPUs. (Eg. zero if there's no stashing, or stashing is to
+	 * shared cache.) */
+	int stash_affinity;
+	/* Caller-provided flags, determined by bus-scanning and/or creation of
+	 * DPIO objects via MC commands. */
+	void *regs_cena;
+	void *regs_cinh;
+	int dpio_id;
+	uint32_t qman_version;
+};
+
+/**
+ * dpaa2_io_create() - create a dpaa2_io object.
+ * @desc: the dpaa2_io descriptor
+ *
+ * Activates a "struct dpaa2_io" corresponding to the given config of an actual
+ * DPIO object. This handle can be used on its own (like a one-portal "DPIO
+ * service") or later be added to a service-type "struct dpaa2_io" object. Note,
+ * the information required from 'desc' is copied, so the caller is free to do
+ * as they wish with the input parameter upon return.
+ *
+ * Return a valid dpaa2_io object for success, or NULL for failure.
+ */
+struct dpaa2_io *dpaa2_io_create(const struct dpaa2_io_desc *desc);
+
+/**
+ * dpaa2_io_create_service() -  Create an (initially empty) DPIO service.
+ *
+ * Return a valid dpaa2_io object for success, or NULL for failure.
+ */
+struct dpaa2_io *dpaa2_io_create_service(void);
+
+/**
+ * dpaa2_io_default_service() - Use the driver's own global (and initially
+ * empty) DPIO service.
+ *
+ * This increments the reference count, so remember to call dpaa2_io_down()
+ * once for each call to this function.
+ *
+ * Return a valid dpaa2_io object for success, or NULL for failure.
+ */
+struct dpaa2_io *dpaa2_io_default_service(void);
+
+/**
+ * dpaa2_io_down() - release the dpaa2_io object.
+ * @d: the dpaa2_io object to be released.
+ *
+ * The "struct dpaa2_io" type can represent an individual DPIO object (as
+ * described by "struct dpaa2_io_desc") or an instance of a "DPIO service",
+ * which can be used to group/encapsulate multiple DPIO objects. In all cases,
+ * each handle obtained should be released using this function.
+ */
+void dpaa2_io_down(struct dpaa2_io *d);
+
+/**
+ * dpaa2_io_service_add() - Add the given DPIO object to the given DPIO service.
+ * @service: the given DPIO service.
+ * @obj: the given DPIO object.
+ *
+ * 'service' must have been created by dpaa2_io_create_service() and 'obj'
+ * must have been created by dpaa2_io_create(). This increments the reference
+ * count on the object that 'obj' refers to, so the user could call
+ * dpaa2_io_down(obj) after this and the object will persist within the service
+ * (and will be destroyed when the service is destroyed).
+ *
+ * Return 0 for success, or -EINVAL for failure.
+ */
+int dpaa2_io_service_add(struct dpaa2_io *service, struct dpaa2_io *obj);
+
+/**
+ * dpaa2_io_get_descriptor() - Get the DPIO descriptor of the given DPIO object.
+ * @obj: the given DPIO object.
+ * @desc: the returned DPIO descriptor.
+ *
+ * This function will return failure if the given dpaa2_io struct represents a
+ * service rather than an individual DPIO object, otherwise it returns zero and
+ * the given 'desc' structure is filled in.
+ *
+ * Return 0 for success, or -EINVAL for failure.
+ */
+int dpaa2_io_get_descriptor(struct dpaa2_io *obj, struct dpaa2_io_desc *desc);
+
+/**
+ * dpaa2_io_poll() -  Process any notifications and h/w-initiated events that
+ * are polling-driven.
+ * @obj: the given DPIO object.
+ *
+ * Obligatory for DPIO objects that have dpaa2_io_desc::will_poll non-zero.
+ *
+ * Return 0 for success, or -EINVAL for failure.
+ */
+int dpaa2_io_poll(struct dpaa2_io *obj);
+
+/**
+ * dpaa2_io_irq() - Process any notifications and h/w-initiated events that are
+ * irq-driven.
+ * @obj: the given DPIO object.
+ *
+ * Obligatory for DPIO objects that have dpaa2_io_desc::has_irq non-zero.
+ *
+ * Return IRQ_HANDLED for success, or -EINVAL for failure.
+ */
+int dpaa2_io_irq(struct dpaa2_io *obj);
+
+/**
+ * dpaa2_io_pause_poll() - Used to stop polling.
+ * @obj: the given DPIO object.
+ *
+ * If a polling application is going to stop polling for a period of time and
+ * supports interrupt processing, it can call this function to convert all
+ * processing to IRQ. (Eg. when sleeping.)
+ *
+ * Return -EINVAL.
+ */
+int dpaa2_io_pause_poll(struct dpaa2_io *obj);
+
+/**
+ * dpaa2_io_resume_poll() - Resume polling
+ * @obj: the given DPIO object.
+ *
+ * Return -EINVAL.
+ */
+int dpaa2_io_resume_poll(struct dpaa2_io *obj);
+
+/**
+ * dpaa2_io_service_notifications() - Get a mask of cpus that the DPIO service
+ * can receive notifications on.
+ * @s: the given DPIO object.
+ * @mask: the mask of cpus.
+ *
+ * Note that this is a run-time snapshot. If things like cpu-hotplug are
+ * supported in the target system, then an attempt to register notifications
+ * for a cpu that appears present in the given mask might fail if that cpu has
+ * gone offline in the meantime.
+ */
+void dpaa2_io_service_notifications(struct dpaa2_io *s, cpumask_t *mask);
+
+/**
+ * dpaa2_io_service_stashing() - Get a mask of cpus that the DPIO service
+ * has stash affinity to.
+ * @s: the given DPIO object.
+ * @mask: the mask of cpus.
+ */
+void dpaa2_io_service_stashing(struct dpaa2_io *s, cpumask_t *mask);
+
+/**
+ * dpaa2_io_service_has_nonaffine() - Check the DPIO service's cpu affinity
+ * for stashing.
+ * @s: the given DPIO object.
+ *
+ * Return a boolean, whether or not the DPIO service has resources that have no
+ * particular cpu affinity for stashing. (Useful to know: if you wish to operate
+ * on CPUs that the service has no affinity to, you would choose resources that
+ * are neutral rather than affine to a different CPU.) Unlike other
+ * service-specific APIs, this one doesn't return an error if it is passed a
+ * non-service object, so don't do that.
+ */
+int dpaa2_io_service_has_nonaffine(struct dpaa2_io *s);
+
+/*************************/
+/* Notification handling */
+/*************************/
+
+/**
+ * struct dpaa2_io_notification_ctx - The DPIO notification context structure.
+ * @cb: the callback to be invoked when the notification arrives.
+ * @is_cdan: Zero/FALSE for FQDAN, non-zero/TRUE for CDAN.
+ * @id: FQID or channel ID, needed for rearm.
+ * @desired_cpu: the cpu on which the notifications will show up.
+ * @actual_cpu: the cpu on which the notification actually shows up.
+ * @migration_cb: callback function used for migration.
+ * @dpio_id: the dpio index.
+ * @qman64: the 64-bit context value that shows up in the FQDAN/CDAN.
+ * @node: the list node.
+ * @dpio_private: the dpio object internal to dpio_service.
+ *
+ * When a FQDAN/CDAN registration is made (eg. by DPNI/DPCON/DPAI code), a
+ * context of the following type is used. The caller can embed it within a
+ * larger structure in order to add state that is tracked along with the
+ * notification (this may be useful when callbacks are invoked that pass this
+ * notification context as a parameter).
+ */
+struct dpaa2_io_notification_ctx {
+	void (*cb)(struct dpaa2_io_notification_ctx *);
+	int is_cdan;
+	uint32_t id;
+	/* This specifies which cpu the user wants notifications to show up on
+	 * (ie. to execute 'cb'). If notification-handling on that cpu is not
+	 * available at the time of notification registration, the registration
+	 * will fail. */
+	int desired_cpu;
+	/* If the target platform supports cpu-hotplug or other features
+	 * (related to power-management, one would expect) that can migrate IRQ
+	 * handling of a given DPIO object, then this value will potentially be
+	 * different to 'desired_cpu' at run-time. */
+	int actual_cpu;
+	/* And if migration does occur and this callback is non-NULL, it will
+	 * be invoked prior to any further notification callbacks executing on
+	 * 'newcpu'. Note that 'oldcpu' is what 'actual_cpu' was prior to the
+	 * migration, and 'newcpu' is what it is now. Both could conceivably be
+	 * different to 'desired_cpu'. */
+	void (*migration_cb)(struct dpaa2_io_notification_ctx *,
+			     int oldcpu, int newcpu);
+	/* These are returned from dpaa2_io_service_register().
+	 * 'dpio_id' is the dpaa2_io_desc::dpio_id value of the DPIO object that
+	 * has been selected by the service for receiving the notifications. The
+	 * caller can use this value in the MC command that attaches the FQ (or
+	 * channel) of their DPNI (or DPCON, respectively) to this DPIO for
+	 * notification-generation.
+	 * 'qman64' is the 64-bit context value that needs to be sent in the
+	 * same MC command in order to be programmed into the FQ or channel -
+	 * this is the 64-bit value that shows up in the FQDAN/CDAN messages to
+	 * the DPIO object, and the DPIO service specifies this value back to
+	 * the caller so that the notifications that show up will be
+	 * comprehensible/demux-able to the DPIO service. */
+	int dpio_id;
+	uint64_t qman64;
+	/* These fields are internal to the DPIO service once the context is
+	 * registered. TBD: may require more internal state fields. */
+	struct list_head node;
+	void *dpio_private;
+};
+
+/**
+ * dpaa2_io_service_register() - Prepare for servicing of FQDAN or CDAN
+ * notifications on the given DPIO service.
+ * @service: the given DPIO service.
+ * @ctx: the notification context.
+ *
+ * The caller should make the MC command to attach its DPNI/DPCON/DPAI device
+ * to a DPIO object only after this function has been called. In that way, (a) the
+ * DPIO service is "ready" to handle a notification arrival (which might happen
+ * before the "attach" command to MC has returned control of execution back to
+ * the caller), and (b) the DPIO service can provide back to the caller the
+ * 'dpio_id' and 'qman64' parameters that it should pass along in the MC command
+ * in order for the DPNI/DPCON/DPAI resources to be configured to produce the
+ * right notification fields to the DPIO service.
+ *
+ * Return 0 for success, or -ENODEV for failure.
+ */
+int dpaa2_io_service_register(struct dpaa2_io *service,
+			     struct dpaa2_io_notification_ctx *ctx);
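+
+/*
+ * Example (hypothetical usage sketch - the 'my_priv' structure, 'my_fqdan_cb'
+ * and the surrounding error handling are illustrative, not part of this API):
+ *
+ *	struct my_priv {
+ *		struct dpaa2_io_notification_ctx ctx;
+ *		// ...driver state tracked alongside the notification...
+ *	};
+ *
+ *	static void my_fqdan_cb(struct dpaa2_io_notification_ctx *ctx)
+ *	{
+ *		struct my_priv *priv = container_of(ctx, struct my_priv, ctx);
+ *		// FQ is now disarmed; pull-dequeue from it, then rearm.
+ *	}
+ *
+ *	priv->ctx.cb = my_fqdan_cb;
+ *	priv->ctx.is_cdan = 0;
+ *	priv->ctx.id = fqid;
+ *	priv->ctx.desired_cpu = -1;
+ *	err = dpaa2_io_service_register(service, &priv->ctx);
+ *	if (err)
+ *		return err;
+ *	// Pass priv->ctx.dpio_id and priv->ctx.qman64 in the subsequent MC
+ *	// command that attaches the FQ to the DPIO object.
+ */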
+
+/**
+ * dpaa2_io_service_deregister() - The opposite of 'register'.
+ * @service: the given DPIO service.
+ * @ctx: the notification context.
+ *
+ * Note that 'register' should be called *before* making the MC call that
+ * attaches the notification-producing device to the notification-handling
+ * DPIO service, whereas 'deregister' should be called *after* making the MC
+ * call that detaches the notification-producing device.
+ *
+ * Return 0 for success.
+ */
+int dpaa2_io_service_deregister(struct dpaa2_io *service,
+			       struct dpaa2_io_notification_ctx *ctx);
+
+/**
+ * dpaa2_io_service_rearm() - Rearm the notification for the given DPIO service.
+ * @service: the given DPIO service.
+ * @ctx: the notification context.
+ *
+ * Once a FQDAN/CDAN has been produced, the corresponding FQ/channel is
+ * considered "disarmed". I.e. the user can issue pull dequeue operations on
+ * that traffic source for as long as it likes. Eventually it may wish to
+ * "rearm" that source to allow it to produce another FQDAN/CDAN; that is what
+ * this function achieves.
+ *
+ * Return 0 for success, or -ENODEV if no service is available, or -EBUSY/-EIO
+ * if the rearm could not be performed because setting the CDAN or scheduling
+ * the FQ failed.
+ */
+int dpaa2_io_service_rearm(struct dpaa2_io *service,
+			  struct dpaa2_io_notification_ctx *ctx);
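+
+/*
+ * Example (hypothetical sketch): once the callback has drained the source,
+ * rearm it so that another FQDAN/CDAN can be produced:
+ *
+ *	static void my_cb(struct dpaa2_io_notification_ctx *ctx)
+ *	{
+ *		// ...pull-dequeue until the FQ/channel is empty...
+ *		err = dpaa2_io_service_rearm(service, ctx);
+ *		// -EBUSY/-EIO indicates the rearm itself failed and may be
+ *		// retried.
+ *	}
+ */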
+
+/**
+ * dpaa2_io_from_registration() - Get the DPIO object from the given notification
+ * context.
+ * @ctx: the given notification context.
+ * @ret: the returned DPIO object.
+ *
+ * Like 'dpaa2_io_service_get_persistent()' (see below), except that the
+ * returned handle is not selected based on a 'cpu' argument, but is the same
+ * DPIO object that the given notification context is registered against. The
+ * returned handle carries a reference count, so a corresponding dpaa2_io_down()
+ * would be required when the reference is no longer needed.
+ *
+ * Return 0 for success, or -EINVAL for failure.
+ */
+int dpaa2_io_from_registration(struct dpaa2_io_notification_ctx *ctx,
+			      struct dpaa2_io **ret);
+
+/**********************************/
+/* General usage of DPIO services */
+/**********************************/
+
+/**
+ * dpaa2_io_service_get_persistent() - Get the DPIO resource from the given
+ * notification context and cpu.
+ * @service: the DPIO service.
+ * @cpu: the cpu that the DPIO resource has stashing affinity to.
+ * @ret: the returned DPIO resource.
+ *
+ * The various DPIO interfaces can accept a "struct dpaa2_io" handle that refers
+ * to an individual DPIO object or to a whole service. In the latter case, an
+ * internal choice is made for each operation. This function supports the former
+ * case, by selecting an individual DPIO object *from* the service in order for
+ * it to be used multiple times to provide "persistence". The returned handle
+ * also carries a reference count, so a corresponding dpaa2_io_down() would be
+ * required when the reference is no longer needed. Note, a parameter of -1 for
+ * 'cpu' will select a DPIO resource that has no particular stashing affinity to
+ * any cpu (eg. one that stashes to platform cache).
+ *
+ * Return 0 for success, or -ENODEV for failure.
+ */
+int dpaa2_io_service_get_persistent(struct dpaa2_io *service, int cpu,
+				   struct dpaa2_io **ret);
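+
+/*
+ * Example (hypothetical sketch): obtain a DPIO object affine to the current
+ * cpu for repeated use, then drop the reference when done:
+ *
+ *	struct dpaa2_io *d;
+ *
+ *	err = dpaa2_io_service_get_persistent(service, smp_processor_id(), &d);
+ *	if (!err) {
+ *		// ...use 'd' with the enqueue/dequeue/release APIs...
+ *		dpaa2_io_down(d);
+ *	}
+ */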
+
+/*****************/
+/* Pull dequeues */
+/*****************/
+
+/**
+ * dpaa2_io_service_pull_fq() - Pull dequeue frames from a frame queue.
+ * @d: the given DPIO service.
+ * @fqid: the given frame queue id.
+ * @s: the dpaa2_io_store object for the result.
+ *
+ * To support DCA/order-preservation, it will be necessary to support an
+ * alternative form, because they must ultimately dequeue to DQRR rather than a
+ * user-supplied dpaa2_io_store. Furthermore, those dequeue results will
+ * "complete" using a caller-provided callback (from DQRR processing) rather
+ * than the caller explicitly looking at their dpaa2_io_store for results. Eg.
+ * the alternative form will likely take a callback parameter rather than a
+ * store parameter. Ignoring it for now to keep the picture clearer.
+ *
+ * Return 0 for success, or error code for failure.
+ */
+int dpaa2_io_service_pull_fq(struct dpaa2_io *d, uint32_t fqid,
+			    struct dpaa2_io_store *s);
+
+/**
+ * dpaa2_io_service_pull_channel() - Pull dequeue frames from a channel.
+ * @d: the given DPIO service.
+ * @channelid: the given channel id.
+ * @s: the dpaa2_io_store object for the result.
+ *
+ * To support DCA/order-preservation, it will be necessary to support an
+ * alternative form, because they must ultimately dequeue to DQRR rather than a
+ * user-supplied dpaa2_io_store. Furthermore, those dequeue results will
+ * "complete" using a caller-provided callback (from DQRR processing) rather
+ * than the caller explicitly looking at their dpaa2_io_store for results. Eg.
+ * the alternative form will likely take a callback parameter rather than a
+ * store parameter. Ignoring it for now to keep the picture clearer.
+ *
+ * Return 0 for success, or error code for failure.
+ */
+int dpaa2_io_service_pull_channel(struct dpaa2_io *d, uint32_t channelid,
+				 struct dpaa2_io_store *s);
+
+/************/
+/* Enqueues */
+/************/
+
+/**
+ * dpaa2_io_service_enqueue_fq() - Enqueue a frame to a frame queue.
+ * @d: the given DPIO service.
+ * @fqid: the given frame queue id.
+ * @fd: the frame descriptor which is enqueued.
+ *
+ * This definition bypasses some features that are not expected to be priority-1
+ * features, and may not be needed at all via current assumptions (QBMan's
+ * feature set is wider than the MC object model is intending to support,
+ * initially at least). Plus, keeping them out (for now) keeps the API view
+ * simpler. Missing features are:
+ *  - enqueue confirmation (results DMA'd back to the user)
+ *  - ORP
+ *  - DCA/order-preservation (see note in "pull dequeues")
+ *  - enqueue consumption interrupts
+ *
+ * Return 0 for successful enqueue, or -EBUSY if the enqueue ring is not ready,
+ * or -ENODEV if there is no dpio service.
+ */
+int dpaa2_io_service_enqueue_fq(struct dpaa2_io *d,
+			       uint32_t fqid,
+			       const struct dpaa2_fd *fd);
+
+/**
+ * dpaa2_io_service_enqueue_qd() - Enqueue a frame to a QD.
+ * @d: the given DPIO service.
+ * @qdid: the given queuing destination id.
+ * @prio: the given queuing priority.
+ * @qdbin: the given queuing destination bin.
+ * @fd: the frame descriptor which is enqueued.
+ *
+ * This definition bypasses some features that are not expected to be priority-1
+ * features, and may not be needed at all via current assumptions (QBMan's
+ * feature set is wider than the MC object model is intending to support,
+ * initially at least). Plus, keeping them out (for now) keeps the API view
+ * simpler. Missing features are:
+ *  - enqueue confirmation (results DMA'd back to the user)
+ *  - ORP
+ *  - DCA/order-preservation (see note in "pull dequeues")
+ *  - enqueue consumption interrupts
+ *
+ * Return 0 for successful enqueue, or -EBUSY if the enqueue ring is not ready,
+ * or -ENODEV if there is no dpio service.
+ */
+int dpaa2_io_service_enqueue_qd(struct dpaa2_io *d,
+			       uint32_t qdid, uint8_t prio, uint16_t qdbin,
+			       const struct dpaa2_fd *fd);
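+
+/*
+ * Example (hypothetical sketch - 'fd' is assumed to have been populated by
+ * the caller): enqueue one frame via a queuing destination, retrying while
+ * the enqueue ring is busy:
+ *
+ *	do {
+ *		err = dpaa2_io_service_enqueue_qd(d, qdid, prio, qdbin, &fd);
+ *	} while (err == -EBUSY);
+ */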
+
+/*******************/
+/* Buffer handling */
+/*******************/
+
+/**
+ * dpaa2_io_service_release() - Release buffers to a buffer pool.
+ * @d: the given DPIO object.
+ * @bpid: the buffer pool id.
+ * @buffers: the buffers to be released.
+ * @num_buffers: the number of the buffers to be released.
+ *
+ * Return 0 for success, and negative error code for failure.
+ */
+int dpaa2_io_service_release(struct dpaa2_io *d,
+			    uint32_t bpid,
+			    const uint64_t *buffers,
+			    unsigned int num_buffers);
+
+/**
+ * dpaa2_io_service_acquire() - Acquire buffers from a buffer pool.
+ * @d: the given DPIO object.
+ * @bpid: the buffer pool id.
+ * @buffers: the buffer addresses for acquired buffers.
+ * @num_buffers: the expected number of the buffers to acquire.
+ *
+ * Return a negative error code if the command failed, otherwise it returns
+ * the number of buffers acquired, which may be less than the number requested.
+ * Eg. if the buffer pool is empty, this will return zero.
+ */
+int dpaa2_io_service_acquire(struct dpaa2_io *d,
+			    uint32_t bpid,
+			    uint64_t *buffers,
+			    unsigned int num_buffers);
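+
+/*
+ * Example (hypothetical sketch): release a batch of buffers to a pool and
+ * later draw them back out; acquire may return fewer than requested:
+ *
+ *	uint64_t bufs[7];
+ *	int n;
+ *
+ *	// ...fill bufs[] with the DMA addresses of 7 free buffers...
+ *	err = dpaa2_io_service_release(d, bpid, bufs, 7);
+ *
+ *	n = dpaa2_io_service_acquire(d, bpid, bufs, 7);
+ *	// n == 0 if the pool is empty, n < 0 if the command itself failed.
+ */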
+
+/***************/
+/* DPIO stores */
+/***************/
+
+/* These are reusable memory blocks for retrieving dequeue results into, and to
+ * assist with parsing those results once they show up. They also hide the
+ * details of how to use "tokens" to make detection of DMA results possible (ie.
+ * comparing memory before the DMA and after it) while minimising the needless
+ * clearing/rewriting of those memory locations between uses.
+ */
+
+/**
+ * dpaa2_io_store_create() - Create the dma memory storage for dequeue
+ * result.
+ * @max_frames: the maximum number of dequeue results to store; must be <= 16.
+ * @dev: the device to allow mapping/unmapping the DMAable region.
+ *
+ * Constructor - max_frames must be <= 16. The user provides the
+ * device struct to allow mapping/unmapping of the DMAable region. The area for
+ * storage will be allocated during create. The size of this storage is
+ * "max_frames*sizeof(struct dpaa2_dq)". The 'dpaa2_io_store' returned is a
+ * wrapper structure allocated within the DPIO code, which owns and manages
+ * the allocated store.
+ *
+ * Return a pointer to the dpaa2_io_store on success, or NULL if the storage
+ * could not be allocated.
+ */
+struct dpaa2_io_store *dpaa2_io_store_create(unsigned int max_frames,
+					   struct device *dev);
+
+/**
+ * dpaa2_io_store_destroy() - Destroy the dma memory storage for dequeue
+ * result.
+ * @s: the storage memory to be destroyed.
+ *
+ * Frees the specified storage memory.
+ */
+void dpaa2_io_store_destroy(struct dpaa2_io_store *s);
+
+/**
+ * dpaa2_io_store_next() - Determine when the next dequeue result is available.
+ * @s: the dpaa2_io_store object.
+ * @is_last: indicate whether this is the last frame in the pull command.
+ *
+ * Once dpaa2_io_store has been passed to a function that performs dequeues to
+ * it, like dpaa2_ni_rx(), this function can be used to determine when the next
+ * frame result is available. Once this function returns non-NULL, a subsequent
+ * call to it will try to find the *next* dequeue result.
+ *
+ * Note that if a pull-dequeue has a null result because the target FQ/channel
+ * was empty, then this function will return NULL rather than expect the caller
+ * to always check for this case separately. As such, "is_last" can be used to
+ * differentiate between "end-of-empty-dequeue" and "still-waiting".
+ *
+ * Return dequeue result for a valid dequeue result, or NULL for empty dequeue.
+ */
+struct dpaa2_dq *dpaa2_io_store_next(struct dpaa2_io_store *s, int *is_last);
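+
+/*
+ * Example (hypothetical sketch): issue a pull dequeue into a store and walk
+ * the results as they are DMA'd back:
+ *
+ *	struct dpaa2_io_store *s = dpaa2_io_store_create(16, dev);
+ *	struct dpaa2_dq *dq;
+ *	int is_last = 0;
+ *
+ *	if (!s)
+ *		return -ENOMEM;
+ *	err = dpaa2_io_service_pull_fq(d, fqid, s);
+ *	while (!err && !is_last) {
+ *		dq = dpaa2_io_store_next(s, &is_last);
+ *		if (dq) {
+ *			// ...process one dequeue result...
+ *		}
+ *		// NULL with !is_last means the result has not arrived yet.
+ *	}
+ *	dpaa2_io_store_destroy(s);
+ */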
+
+#ifdef CONFIG_FSL_QBMAN_DEBUG
+/**
+ * dpaa2_io_query_fq_count() - Get the frame and byte count for a given fq.
+ * @d: the given DPIO object.
+ * @fqid: the id of frame queue to be queried.
+ * @fcnt: the queried frame count.
+ * @bcnt: the queried byte count.
+ *
+ * Knowing the FQ count at run-time can be useful in debugging situations.
+ * The instantaneous frame- and byte-count are hereby returned.
+ *
+ * Return 0 for a successful query, and negative error code if query fails.
+ */
+int dpaa2_io_query_fq_count(struct dpaa2_io *d, uint32_t fqid,
+			   uint32_t *fcnt, uint32_t *bcnt);
+
+/**
+ * dpaa2_io_query_bp_count() - Query the number of buffers currently in a
+ * buffer pool.
+ * @d: the given DPIO object.
+ * @bpid: the index of buffer pool to be queried.
+ * @num: the queried number of buffers in the buffer pool.
+ *
+ * Return 0 for a successful query, and negative error code if query fails.
+ */
+int dpaa2_io_query_bp_count(struct dpaa2_io *d, uint32_t bpid,
+			   uint32_t *num);
+#endif
+#endif /* __FSL_DPAA2_IO_H */
