Commit graph

154 commits

Author SHA1 Message Date
Alex Elder
d15180b4ea net: ipa: kill gsi_trans_commit_wait_timeout()
Since the beginning, gsi_trans_commit_wait_timeout() has existed to
allow waiting a limited time for a transaction to complete.  But that
function has never been used.

In fact, there is no use for this function, because a transaction
committed to hardware should *always* complete.  The only reason it
might not complete is if there were a hardware failure, or perhaps a
system configuration error.

Furthermore, if a timeout ever did occur, the IPA hardware would be
in an indeterminate state, from which there is no recovery.  It
would require some sort of complete IPA reset, and would require the
participation of the modem, and at this time there is no such
sequence defined.

So get rid of the definition of gsi_trans_commit_wait_timeout(), and
update a few comments accordingly.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
beb90cba60 net: ipa: specify RX aggregation time limit in config data
Don't assume that a 500 microsecond time limit should be used for
all receive endpoints that support aggregation.  Instead, specify
the time limit to use in the configuration data.

Set a 500 microsecond limit for all existing RX endpoints, as before.

Checking for overflow for the time limit field is a bit complicated.
Rather than duplicate a lot of code in ipa_endpoint_data_valid_one(),
call WARN() if any value is found to be too large when encoding it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
3cebb7c2ed net: ipa: support hard aggregation limits
Add a new flag for AP receive endpoints that indicates whether
a "hard limit" is used as a criterion for closing aggregation.
Add comments explaining the difference between "hard" and "soft"
aggregation limits.

Pass a flag to ipa_aggr_size_kb() so it computes the proper
aggregation size value whether using hard or soft limits.  Move
that function earlier in "ipa_endpoint.c" so it can be used
without a forward-reference.

Update ipa_endpoint_data_valid_one() so it validates endpoints whose
data indicate a hard aggregation limit is used, and so it reports
an error if aggregation flags are set for endpoints that don't have
aggregation enabled.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
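
A standalone C sketch of the size computation described above; the
constants and names are illustrative, not the driver's own.  With a soft
limit, aggregation closes only after the byte limit is crossed, so room
for one more full packet must remain beyond it; a hard limit may use the
whole buffer.

  #include <stdbool.h>
  #include <stdint.h>

  #define EXAMPLE_MTU       1500U   /* illustrative packet size */
  #define EXAMPLE_OVERHEAD   104U   /* illustrative per-packet overhead */

  /* Return the aggregation byte limit for a buffer, in kilobytes. */
  static uint32_t example_aggr_size_kb(uint32_t rx_buffer_size, bool hard_limit)
  {
          if (!hard_limit)
                  rx_buffer_size -= EXAMPLE_MTU + EXAMPLE_OVERHEAD;

          return rx_buffer_size / 1024;
  }
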
Alex Elder
153213f055 net: ipa: make endpoint HOLB drop configurable
Add a new Boolean flag for RX endpoints defining whether HOLB drop
is initially enabled or disabled for the endpoint.  All existing AP
endpoints should have HOLB drop disabled.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22 20:46:12 +01:00
Alex Elder
660e52d651 net: ipa: save a copy of endpoint default config
All elements of the default endpoint configuration are used in the
code when programming an endpoint for use.  But none of the other
configuration data is ever needed once things are initialized.

So rather than saving a pointer to *all* of the configuration data,
save a copy of only the endpoint configuration portion.

This will eventually allow endpoint configuration to be modifiable
at runtime.  But even before that, it means we won't keep a pointer
to configuration data once it's no longer needed.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:24 +01:00
Alex Elder
cf4e73a166 net: ipa: rename a few endpoint config data types
Rename the just-moved data structure types to drop the "_data"
suffix, to make it more obvious they are no longer meant to be used
just as read-only initialization data.  Rename the fields and
variables of these types to use "config" instead of "data" in the
name.  This is another small step meant to facilitate review.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:24 +01:00
Alex Elder
332ef7c814 net: ipa: ignore endianness if there is no header
If we program an RX endpoint to have no header (header length is 0),
header-related endpoint configuration values are meaningless and are
ignored.

The only case we support that defines a header is QMAP endpoints.
In ipa_endpoint_init_hdr_ext() we set the endianness mask value
unconditionally, but it should not be done if there is no header
(meaning it is not configured for QMAP).

Set the endianness conditionally, and rearrange the logic in that
function slightly to avoid testing the qmap flag twice.

Delete an incorrect comment in ipa_endpoint_init_aggr().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20 11:12:23 +01:00
Jakub Kicinski
d7e6f58360 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
drivers/net/ethernet/mellanox/mlx5/core/main.c
  b33886971d ("net/mlx5: Initialize flow steering during driver probe")
  40379a0084 ("net/mlx5_fpga: Drop INNOVA TLS support")
  f2b41b32cd ("net/mlx5: Remove ipsec_ops function table")
https://lore.kernel.org/all/20220519040345.6yrjromcdistu7vh@sx1/
  16d42d3133 ("net/mlx5: Drain fw_reset when removing device")
  8324a02c34 ("net/mlx5: Add exit route when waiting for FW")
https://lore.kernel.org/all/20220519114119.060ce014@canb.auug.org.au/

tools/testing/selftests/net/mptcp/mptcp_join.sh
  e274f71540 ("selftests: mptcp: add subflow limits test-cases")
  b6e074e171 ("selftests: mptcp: add infinite map testcase")
  5ac1d2d634 ("selftests: mptcp: Add tests for userspace PM type")
https://lore.kernel.org/all/20220516111918.366d747f@canb.auug.org.au/

net/mptcp/options.c
  ba2c89e0ea ("mptcp: fix checksum byte order")
  1e39e5a32a ("mptcp: infinite mapping sending")
  ea66758c17 ("tcp: allow MPTCP to update the announced window")
https://lore.kernel.org/all/20220519115146.751c3a37@canb.auug.org.au/

net/mptcp/pm.c
  95d6865178 ("mptcp: fix subflow accounting on close")
  4d25247d3a ("mptcp: bypass in-kernel PM restrictions for non-kernel PMs")
https://lore.kernel.org/all/20220516111435.72f35dca@canb.auug.org.au/

net/mptcp/subflow.c
  ae66fb2ba6 ("mptcp: Do TCP fallback on early DSS checksum failure")
  0348c690ed ("mptcp: add the fallback check")
  f8d4bcacff ("mptcp: infinite mapping receiving")
https://lore.kernel.org/all/20220519115837.380bb8d4@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-19 11:23:59 -07:00
Alex Elder
30b338ff79 net: ipa: certain dropped packets aren't accounted for
If an RX endpoint receives packets containing status headers, and a
packet in the buffer is not dropped, ipa_endpoint_skb_copy() is
responsible for wrapping the packet data in an SKB and forwarding it
to ipa_modem_skb_rx() for further processing.

If ipa_endpoint_skb_copy() gets a null pointer from build_skb(), it
just returns early.  But in the process it doesn't record that as a
dropped packet in the network device statistics.

Instead, call ipa_modem_skb_rx() whether or not the SKB pointer is
NULL; that function ensures the statistics are properly updated.

Fixes: 1b65bbcc9a ("net: ipa: skip SKB copy if no netdev")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-13 12:01:42 +01:00
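
A minimal sketch of the accounting pattern the fix relies on, with
hypothetical types standing in for the driver's own: one function owns
the statistics, so a NULL skb (the caller's allocation failure) is still
counted as a drop.

  #include <stddef.h>
  #include <stdint.h>

  struct example_stats  { uint64_t rx_packets; uint64_t rx_dropped; };
  struct example_netdev { struct example_stats stats; };
  struct example_skb    { size_t len; };

  /* Called whether or not the skb was successfully built. */
  static void example_skb_rx(struct example_netdev *ndev, struct example_skb *skb)
  {
          if (!skb) {
                  ndev->stats.rx_dropped++;   /* accounted, not silently lost */
                  return;
          }
          ndev->stats.rx_packets++;
          /* ... hand the skb to the network stack here ... */
  }
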
Alex Elder
c5794097b2 net: ipa: compute proper aggregation limit
The aggregation byte limit for an endpoint is currently computed
based on the endpoint's receive buffer size.

However, some bytes at the front of each receive buffer are reserved
on the assumption that--as with SKBs--it might be useful to insert
data (such as headers) before what lands in the buffer.

The aggregation byte limit currently doesn't take into account that
reserved space, and as a result, aggregation could require space
past that which is available in the buffer.

Fix this by reducing the size used to compute the aggregation byte
limit by the NET_SKB_PAD offset reserved for each receive buffer.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-04-25 11:27:56 +01:00
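
A worked example of the corrected computation, assuming an 8 KB buffer
with 64 bytes reserved at the front (NET_SKB_PAD plays that role in the
kernel; both numbers here are illustrative):

  #include <stdint.h>

  /* Only the space after the reserved headroom may receive aggregated data. */
  static uint32_t example_aggr_limit_kb(uint32_t buffer_size, uint32_t headroom)
  {
          return (buffer_size - headroom) / 1024;
  }

  /* example_aggr_limit_kb(8192, 64) == 7, whereas 8192 / 1024 == 8 would
   * overcommit the buffer by up to the reserved headroom. */
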
Alex Elder
9654d8c462 net: ipa: determine replenish doorbell differently
Rather than tracking the number of receive buffer transactions that
have been submitted without a doorbell, just track the total number
of transactions that have been issued.  Then ring the doorbell when
that number modulo the replenish batch size is 0.

The effect is roughly the same, but the new count is slightly more
interesting, and this approach will someday allow the replenish
batch size to be tuned at runtime.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
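
The doorbell policy described above reduces to a modulo test on the
running total; a sketch with an illustrative batch size:

  #include <stdbool.h>
  #include <stdint.h>

  #define EXAMPLE_REPLENISH_BATCH 16U   /* illustrative; tunable someday */

  /* Ring the doorbell on every EXAMPLE_REPLENISH_BATCH-th transaction. */
  static bool example_ring_doorbell(uint64_t total_transactions)
  {
          return total_transactions % EXAMPLE_REPLENISH_BATCH == 0;
  }
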
Alex Elder
5d6ac24fb1 net: ipa: replenish after delivering payload
Replenishing is now solely driven by whether transactions are
available for a channel, and it doesn't really matter whether
we replenish before or after we deliver received packets to the
network stack.

Replenishing before delivering the payload adds a little latency.
Eliminate that by requesting a replenish after the payload is
delivered.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
09b337deda net: ipa: kill replenish_backlog
We no longer use the replenish_backlog atomic variable to decide
when we've got work to do providing receive buffers to hardware.
Basically, we try to keep the hardware as full as possible, all the
time.  We keep supplying buffers until the hardware has no more
space for them.

As a result, we can get rid of the replenish_backlog field and the
atomic operations performed on it.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
5fc7f9ba2e net: ipa: introduce gsi_channel_trans_idle()
Create a new function that returns true if all transactions for a
channel are available for use.

Use it in ipa_endpoint_replenish_enable() to see whether to start
replenishing, and in ipa_endpoint_replenish() to determine whether
it's necessary after a failure to schedule delayed work to ensure a
future replenish attempt occurs.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
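
A sketch of what "idle" means here, using hypothetical bookkeeping
fields rather than the GSI layer's actual transaction pool:

  #include <stdbool.h>
  #include <stdint.h>

  struct example_channel {
          uint32_t trans_count;    /* transactions in the channel's pool */
          uint32_t trans_in_use;   /* currently allocated or committed */
  };

  /* True when every transaction is available, i.e. none are in flight. */
  static bool example_channel_trans_idle(const struct example_channel *channel)
  {
          return channel->trans_in_use == 0;
  }
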
Alex Elder
d0ac30e74e net: ipa: don't use replenish_backlog
Rather than determining when to stop replenishing using the
replenish backlog, just stop when we have exhausted all available
transactions.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
6a606b9015 net: ipa: allocate transaction in replenish loop
When replenishing, have ipa_endpoint_replenish() allocate a
transaction, and pass that to ipa_endpoint_replenish_one() to fill.
Then, if that produces no error, commit the transaction within the
replenish loop as well.  In this way we can distinguish between
transaction failures and buffer allocation/mapping failures.

Failure to allocate a transaction simply means the hardware already
has as many receive buffers as it can hold.  In that case we can
break out of the replenish loop because there's nothing more to do.

If we fail to allocate or map pages for the receive buffer, just
try again later.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
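
A toy model of the reshaped loop, with a fixed counter standing in for
the hardware transaction pool and malloc() standing in for page
allocation; neither is the driver's real mechanism.

  #include <stdbool.h>
  #include <stdlib.h>

  #define EXAMPLE_TRANS_POOL 8U
  static unsigned int example_trans_in_use;

  static bool example_trans_alloc(void)
  {
          if (example_trans_in_use >= EXAMPLE_TRANS_POOL)
                  return false;              /* hardware already holds all buffers */
          example_trans_in_use++;
          return true;
  }

  static void example_replenish(void)
  {
          while (example_trans_alloc()) {
                  void *page = malloc(8192); /* receive buffer */

                  if (!page) {
                          example_trans_in_use--;  /* give the transaction back */
                          return;                  /* and retry later instead */
                  }
                  /* fill and commit the transaction here; the buffer is now
                   * owned by the (modeled) hardware until it completes */
          }
  }
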
Alex Elder
b9dbabc5ca net: ipa: decide on doorbell in replenish loop
Decide whether the doorbell should be signaled when committing a
replenish transaction in the main replenish loop, rather than in
ipa_endpoint_replenish_one().  This is a step to facilitate the
next patch.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:08 +00:00
Alex Elder
4b22d84195 net: ipa: increment backlog in replenish caller
Three spots call ipa_endpoint_replenish(), and just one of those
requests that the backlog be incremented after completing the
replenish operation.

Instead, have the caller increment the backlog, and get rid of the
add_one argument to ipa_endpoint_replenish().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:07 +00:00
Alex Elder
b4061c136b net: ipa: allocate transaction before pages when replenishing
A transaction allocation failure occurs only if no more transactions
are available for an endpoint, and checking for that is very cheap.

When replenishing an RX endpoint buffer, there's no point in
allocating pages if transactions are exhausted.  So don't bother
doing so unless the transaction allocation succeeds.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:07 +00:00
Alex Elder
a9bec7ae70 net: ipa: kill replenish_saved
The replenish_saved field keeps track of the number of times a new
buffer is added to the backlog when replenishing is disabled.  We
don't really use it though, so there's no need for us to track it
separately.  Whether replenishing is enabled or not, we can simply
increment the backlog.

Get rid of replenish_saved, and initialize and increment the backlog
where it would have otherwise been used.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04 10:16:07 +00:00
Alex Elder
ed23f02680 net: ipa: define per-endpoint receive buffer size
Allow RX endpoints to have differing receive buffer sizes.  Define
the receive buffer size in the configuration data, and use that
rather than IPA_RX_BUFFER_SIZE when configuring the endpoint.

Add verification in ipa_endpoint_data_valid_one() that the receive
buffer specified for AP RX endpoints is both big enough to handle at
least one full packet, and not so big in an aggregating endpoint
that its size can't be represented when programming the hardware.
Move aggr_byte_limit_max() up in "ipa_endpoint.c" so it can be used
earlier in the file without a forward-reference.

Initially we'll just keep the 8KB receive buffer size already in use
for all AP RX endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-02 21:13:45 -08:00
Alex Elder
998c0bd2b3 net: ipa: prevent concurrent replenish
We have seen cases where an endpoint RX completion interrupt arrives
while replenishing for the endpoint is underway.  This causes another
instance of replenishing to begin as part of completing the receive
transaction.  If this occurs it can lead to transaction corruption.

Use a new flag to ensure only one replenish instance for an endpoint
executes at a time.

Fixes: 84f9bd12d4 ("soc: qcom: ipa: IPA endpoints")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-12 14:39:53 +00:00
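
The guard described above amounts to a test-and-set on a per-endpoint
flag; a standalone sketch using C11 atomics in place of the kernel's
test_and_set_bit():

  #include <stdatomic.h>

  static atomic_flag example_replenish_active = ATOMIC_FLAG_INIT;

  static void example_replenish_guarded(void)
  {
          /* Claim the flag; if another context already holds it, back off. */
          if (atomic_flag_test_and_set(&example_replenish_active))
                  return;

          /* ... submit receive buffers to the hardware ... */

          atomic_flag_clear(&example_replenish_active);
  }
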
Alex Elder
c1aaa01dbf net: ipa: use a bitmap for endpoint replenish_enabled
Define a new replenish_flags bitmap to contain Boolean flags
associated with an endpoint's replenishing state.  Replace the
replenish_enabled field with a flag in that bitmap.  This is to
prepare for the next patch, which adds another flag.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-12 14:39:53 +00:00
Alex Elder
6c0e3b5ce9 net: ipa: fix atomic update in ipa_endpoint_replenish()
In ipa_endpoint_replenish(), if an error occurs when attempting to
replenish a receive buffer, we just quit and try again later.  In
that case we increment the backlog count to reflect that the attempt
was unsuccessful.  Then, if the add_one flag was true we increment
the backlog again.

This second increment is not included in the backlog local variable
though, and its value determines whether delayed work should be
scheduled.  This is a bug.

Fix this by determining whether 1 or 2 should be added to the
backlog before adding it in an atomic_add_return() call.

Reviewed-by: Matthias Kaehlcke <mka@chromium.org>
Fixes: 84f9bd12d4 ("soc: qcom: ipa: IPA endpoints")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-12 14:39:53 +00:00
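
The essence of the fix, sketched with C11 atomics standing in for
atomic_add_return(): fold both increments into a single atomic add and
make the follow-up decision on the value that add produced.

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Returns the backlog value after the update, like atomic_add_return(). */
  static int example_bump_backlog(atomic_int *backlog, bool add_one)
  {
          int delta = add_one ? 2 : 1;

          /* atomic_fetch_add() returns the old value; add delta for the new one */
          return atomic_fetch_add(backlog, delta) + delta;
  }

The caller then uses the returned value when deciding whether to
schedule delayed work.
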
Jakub Kicinski
93d5404e89 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
drivers/net/ipa/ipa_main.c
  8afc7e471a ("net: ipa: separate disabling setup from modem stop")
  76b5fbcd6b ("net: ipa: kill ipa_modem_init()")

Duplicated include, drop one.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-26 13:45:19 -08:00
Alex Elder
4c9d631adb net: ipa: introduce channel flow control
One quirk for certain versions of IPA is that endpoint DELAY mode
does not work properly.  IPA DELAY mode prevents any packets from
being delivered to the IPA core for processing on a TX endpoint.
The AP uses DELAY mode when the modem crashes, to prevent modem TX
endpoints from generating traffic during crash recovery.  Without
this, there is a chance the hardware will stall during recovery from
a modem crash.

To achieve a similar effect, a GSI FLOW_CONTROLLED channel state
was created.  A STARTED TX channel can be placed in FLOW_CONTROLLED
state, which prevents the transfer of any more packets.  A channel
in FLOW_CONTROLLED state can be either returned to STARTED state, or
can be transitioned to STOPPED state.

Because this operates on GSI channels, two generic commands were
added to allow the AP to control this state for modem channels
(similar to the ALLOCATE and HALT channel commands).

Previously the code assumed this quirk only applied to IPA v4.2.
In fact, channel flow control (rather than endpoint DELAY mode)
should be used for all versions *starting* with IPA v4.2.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-25 20:03:20 -08:00
Alex Elder
1b65bbcc9a net: ipa: skip SKB copy if no netdev
In ipa_endpoint_skb_copy(), a new socket buffer structure is
allocated so that some data can be copied into it.  However, after
doing this, if the endpoint has a null netdev pointer, we just drop
free the socket buffer.

Instead, check endpoint->netdev pointer first, and just return early
if it's null.  Also return early if the SKB allocation fails, to
avoid the deeper indentation in the normal path.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-25 19:37:34 -08:00
Alex Elder
01c36637ae net: ipa: explicitly disable HOLB drop during setup
During setup, ipa_endpoint_program() programs each endpoint with
various configuration parameters.  One of those registers defines
whether to drop packets when a head-of-line blocking condition is
detected on an RX endpoint.  We currently assume this is disabled;
instead, explicitly set it to be disabled.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-25 19:37:34 -08:00
Alex Elder
e6aab6b9b6 net: ipa: rework how HOL_BLOCK handling is specified
The head-of-line block (HOLB) drop timer is only meaningful when
dropping packets due to blocking is enabled.  Given that, redefine
the interface so the timer is specified when enabling HOLB drop, and
use a different function when disabling.

To enable and disable HOLB drop, these functions will now be used:
  ipa_endpoint_init_hol_block_enable(endpoint, milliseconds)
  ipa_endpoint_init_hol_block_disable(endpoint)

The existing ipa_endpoint_init_hol_block_enable() becomes a helper
function, renamed ipa_endpoint_init_hol_block_en(), and used with
ipa_endpoint_init_hol_block_timer() to enable HOLB drop on an
endpoint.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-25 19:37:34 -08:00
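
A sketch of how the reworked entry points might fit together; the
structure, field names, and helper bodies are placeholders, not the
driver's registers:

  #include <stdbool.h>
  #include <stdint.h>

  struct example_endpoint {
          uint32_t holb_timer_ms;
          bool holb_drop;
  };

  static void example_hol_block_en(struct example_endpoint *endpoint, bool enable)
  {
          endpoint->holb_drop = enable;        /* register write in the driver */
  }

  static void example_hol_block_timer(struct example_endpoint *endpoint, uint32_t ms)
  {
          endpoint->holb_timer_ms = ms;        /* register write in the driver */
  }

  /* Enable takes the timer value; disable needs no timer at all. */
  static void example_hol_block_enable(struct example_endpoint *endpoint, uint32_t ms)
  {
          example_hol_block_timer(endpoint, ms);
          example_hol_block_en(endpoint, true);
  }

  static void example_hol_block_disable(struct example_endpoint *endpoint)
  {
          example_hol_block_en(endpoint, false);
  }
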
Alex Elder
e4e9bfb7c9 net: ipa: kill ipa_cmd_pipeline_clear()
Calling ipa_cmd_pipeline_clear() after stopping the channel
underlying the AP<-modem RX endpoint can lead to a deadlock.

This occurs in the ->runtime_suspend device power operation for the
IPA driver.  While this callback is in progress, any other requests
for power will block until the callback returns.

Stopping the AP<-modem RX channel does not prevent the modem from
sending another packet to this endpoint.  If a packet arrives for an
RX channel when the channel is stopped, a SUSPEND IPA interrupt
condition will be pending.  Handling an IPA interrupt requires
power, so ipa_isr_thread() calls pm_runtime_get_sync() first thing.

The problem occurs because a "pipeline clear" command will not
complete while such a SUSPEND interrupt condition exists.  So the
SUSPEND IPA interrupt handler won't proceed until it gets power;
that won't happen until the ->runtime_suspend callback (and its
"pipeline clear" command) completes; and that can't happen while
the SUSPEND interrupt condition exists.

It turns out that in this case there is no need to use the "pipeline
clear" command.  There are scenarios in which clearing the pipeline
is required while suspending, but those are not (yet) supported
upstream.  So a simple fix, avoiding the potential deadlock, is to
stop calling ipa_cmd_pipeline_clear() in ipa_endpoint_suspend().
This removes the only user of ipa_cmd_pipeline_clear(), so get rid
of that function.  It can be restored again whenever it's needed.

This is basically a manual revert along with an explanation for
commit 6cb63ea6a3 ("net: ipa: introduce ipa_cmd_tag_process()").

Fixes: 6cb63ea6a3 ("net: ipa: introduce ipa_cmd_tag_process()")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-11-23 12:26:40 +00:00
Alex Elder
816316caca net: ipa: disable HOLB drop when updating timer
The head-of-line blocking timer should only be modified when
head-of-line drop is disabled.

One of the steps in recovering from a modem crash is to enable
dropping of packets with timeout of 0 (immediate).  We don't know
how the modem configured its endpoints, so before we program the
timer, we need to ensure HOL_BLOCK is disabled.

Fixes: 84f9bd12d4 ("soc: qcom: ipa: IPA endpoints")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-11-15 13:25:45 +00:00
Alex Elder
6e228d8cbb net: ipa: HOLB register sometimes must be written twice
Starting with IPA v4.5, the HOL_BLOCK_EN register must be written
twice when enabling head-of-line blocking avoidance.

Fixes: 84f9bd12d4 ("soc: qcom: ipa: IPA endpoints")
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-11-15 13:25:45 +00:00
Alex Elder
2775cbc5af net: ipa: rename "ipa_clock.c"
Finally, rename "ipa_clock.c" to be "ipa_power.c" and "ipa_clock.h"
to be "ipa_power.h".

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 09:44:17 +01:00
Alex Elder
7aa0e8b8bd net: ipa: rename ipa_clock_* symbols
Rename a number of functions to clarify that there is no longer a
notion of an "IPA clock," but rather that the functions are more
generally related to IPA power management.

  ipa_clock_enable() -> ipa_power_enable()
  ipa_clock_disable() -> ipa_power_disable()
  ipa_clock_rate() -> ipa_core_clock_rate()
  ipa_clock_init() -> ipa_power_init()
  ipa_clock_exit() -> ipa_power_exit()

Rename the ipa_clock structure to be ipa_power.  Rename all
variables and fields using that structure type "power" rather
than "clock".

Rename the ipa_clock_data structure to be ipa_power_data, and more
broadly, just substitute "power" for "clock" in places that
previously represented things related to the "IPA clock".

Update comments throughout.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-22 09:44:17 +01:00
Alex Elder
decfef0fa6 net: ipa: use gsi->version for channel suspend/resume
The GSI layer has the IPA version now, so there's no need for
version-specific flags to be passed from IPA.  One instance of
this is in gsi_channel_suspend() and gsi_channel_resume(), which
indicate whether or not the endpoint suspend is implemented by
GSI stopping the channel.  We can make that determination based
on gsi->version, eliminating the need for a Boolean flag in those
functions.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-04 10:12:05 +01:00
Alex Elder
5bc5588466 net: ipa: use WARN_ON() rather than assertions
I've added commented assertions to record certain properties that
can be assumed to hold in certain places in the IPA code.  Convert
these into real WARN_ON() calls so the assertions are actually
checked, using the standard WARN_ON() mechanism.

Where errors can be returned, return an error if a warning is
triggered.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-07-26 22:38:11 +01:00
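
A self-contained illustration of the conversion pattern, using a toy
macro in place of the kernel's WARN_ON() (which likewise evaluates to
the tested condition, so it can gate an error return):

  #include <stdio.h>

  #define EXAMPLE_WARN_ON(cond) \
          ((cond) ? (fprintf(stderr, "warning: %s\n", #cond), 1) : 0)

  static int example_endpoint_config(unsigned int count, unsigned int max)
  {
          /* Previously a commented assertion; now checked, and an error
           * is returned because this caller can fail. */
          if (EXAMPLE_WARN_ON(count > max))
                  return -1;

          /* ... proceed with configuration ... */
          return 0;
  }
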
Alex Elder
110971d1ee net: ipa: FLAVOR_0 register doesn't exist until IPA v3.5
The FLAVOR_0 register first appears in IPA v3.5, so avoid attempting
to read it for versions prior to that.

This register contains a concise definition of the number and
direction of endpoints supported by the hardware, and without it
we can't verify endpoint configuration in ipa_endpoint_config().
In this case, just indicate that any endpoint number is available
for use.

Originally proposed by AngeloGioacchino Del Regno.

Link: https://lore.kernel.org/netdev/20210211175015.200772-3-angelogioacchino.delregno@somainline.org
Signed-off-by: Alex Elder <elder@linaro.org>
Acked-by: AngeloGioacchino Del Regno
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-21 12:30:59 -07:00
Alex Elder
9e8fb7bf9c net: ipa: make endpoint data validation unconditional
The cost of validating the endpoint configuration data is not all
that high, so just do it unconditionally, rather than doing so only
when IPA_VALIDATION is defined.

Suggested-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-11 14:13:18 -07:00
Alex Elder
d15ec19333 Revert "net: ipa: disable checksum offload for IPA v4.5+"
This reverts commit c88c34fcf8.

The RMNet driver now supports inline checksum offload.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-03 15:09:40 -07:00
Alex Elder
5567d4d9e7 net: ipa: add support for inline checksum offload
Starting with IPA v4.5, IP payload checksum offload is implemented
differently.

Prior to v4.5, the IPA hardware appends an rmnet_map_dl_csum_trailer
structure to each packet if checksum offload is enabled in the
download direction (modem->AP).  In the upload direction (AP->modem)
a rmnet_map_ul_csum_header structure is prepended before each sent
packet.

Starting with IPA v4.5, checksum offload is implemented using a
single new rmnet_map_v5_csum_header structure which sits between
the QMAP header and the packet data.  The same header structure
is used in both directions.

The new header contains a header type (CSUM_OFFLOAD); a checksum
flag; and a flag indicating whether any other headers follow this
one.  The checksum flag indicates whether the hardware should
compute (and insert) the checksum on a sent packet.  On a received
packet the checksum flag indicates whether the hardware confirms the
checksum value in the payload is correct.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-06-03 15:09:40 -07:00
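
Per the description above, the packet layout with inline (v5) checksum
offload looks roughly like this (widths not to scale):

    +-------------+---------------------------+---------------------
    | QMAP header | rmnet_map_v5_csum_header  | IP packet payload ...
    +-------------+---------------------------+---------------------

The same layout is used in both the transmit and receive directions.
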
Alex Elder
c88c34fcf8 net: ipa: disable checksum offload for IPA v4.5+
Checksum offload for IPA v4.5+ is implemented differently, using
"inline" offload (which uses a common header format for both upload
and download offload).

The IPA hardware must be programmed to enable MAP checksum offload,
but the RMNet driver is responsible for interpreting checksum
metadata supplied with messages.

Currently, the RMNet driver does not support inline checksum offload.
This support is imminent, but until it is available, do not allow
newer versions of IPA to specify checksum offload for endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-04-11 16:49:08 -07:00
Alex Elder
602a1c76f8 net: ipa: three small fixes
Some time ago changes were made to stop referring to clearing the
hardware pipeline as a "tag process."  Fix a comment to use the
newer terminology.

Get rid of a pointless double-negation of the Boolean toward_ipa
flag in ipa_endpoint_config().

Make ipa_endpoint_exit_one() private; it's only referenced inside
"ipa_endpoint.c".

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-04-09 20:57:26 -07:00
Peng Li
497abc87cf net: ipa: remove repeated words
Remove repeated words "that" and "the".

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Acked-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-30 16:56:39 -07:00
Alex Elder
647a05f3ae net: ipa: define the ENDP_INIT_NAT register
Define the ENDP_INIT_NAT register for setting up the NAT
configuration for an endpoint.  We aren't using NAT at this
time, so explicitly set the type to BYPASS for all endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-24 16:52:47 -07:00
Alex Elder
d7f3087b39 net: ipa: reduce IPA version assumptions
Modify conditional tests throughout the IPA code so they do not
assume that IPA v3.5.1 is the oldest version supported.  Also remove
assumptions that IPA v4.5 is the newest version of IPA supported.

Augment versions in comments with "+", to be clearer that the
comment applies to a version and subsequent versions.  (E.g.,
"present for IPA v4.2+" instead of just "present for v4.2".)

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-24 16:52:47 -07:00
Alex Elder
1690d8a75d net: ipa: sequencer type is for TX endpoints only
We only program the sequencer type for TX endpoints.  So move the
definition of the sequencer type fields into the TX-specific portion
of the endpoint configuration data.  There's no need to maintain
this in the IPA structure; we can extract it from the configuration
data it points to in the one spot it's needed.

We previously specified the sequencer type for RX endpoints with
INVALID values.  These are no longer needed, so get rid of them.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-20 18:56:18 -07:00
Alex Elder
8ee5df6598 net: ipa: split sequencer type in two
An IPA endpoint has a sequencer that must be configured based on how
the endpoint is to be used.  Currently the IPA code programs the
sequencer type by splitting a value into four 4-bit nibbles.  Doing
that doesn't really add much value, and regardless, a better way of
splitting the sequencer type is into two halves--the lower byte
describing how normal packet processing is handled, and the next
byte describing information about processing replicas.

So split the sequencer type into two sub-parts:  the sequencer type
and the replication sequencer type.  Define the values supported for
the "main" sequencer type, and define the values supported for the
replication part separately.

In addition, the sequencer type names are quite verbose, encoding
what the type includes, but also what it *excludes*.  Rename the
sequencer types in a way that mainly describes the number of passes
that a packet takes through the IPA processing pipeline, and how
many of those passes end by supplying the processed packet to the
microprocessor.

The result expands the supported types beyond what is required for
now, but simplifies the way these are defined.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-20 18:56:18 -07:00
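
A sketch of the split, taking the commit's description literally (the
names and the 16-bit container are illustrative):

  #include <stdint.h>

  /* Low byte: how normal packet processing is handled.
   * Next byte: information about processing replicas.
   */
  static void example_split_seq(uint16_t seq, uint8_t *seq_type, uint8_t *seq_rep_type)
  {
          *seq_type     = seq & 0xff;
          *seq_rep_type = (seq >> 8) & 0xff;
  }
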
Alex Elder
4873537430 net: ipa: get rid of status size constraint
There is a build-time check that the packet status structure is a
multiple of 4 bytes in size.  It's not clear where that constraint
comes from, but the structure defines what hardware provides so its
definition won't change.  Get rid of the check; it adds no value.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-02-06 14:57:19 -08:00
Alex Elder
9af5ccf323 net: ipa: use a Boolean rather than count when replenishing
The count argument to ipa_endpoint_replenish() is only ever 0 or 1,
and always will be (because we always handle each receive buffer in
a single transaction).  Rename the argument to be add_one and change
it to be Boolean.

Update the function description to reflect the current code.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-02-06 14:57:19 -08:00
Jakub Kicinski
d1e1355aef Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-02-02 14:21:31 -08:00