Merge tag 'net-6.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter, wireless and bluetooth.

  Kalle Valo steps down after serving as the WiFi driver maintainer for
  over a decade.
  Current release - fix to a fix:

   - vsock: orphan socket after transport release, avoid null-deref

   - Bluetooth: L2CAP: fix corrupted list in hci_chan_del

  Current release - regressions:

   - eth:
      - stmmac: correct Rx buffer layout when SPH is enabled
      - iavf: fix a locking bug in an error path

   - rxrpc: fix alteration of headers whilst zerocopy pending

   - s390/qeth: move netif_napi_add_tx() and napi_enable() from under BH

   - Revert "netfilter: flowtable: teardown flow if cached mtu is stale"

  Current release - new code bugs:

   - rxrpc: fix ipv6 path MTU discovery, only ipv4 worked

   - pse-pd: fix deadlock in current limit functions

  Previous releases - regressions:

   - rtnetlink: fix netns refleak with rtnl_setlink()

   - wifi: brcmfmac: use random seed flag for BCM4355 and BCM4364
     firmware

  Previous releases - always broken:

   - add missing RCU protection of struct net throughout the stack

   - can: rockchip: bail out if skb cannot be allocated

   - eth: ti: am65-cpsw: base XDP support fixes

  Misc:

   - ethtool: tsconfig: update the format of hwtstamp flags, changes the
     uAPI but this uAPI was not in any release yet"

* tag 'net-6.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (72 commits)
  net: pse-pd: Fix deadlock in current limit functions
  rxrpc: Fix ipv6 path MTU discovery
  Reapply "net: skb: introduce and use a single page frag cache"
  s390/qeth: move netif_napi_add_tx() and napi_enable() from under BH
  mlxsw: Add return value check for mlxsw_sp_port_get_stats_raw()
  ipv6: mcast: add RCU protection to mld_newpack()
  team: better TEAM_OPTION_TYPE_STRING validation
  Bluetooth: L2CAP: Fix corrupted list in hci_chan_del
  Bluetooth: btintel_pcie: Fix a potential race condition
  Bluetooth: L2CAP: Fix slab-use-after-free Read in l2cap_send_cmd
  net: ethernet: ti: am65_cpsw: fix tx_cleanup for XDP case
  net: ethernet: ti: am65-cpsw: fix RX & TX statistics for XDP_TX case
  net: ethernet: ti: am65-cpsw: fix memleak in certain XDP cases
  vsock/test: Add test for SO_LINGER null ptr deref
  vsock: Orphan socket after transport release
  MAINTAINERS: Add sctp headers to the general netdev entry
  Revert "netfilter: flowtable: teardown flow if cached mtu is stale"
  iavf: Fix a locking bug in an error path
  rxrpc: Fix alteration of headers whilst zerocopy pending
  net: phylink: make configuring clock-stop dependent on MAC support
  ...
commit 348f968b89

75 changed files with 709 additions and 458 deletions
.mailmap
@@ -376,6 +376,7 @@ Juha Yrjola <juha.yrjola@solidboot.com>
 Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
 Iskren Chernev <me@iskren.info> <iskren.chernev@gmail.com>
 Kalle Valo <kvalo@kernel.org> <kvalo@codeaurora.org>
+Kalle Valo <kvalo@kernel.org> <quic_kvalo@quicinc.com>
 Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org>
 Karthikeyan Periyasamy <quic_periyasa@quicinc.com> <periyasa@codeaurora.org>
 Kathiravan T <quic_kathirav@quicinc.com> <kathirav@codeaurora.org>
Documentation/devicetree/bindings/net/wireless/qcom,ath10k.yaml
@@ -7,7 +7,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies ath10k wireless devices
 
 maintainers:
-  - Kalle Valo <kvalo@kernel.org>
   - Jeff Johnson <jjohnson@kernel.org>
 
 description:
Documentation/devicetree/bindings/net/wireless/qcom,ath11k-pci.yaml
@@ -8,7 +8,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies ath11k wireless devices (PCIe)
 
 maintainers:
-  - Kalle Valo <kvalo@kernel.org>
   - Jeff Johnson <jjohnson@kernel.org>
 
 description: |
Documentation/devicetree/bindings/net/wireless/qcom,ath11k.yaml
@@ -8,7 +8,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies ath11k wireless devices
 
 maintainers:
-  - Kalle Valo <kvalo@kernel.org>
   - Jeff Johnson <jjohnson@kernel.org>
 
 description: |
Documentation/devicetree/bindings/net/wireless/qcom,ath12k-wsi.yaml
@@ -9,7 +9,6 @@ title: Qualcomm Technologies ath12k wireless devices (PCIe) with WSI interface
 
 maintainers:
   - Jeff Johnson <jjohnson@kernel.org>
-  - Kalle Valo <kvalo@kernel.org>
 
 description: |
   Qualcomm Technologies IEEE 802.11be PCIe devices with WSI interface.
Documentation/devicetree/bindings/net/wireless/qcom,ath12k.yaml
@@ -9,7 +9,6 @@ title: Qualcomm Technologies ath12k wireless devices (PCIe)
 
 maintainers:
   - Jeff Johnson <quic_jjohnson@quicinc.com>
-  - Kalle Valo <kvalo@kernel.org>
 
 description:
   Qualcomm Technologies IEEE 802.11be PCIe devices.
Documentation/netlink/specs/ethtool.yaml
@@ -1524,7 +1524,8 @@ attribute-sets:
         nested-attributes: bitset
       -
         name: hwtstamp-flags
-        type: u32
+        type: nest
+        nested-attributes: bitset

operations:
  enum-model: directional
Documentation/networking/iso15765-2.rst
@@ -369,8 +369,8 @@ to their default.
 
     addr.can_family = AF_CAN;
     addr.can_ifindex = if_nametoindex("can0");
-    addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
-    addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
+    addr.can_addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
+    addr.can_addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
 
     ret = bind(s, (struct sockaddr *)&addr, sizeof(addr));
     if (ret < 0)
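The corrected field path matters in practice: the shorthand "addr.tp" never existed in the uapi, only the can_addr union does. A minimal, self-contained user-space sketch of an ISO-TP socket bind using the fixed members (the interface name "can0" and the 29-bit IDs are illustrative only, taken from the documentation excerpt above):

/* Hypothetical user-space sketch: bind an ISO-TP socket via the
 * corrected addr.can_addr.tp fields. Not the kernel's test program.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
	struct sockaddr_can addr;
	int s, ret;

	s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
	if (s < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.can_family = AF_CAN;
	addr.can_ifindex = if_nametoindex("can0");
	/* can_addr.tp, not a bare tp member, is the real uapi layout */
	addr.can_addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
	addr.can_addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;

	ret = bind(s, (struct sockaddr *)&addr, sizeof(addr));
	if (ret < 0)
		perror("bind");

	close(s);
	return ret < 0;
}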
MAINTAINERS
@@ -3654,7 +3654,6 @@ F:	Documentation/devicetree/bindings/phy/phy-ath79-usb.txt
 F:	drivers/phy/qualcomm/phy-ath79-usb.c
 
 ATHEROS ATH GENERIC UTILITIES
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
@@ -16438,7 +16437,7 @@ X:	drivers/net/can/
 X:	drivers/net/wireless/
 
 NETWORKING DRIVERS (WIRELESS)
-M:	Kalle Valo <kvalo@kernel.org>
+M:	Johannes Berg <johannes@sipsolutions.net>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 W:	https://wireless.wiki.kernel.org/
@@ -16509,6 +16508,7 @@ F:	include/linux/netdev*
 F:	include/linux/netlink.h
 F:	include/linux/netpoll.h
 F:	include/linux/rtnetlink.h
+F:	include/linux/sctp.h
 F:	include/linux/seq_file_net.h
 F:	include/linux/skbuff*
 F:	include/net/
@@ -16525,6 +16525,7 @@ F:	include/uapi/linux/netdev*
 F:	include/uapi/linux/netlink.h
 F:	include/uapi/linux/netlink_diag.h
 F:	include/uapi/linux/rtnetlink.h
+F:	include/uapi/linux/sctp.h
 F:	lib/net_utils.c
 F:	lib/random32.c
 F:	net/
@@ -19355,7 +19356,6 @@ Q:	http://patchwork.linuxtv.org/project/linux-media/list/
 F:	drivers/media/tuners/qt1010*
 
 QUALCOMM ATH12K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	ath12k@lists.infradead.org
 S:	Supported
@@ -19365,7 +19365,6 @@ F:	drivers/net/wireless/ath/ath12k/
 N:	ath12k
 
 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	ath10k@lists.infradead.org
 S:	Supported
@@ -19375,7 +19374,6 @@ F:	drivers/net/wireless/ath/ath10k/
 N:	ath10k
 
 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	ath11k@lists.infradead.org
 S:	Supported
drivers/bluetooth/btintel_pcie.c
@@ -1320,6 +1320,10 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
 			if (opcode == 0xfc01)
 				btintel_pcie_inject_cmd_complete(hdev, opcode);
 		}
+		/* Firmware raises alive interrupt on HCI_OP_RESET */
+		if (opcode == HCI_OP_RESET)
+			data->gp0_received = false;
+
 		hdev->stat.cmd_tx++;
 		break;
 	case HCI_ACLDATA_PKT:
@@ -1357,7 +1361,6 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
 			 opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
 			 btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
 		if (opcode == HCI_OP_RESET) {
-			data->gp0_received = false;
 			ret = wait_event_timeout(data->gp0_wait_q,
 						 data->gp0_received,
 						 msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
drivers/net/can/c_can/c_can_platform.c
@@ -385,15 +385,16 @@ static int c_can_plat_probe(struct platform_device *pdev)
 	if (ret) {
 		dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
 			KBUILD_MODNAME, ret);
-		goto exit_free_device;
+		goto exit_pm_runtime;
 	}
 
 	dev_info(&pdev->dev, "%s device registered (regs=%p, irq=%d)\n",
 		 KBUILD_MODNAME, priv->base, dev->irq);
 	return 0;
 
-exit_free_device:
+exit_pm_runtime:
 	pm_runtime_disable(priv->device);
+exit_free_device:
 	free_c_can_dev(dev);
 exit:
 	dev_err(&pdev->dev, "probe failed\n");
|
|||
}
|
||||
break;
|
||||
case CAN_STATE_ERROR_ACTIVE:
|
||||
cf->can_id |= CAN_ERR_CNT;
|
||||
cf->data[1] = CAN_ERR_CRTL_ACTIVE;
|
||||
cf->data[6] = bec.txerr;
|
||||
cf->data[7] = bec.rxerr;
|
||||
if (skb) {
|
||||
cf->can_id |= CAN_ERR_CNT;
|
||||
cf->data[1] = CAN_ERR_CRTL_ACTIVE;
|
||||
cf->data[6] = bec.txerr;
|
||||
cf->data[7] = bec.rxerr;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
netdev_warn(ndev, "unhandled error state (%d:%s)!\n",
|
||||
|
|
|
@ -622,7 +622,7 @@ rkcanfd_handle_rx_fifo_overflow_int(struct rkcanfd_priv *priv)
|
|||
netdev_dbg(priv->ndev, "RX-FIFO overflow\n");
|
||||
|
||||
skb = rkcanfd_alloc_can_err_skb(priv, &cf, ×tamp);
|
||||
if (skb)
|
||||
if (!skb)
|
||||
return 0;
|
||||
|
||||
rkcanfd_get_berr_counter_corrected(priv, &bec);
|
||||
|
|
|
@ -248,7 +248,11 @@ static int es58x_devlink_info_get(struct devlink *devlink,
|
|||
return ret;
|
||||
}
|
||||
|
||||
return devlink_info_serial_number_put(req, es58x_dev->udev->serial);
|
||||
if (es58x_dev->udev->serial)
|
||||
ret = devlink_info_serial_number_put(req,
|
||||
es58x_dev->udev->serial);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
const struct devlink_ops es58x_dl_ops = {
|
||||
|
|
|
@ -2903,8 +2903,8 @@ static void iavf_watchdog_task(struct work_struct *work)
|
|||
}
|
||||
|
||||
mutex_unlock(&adapter->crit_lock);
|
||||
netdev_unlock(netdev);
|
||||
restart_watchdog:
|
||||
netdev_unlock(netdev);
|
||||
if (adapter->state >= __IAVF_DOWN)
|
||||
queue_work(adapter->wq, &adapter->adminq_task);
|
||||
if (adapter->aq_required)
|
||||
|
|
|
@ -2159,8 +2159,13 @@ static int idpf_open(struct net_device *netdev)
|
|||
idpf_vport_ctrl_lock(netdev);
|
||||
vport = idpf_netdev_to_vport(netdev);
|
||||
|
||||
err = idpf_set_real_num_queues(vport);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
err = idpf_vport_open(vport);
|
||||
|
||||
unlock:
|
||||
idpf_vport_ctrl_unlock(netdev);
|
||||
|
||||
return err;
|
||||
|
|
|
@ -3008,8 +3008,6 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
|
|||
return -EINVAL;
|
||||
|
||||
rsc_segments = DIV_ROUND_UP(skb->data_len, rsc_seg_len);
|
||||
if (unlikely(rsc_segments == 1))
|
||||
return 0;
|
||||
|
||||
NAPI_GRO_CB(skb)->count = rsc_segments;
|
||||
skb_shinfo(skb)->gso_size = rsc_seg_len;
|
||||
|
@ -3072,6 +3070,7 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
|
|||
idpf_rx_hash(rxq, skb, rx_desc, decoded);
|
||||
|
||||
skb->protocol = eth_type_trans(skb, rxq->netdev);
|
||||
skb_record_rx_queue(skb, rxq->idx);
|
||||
|
||||
if (le16_get_bits(rx_desc->hdrlen_flags,
|
||||
VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
|
||||
|
@ -3080,8 +3079,6 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
|
|||
csum_bits = idpf_rx_splitq_extract_csum_bits(rx_desc);
|
||||
idpf_rx_csum(rxq, skb, csum_bits, decoded);
|
||||
|
||||
skb_record_rx_queue(skb, rxq->idx);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -1096,6 +1096,7 @@ static int igc_init_empty_frame(struct igc_ring *ring,
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
buffer->type = IGC_TX_BUFFER_TYPE_SKB;
|
||||
buffer->skb = skb;
|
||||
buffer->protocol = 0;
|
||||
buffer->bytecount = skb->len;
|
||||
|
@ -2701,8 +2702,9 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
|
|||
}
|
||||
|
||||
static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
|
||||
struct xdp_buff *xdp)
|
||||
struct igc_xdp_buff *ctx)
|
||||
{
|
||||
struct xdp_buff *xdp = &ctx->xdp;
|
||||
unsigned int totalsize = xdp->data_end - xdp->data_meta;
|
||||
unsigned int metasize = xdp->data - xdp->data_meta;
|
||||
struct sk_buff *skb;
|
||||
|
@ -2721,27 +2723,28 @@ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
|
|||
__skb_pull(skb, metasize);
|
||||
}
|
||||
|
||||
if (ctx->rx_ts) {
|
||||
skb_shinfo(skb)->tx_flags |= SKBTX_HW_TSTAMP_NETDEV;
|
||||
skb_hwtstamps(skb)->netdev_data = ctx->rx_ts;
|
||||
}
|
||||
|
||||
return skb;
|
||||
}
|
||||
|
||||
static void igc_dispatch_skb_zc(struct igc_q_vector *q_vector,
|
||||
union igc_adv_rx_desc *desc,
|
||||
struct xdp_buff *xdp,
|
||||
ktime_t timestamp)
|
||||
struct igc_xdp_buff *ctx)
|
||||
{
|
||||
struct igc_ring *ring = q_vector->rx.ring;
|
||||
struct sk_buff *skb;
|
||||
|
||||
skb = igc_construct_skb_zc(ring, xdp);
|
||||
skb = igc_construct_skb_zc(ring, ctx);
|
||||
if (!skb) {
|
||||
ring->rx_stats.alloc_failed++;
|
||||
set_bit(IGC_RING_FLAG_RX_ALLOC_FAILED, &ring->flags);
|
||||
return;
|
||||
}
|
||||
|
||||
if (timestamp)
|
||||
skb_hwtstamps(skb)->hwtstamp = timestamp;
|
||||
|
||||
if (igc_cleanup_headers(ring, desc, skb))
|
||||
return;
|
||||
|
||||
|
@ -2777,7 +2780,6 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
|
|||
union igc_adv_rx_desc *desc;
|
||||
struct igc_rx_buffer *bi;
|
||||
struct igc_xdp_buff *ctx;
|
||||
ktime_t timestamp = 0;
|
||||
unsigned int size;
|
||||
int res;
|
||||
|
||||
|
@ -2807,6 +2809,8 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
|
|||
*/
|
||||
bi->xdp->data_meta += IGC_TS_HDR_LEN;
|
||||
size -= IGC_TS_HDR_LEN;
|
||||
} else {
|
||||
ctx->rx_ts = NULL;
|
||||
}
|
||||
|
||||
bi->xdp->data_end = bi->xdp->data + size;
|
||||
|
@ -2815,7 +2819,7 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
|
|||
res = __igc_xdp_run_prog(adapter, prog, bi->xdp);
|
||||
switch (res) {
|
||||
case IGC_XDP_PASS:
|
||||
igc_dispatch_skb_zc(q_vector, desc, bi->xdp, timestamp);
|
||||
igc_dispatch_skb_zc(q_vector, desc, ctx);
|
||||
fallthrough;
|
||||
case IGC_XDP_CONSUMED:
|
||||
xsk_buff_free(bi->xdp);
|
||||
|
|
|
@ -2105,7 +2105,7 @@ static void ixgbe_put_rx_buffer(struct ixgbe_ring *rx_ring,
|
|||
/* hand second half of page back to the ring */
|
||||
ixgbe_reuse_rx_page(rx_ring, rx_buffer);
|
||||
} else {
|
||||
if (!IS_ERR(skb) && IXGBE_CB(skb)->dma == rx_buffer->dma) {
|
||||
if (skb && IXGBE_CB(skb)->dma == rx_buffer->dma) {
|
||||
/* the page has been released from the ring */
|
||||
IXGBE_CB(skb)->page_released = true;
|
||||
} else {
|
||||
|
|
|
@ -768,7 +768,9 @@ static void __mlxsw_sp_port_get_stats(struct net_device *dev,
|
|||
err = mlxsw_sp_get_hw_stats_by_group(&hw_stats, &len, grp);
|
||||
if (err)
|
||||
return;
|
||||
mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl);
|
||||
err = mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl);
|
||||
if (err)
|
||||
return;
|
||||
for (i = 0; i < len; i++) {
|
||||
data[data_index + i] = hw_stats[i].getter(ppcnt_pl);
|
||||
if (!hw_stats[i].cells_bytes)
|
||||
|
|
|
@ -2094,6 +2094,11 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
|
|||
pp_params.offset = stmmac_rx_offset(priv);
|
||||
pp_params.max_len = dma_conf->dma_buf_sz;
|
||||
|
||||
if (priv->sph) {
|
||||
pp_params.offset = 0;
|
||||
pp_params.max_len += stmmac_rx_offset(priv);
|
||||
}
|
||||
|
||||
rx_q->page_pool = page_pool_create(&pp_params);
|
||||
if (IS_ERR(rx_q->page_pool)) {
|
||||
ret = PTR_ERR(rx_q->page_pool);
|
||||
|
|
|
@ -828,21 +828,30 @@ static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
|
|||
static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
|
||||
{
|
||||
struct am65_cpsw_tx_chn *tx_chn = data;
|
||||
enum am65_cpsw_tx_buf_type buf_type;
|
||||
struct cppi5_host_desc_t *desc_tx;
|
||||
struct xdp_frame *xdpf;
|
||||
struct sk_buff *skb;
|
||||
void **swdata;
|
||||
|
||||
desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
|
||||
swdata = cppi5_hdesc_get_swdata(desc_tx);
|
||||
skb = *(swdata);
|
||||
am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
|
||||
buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
|
||||
if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
|
||||
skb = *(swdata);
|
||||
dev_kfree_skb_any(skb);
|
||||
} else {
|
||||
xdpf = *(swdata);
|
||||
xdp_return_frame(xdpf);
|
||||
}
|
||||
|
||||
dev_kfree_skb_any(skb);
|
||||
am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
|
||||
}
|
||||
|
||||
static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
|
||||
struct net_device *ndev,
|
||||
unsigned int len)
|
||||
unsigned int len,
|
||||
unsigned int headroom)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
|
||||
|
@ -852,7 +861,7 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
|
|||
if (unlikely(!skb))
|
||||
return NULL;
|
||||
|
||||
skb_reserve(skb, AM65_CPSW_HEADROOM);
|
||||
skb_reserve(skb, headroom);
|
||||
skb->dev = ndev;
|
||||
|
||||
return skb;
|
||||
|
@ -1169,9 +1178,11 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_rx_flow *flow,
|
|||
struct xdp_frame *xdpf;
|
||||
struct bpf_prog *prog;
|
||||
struct page *page;
|
||||
int pkt_len;
|
||||
u32 act;
|
||||
int err;
|
||||
|
||||
pkt_len = *len;
|
||||
prog = READ_ONCE(port->xdp_prog);
|
||||
if (!prog)
|
||||
return AM65_CPSW_XDP_PASS;
|
||||
|
@ -1189,8 +1200,10 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_rx_flow *flow,
|
|||
netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
|
||||
|
||||
xdpf = xdp_convert_buff_to_frame(xdp);
|
||||
if (unlikely(!xdpf))
|
||||
if (unlikely(!xdpf)) {
|
||||
ndev->stats.tx_dropped++;
|
||||
goto drop;
|
||||
}
|
||||
|
||||
__netif_tx_lock(netif_txq, cpu);
|
||||
err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
|
||||
|
@ -1199,14 +1212,14 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_rx_flow *flow,
|
|||
if (err)
|
||||
goto drop;
|
||||
|
||||
dev_sw_netstats_tx_add(ndev, 1, *len);
|
||||
dev_sw_netstats_rx_add(ndev, pkt_len);
|
||||
ret = AM65_CPSW_XDP_CONSUMED;
|
||||
goto out;
|
||||
case XDP_REDIRECT:
|
||||
if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
|
||||
goto drop;
|
||||
|
||||
dev_sw_netstats_rx_add(ndev, *len);
|
||||
dev_sw_netstats_rx_add(ndev, pkt_len);
|
||||
ret = AM65_CPSW_XDP_REDIRECT;
|
||||
goto out;
|
||||
default:
|
||||
|
@ -1315,16 +1328,8 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
|
|||
dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info);
|
||||
|
||||
dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
|
||||
|
||||
k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
|
||||
|
||||
skb = am65_cpsw_build_skb(page_addr, ndev,
|
||||
AM65_CPSW_MAX_PACKET_SIZE);
|
||||
if (unlikely(!skb)) {
|
||||
new_page = page;
|
||||
goto requeue;
|
||||
}
|
||||
|
||||
if (port->xdp_prog) {
|
||||
xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]);
|
||||
xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
|
||||
|
@ -1334,9 +1339,16 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
|
|||
if (*xdp_state != AM65_CPSW_XDP_PASS)
|
||||
goto allocate;
|
||||
|
||||
/* Compute additional headroom to be reserved */
|
||||
headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
|
||||
skb_reserve(skb, headroom);
|
||||
headroom = xdp.data - xdp.data_hard_start;
|
||||
} else {
|
||||
headroom = AM65_CPSW_HEADROOM;
|
||||
}
|
||||
|
||||
skb = am65_cpsw_build_skb(page_addr, ndev,
|
||||
AM65_CPSW_MAX_PACKET_SIZE, headroom);
|
||||
if (unlikely(!skb)) {
|
||||
new_page = page;
|
||||
goto requeue;
|
||||
}
|
||||
|
||||
ndev_priv = netdev_priv(ndev);
|
||||
|
|
|
@ -2265,12 +2265,15 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
|
|||
/* Allow the MAC to stop its clock if the PHY has the capability */
|
||||
pl->mac_tx_clk_stop = phy_eee_tx_clock_stop_capable(phy) > 0;
|
||||
|
||||
/* Explicitly configure whether the PHY is allowed to stop it's
|
||||
* receive clock.
|
||||
*/
|
||||
ret = phy_eee_rx_clock_stop(phy, pl->config->eee_rx_clk_stop_enable);
|
||||
if (ret == -EOPNOTSUPP)
|
||||
ret = 0;
|
||||
if (pl->mac_supports_eee_ops) {
|
||||
/* Explicitly configure whether the PHY is allowed to stop it's
|
||||
* receive clock.
|
||||
*/
|
||||
ret = phy_eee_rx_clock_stop(phy,
|
||||
pl->config->eee_rx_clk_stop_enable);
|
||||
if (ret == -EOPNOTSUPP)
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@ -319,7 +319,7 @@ static int pse_pi_get_current_limit(struct regulator_dev *rdev)
|
|||
goto out;
|
||||
mW = ret;
|
||||
|
||||
ret = pse_pi_get_voltage(rdev);
|
||||
ret = _pse_pi_get_voltage(rdev);
|
||||
if (!ret) {
|
||||
dev_err(pcdev->dev, "Voltage null\n");
|
||||
ret = -ERANGE;
|
||||
|
@ -356,7 +356,7 @@ static int pse_pi_set_current_limit(struct regulator_dev *rdev, int min_uA,
|
|||
|
||||
id = rdev_get_id(rdev);
|
||||
mutex_lock(&pcdev->lock);
|
||||
ret = pse_pi_get_voltage(rdev);
|
||||
ret = _pse_pi_get_voltage(rdev);
|
||||
if (!ret) {
|
||||
dev_err(pcdev->dev, "Voltage null\n");
|
||||
ret = -ERANGE;
|
||||
|
|
|
drivers/net/team/team_core.c
@@ -2639,7 +2639,9 @@ int team_nl_options_set_doit(struct sk_buff *skb, struct genl_info *info)
 			ctx.data.u32_val = nla_get_u32(attr_data);
 			break;
 		case TEAM_OPTION_TYPE_STRING:
-			if (nla_len(attr_data) > TEAM_STRING_MAX_LEN) {
+			if (nla_len(attr_data) > TEAM_STRING_MAX_LEN ||
+			    !memchr(nla_data(attr_data), '\0',
+				    nla_len(attr_data))) {
 				err = -EINVAL;
 				goto team_put;
 			}
|
|||
struct vxlan_dev *vxlan = netdev_priv(dev);
|
||||
int err;
|
||||
|
||||
if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
|
||||
vxlan_vnigroup_init(vxlan);
|
||||
if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) {
|
||||
err = vxlan_vnigroup_init(vxlan);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
err = gro_cells_init(&vxlan->gro_cells, dev);
|
||||
if (err)
|
||||
|
|
|
@ -4851,6 +4851,22 @@ static struct ath12k_reg_rule
|
|||
return reg_rule_ptr;
|
||||
}
|
||||
|
||||
static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_params *rule,
|
||||
u32 num_reg_rules)
|
||||
{
|
||||
u8 num_invalid_5ghz_rules = 0;
|
||||
u32 count, start_freq;
|
||||
|
||||
for (count = 0; count < num_reg_rules; count++) {
|
||||
start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ);
|
||||
|
||||
if (start_freq >= ATH12K_MIN_6G_FREQ)
|
||||
num_invalid_5ghz_rules++;
|
||||
}
|
||||
|
||||
return num_invalid_5ghz_rules;
|
||||
}
|
||||
|
||||
static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
|
||||
struct sk_buff *skb,
|
||||
struct ath12k_reg_info *reg_info)
|
||||
|
@ -4861,6 +4877,7 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
|
|||
u32 num_2g_reg_rules, num_5g_reg_rules;
|
||||
u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
|
||||
u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
|
||||
u8 num_invalid_5ghz_ext_rules;
|
||||
u32 total_reg_rules = 0;
|
||||
int ret, i, j;
|
||||
|
||||
|
@ -4954,20 +4971,6 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
|
|||
|
||||
memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN);
|
||||
|
||||
/* FIXME: Currently FW includes 6G reg rule also in 5G rule
|
||||
* list for country US.
|
||||
* Having same 6G reg rule in 5G and 6G rules list causes
|
||||
* intersect check to be true, and same rules will be shown
|
||||
* multiple times in iw cmd. So added hack below to avoid
|
||||
* parsing 6G rule from 5G reg rule list, and this can be
|
||||
* removed later, after FW updates to remove 6G reg rule
|
||||
* from 5G rules list.
|
||||
*/
|
||||
if (memcmp(reg_info->alpha2, "US", 2) == 0) {
|
||||
reg_info->num_5g_reg_rules = REG_US_5G_NUM_REG_RULES;
|
||||
num_5g_reg_rules = reg_info->num_5g_reg_rules;
|
||||
}
|
||||
|
||||
reg_info->dfs_region = le32_to_cpu(ev->dfs_region);
|
||||
reg_info->phybitmap = le32_to_cpu(ev->phybitmap);
|
||||
reg_info->num_phy = le32_to_cpu(ev->num_phy);
|
||||
|
@ -5070,8 +5073,29 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
|
|||
}
|
||||
}
|
||||
|
||||
ext_wmi_reg_rule += num_2g_reg_rules;
|
||||
|
||||
/* Firmware might include 6 GHz reg rule in 5 GHz rule list
|
||||
* for few countries along with separate 6 GHz rule.
|
||||
* Having same 6 GHz reg rule in 5 GHz and 6 GHz rules list
|
||||
* causes intersect check to be true, and same rules will be
|
||||
* shown multiple times in iw cmd.
|
||||
* Hence, avoid parsing 6 GHz rule from 5 GHz reg rule list
|
||||
*/
|
||||
num_invalid_5ghz_ext_rules = ath12k_wmi_ignore_num_extra_rules(ext_wmi_reg_rule,
|
||||
num_5g_reg_rules);
|
||||
|
||||
if (num_invalid_5ghz_ext_rules) {
|
||||
ath12k_dbg(ab, ATH12K_DBG_WMI,
|
||||
"CC: %s 5 GHz reg rules number %d from fw, %d number of invalid 5 GHz rules",
|
||||
reg_info->alpha2, reg_info->num_5g_reg_rules,
|
||||
num_invalid_5ghz_ext_rules);
|
||||
|
||||
num_5g_reg_rules = num_5g_reg_rules - num_invalid_5ghz_ext_rules;
|
||||
reg_info->num_5g_reg_rules = num_5g_reg_rules;
|
||||
}
|
||||
|
||||
if (num_5g_reg_rules) {
|
||||
ext_wmi_reg_rule += num_2g_reg_rules;
|
||||
reg_info->reg_rules_5g_ptr =
|
||||
create_ext_reg_rules_from_wmi(num_5g_reg_rules,
|
||||
ext_wmi_reg_rule);
|
||||
|
@ -5083,7 +5107,12 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
|
|||
}
|
||||
}
|
||||
|
||||
ext_wmi_reg_rule += num_5g_reg_rules;
|
||||
/* We have adjusted the number of 5 GHz reg rules above. But still those
|
||||
* many rules needs to be adjusted in ext_wmi_reg_rule.
|
||||
*
|
||||
* NOTE: num_invalid_5ghz_ext_rules will be 0 for rest other cases.
|
||||
*/
|
||||
ext_wmi_reg_rule += (num_5g_reg_rules + num_invalid_5ghz_ext_rules);
|
||||
|
||||
for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
|
||||
reg_info->reg_rules_6g_ap_ptr[i] =
|
||||
|
|
|
@ -4073,7 +4073,6 @@ struct ath12k_wmi_eht_rate_set_params {
|
|||
#define MAX_REG_RULES 10
|
||||
#define REG_ALPHA2_LEN 2
|
||||
#define MAX_6G_REG_RULES 5
|
||||
#define REG_US_5G_NUM_REG_RULES 4
|
||||
|
||||
enum wmi_start_event_param {
|
||||
WMI_VDEV_START_RESP_EVENT = 0,
|
||||
|
|
|
@ -2712,7 +2712,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
|
|||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4350_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE_SUB(0x4355, BRCM_PCIE_VENDOR_ID_BROADCOM, 0x4355, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4354_RAW_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4355_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4355_DEVICE_ID, WCC_SEED),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4356_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_43567_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_43570_DEVICE_ID, WCC),
|
||||
|
@ -2723,7 +2723,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
|
|||
BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_2G_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_5G_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_RAW_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, WCC),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, WCC_SEED),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_DEVICE_ID, BCA),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_2G_DEVICE_ID, BCA),
|
||||
BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_5G_DEVICE_ID, BCA),
|
||||
|
|
|
@ -414,16 +414,16 @@ static ssize_t vmclock_miscdev_read(struct file *fp, char __user *buf,
|
|||
}
|
||||
|
||||
static const struct file_operations vmclock_miscdev_fops = {
|
||||
.owner = THIS_MODULE,
|
||||
.mmap = vmclock_miscdev_mmap,
|
||||
.read = vmclock_miscdev_read,
|
||||
};
|
||||
|
||||
/* module operations */
|
||||
|
||||
static void vmclock_remove(struct platform_device *pdev)
|
||||
static void vmclock_remove(void *data)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct vmclock_state *st = dev_get_drvdata(dev);
|
||||
struct vmclock_state *st = data;
|
||||
|
||||
if (st->ptp_clock)
|
||||
ptp_clock_unregister(st->ptp_clock);
|
||||
|
@ -506,14 +506,13 @@ static int vmclock_probe(struct platform_device *pdev)
|
|||
|
||||
if (ret) {
|
||||
dev_info(dev, "Failed to obtain physical address: %d\n", ret);
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (resource_size(&st->res) < VMCLOCK_MIN_SIZE) {
|
||||
dev_info(dev, "Region too small (0x%llx)\n",
|
||||
resource_size(&st->res));
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
return -EINVAL;
|
||||
}
|
||||
st->clk = devm_memremap(dev, st->res.start, resource_size(&st->res),
|
||||
MEMREMAP_WB | MEMREMAP_DEC);
|
||||
|
@ -521,31 +520,34 @@ static int vmclock_probe(struct platform_device *pdev)
|
|||
ret = PTR_ERR(st->clk);
|
||||
dev_info(dev, "failed to map shared memory\n");
|
||||
st->clk = NULL;
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (le32_to_cpu(st->clk->magic) != VMCLOCK_MAGIC ||
|
||||
le32_to_cpu(st->clk->size) > resource_size(&st->res) ||
|
||||
le16_to_cpu(st->clk->version) != 1) {
|
||||
dev_info(dev, "vmclock magic fields invalid\n");
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = ida_alloc(&vmclock_ida, GFP_KERNEL);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
st->index = ret;
|
||||
ret = devm_add_action_or_reset(&pdev->dev, vmclock_put_idx, st);
|
||||
if (ret)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
st->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "vmclock%d", st->index);
|
||||
if (!st->name) {
|
||||
ret = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
if (!st->name)
|
||||
return -ENOMEM;
|
||||
|
||||
st->miscdev.minor = MISC_DYNAMIC_MINOR;
|
||||
|
||||
ret = devm_add_action_or_reset(&pdev->dev, vmclock_remove, st);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* If the structure is big enough, it can be mapped to userspace.
|
||||
|
@ -554,13 +556,12 @@ static int vmclock_probe(struct platform_device *pdev)
|
|||
* cross that bridge if/when we come to it.
|
||||
*/
|
||||
if (le32_to_cpu(st->clk->size) >= PAGE_SIZE) {
|
||||
st->miscdev.minor = MISC_DYNAMIC_MINOR;
|
||||
st->miscdev.fops = &vmclock_miscdev_fops;
|
||||
st->miscdev.name = st->name;
|
||||
|
||||
ret = misc_register(&st->miscdev);
|
||||
if (ret)
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* If there is valid clock information, register a PTP clock */
|
||||
|
@ -570,16 +571,14 @@ static int vmclock_probe(struct platform_device *pdev)
|
|||
if (IS_ERR(st->ptp_clock)) {
|
||||
ret = PTR_ERR(st->ptp_clock);
|
||||
st->ptp_clock = NULL;
|
||||
vmclock_remove(pdev);
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
if (!st->miscdev.minor && !st->ptp_clock) {
|
||||
/* Neither miscdev nor PTP registered */
|
||||
dev_info(dev, "vmclock: Neither miscdev nor PTP available; not registering\n");
|
||||
ret = -ENODEV;
|
||||
goto out;
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
dev_info(dev, "%s: registered %s%s%s\n", st->name,
|
||||
|
@ -587,10 +586,7 @@ static int vmclock_probe(struct platform_device *pdev)
|
|||
(st->miscdev.minor && st->ptp_clock) ? ", " : "",
|
||||
st->ptp_clock ? "PTP" : "");
|
||||
|
||||
dev_set_drvdata(dev, st);
|
||||
|
||||
out:
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct acpi_device_id vmclock_acpi_ids[] = {
|
||||
|
@ -601,7 +597,6 @@ MODULE_DEVICE_TABLE(acpi, vmclock_acpi_ids);
|
|||
|
||||
static struct platform_driver vmclock_platform_driver = {
|
||||
.probe = vmclock_probe,
|
||||
.remove = vmclock_remove,
|
||||
.driver = {
|
||||
.name = "vmclock",
|
||||
.acpi_match_table = vmclock_acpi_ids,
|
||||
|
|
|
@ -7050,14 +7050,16 @@ int qeth_open(struct net_device *dev)
|
|||
card->data.state = CH_STATE_UP;
|
||||
netif_tx_start_all_queues(dev);
|
||||
|
||||
local_bh_disable();
|
||||
qeth_for_each_output_queue(card, queue, i) {
|
||||
netif_napi_add_tx(dev, &queue->napi, qeth_tx_poll);
|
||||
napi_enable(&queue->napi);
|
||||
}
|
||||
napi_enable(&card->napi);
|
||||
|
||||
local_bh_disable();
|
||||
qeth_for_each_output_queue(card, queue, i) {
|
||||
napi_schedule(&queue->napi);
|
||||
}
|
||||
|
||||
napi_enable(&card->napi);
|
||||
napi_schedule(&card->napi);
|
||||
/* kick-start the NAPI softirq: */
|
||||
local_bh_enable();
|
||||
|
|
|
@ -2663,6 +2663,12 @@ struct net *dev_net(const struct net_device *dev)
|
|||
return read_pnet(&dev->nd_net);
|
||||
}
|
||||
|
||||
static inline
|
||||
struct net *dev_net_rcu(const struct net_device *dev)
|
||||
{
|
||||
return read_pnet_rcu(&dev->nd_net);
|
||||
}
|
||||
|
||||
static inline
|
||||
void dev_net_set(struct net_device *dev, struct net *net)
|
||||
{
|
||||
|
|
|
@ -668,7 +668,7 @@ struct l2cap_conn {
|
|||
struct l2cap_chan *smp;
|
||||
|
||||
struct list_head chan_l;
|
||||
struct mutex chan_lock;
|
||||
struct mutex lock;
|
||||
struct kref ref;
|
||||
struct list_head users;
|
||||
};
|
||||
|
@ -970,6 +970,7 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err);
|
|||
void l2cap_send_conn_req(struct l2cap_chan *chan);
|
||||
|
||||
struct l2cap_conn *l2cap_conn_get(struct l2cap_conn *conn);
|
||||
struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *conn);
|
||||
void l2cap_conn_put(struct l2cap_conn *conn);
|
||||
|
||||
int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user);
|
||||
|
|
|
@ -471,9 +471,12 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
|
|||
bool forwarding)
|
||||
{
|
||||
const struct rtable *rt = dst_rtable(dst);
|
||||
struct net *net = dev_net(dst->dev);
|
||||
unsigned int mtu;
|
||||
unsigned int mtu, res;
|
||||
struct net *net;
|
||||
|
||||
rcu_read_lock();
|
||||
|
||||
net = dev_net_rcu(dst->dev);
|
||||
if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) ||
|
||||
ip_mtu_locked(dst) ||
|
||||
!forwarding) {
|
||||
|
@ -497,7 +500,11 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
|
|||
out:
|
||||
mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
|
||||
|
||||
return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
|
||||
res = mtu - lwtunnel_headroom(dst->lwtstate, mtu);
|
||||
|
||||
rcu_read_unlock();
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
|
||||
|
|
|
@ -198,10 +198,12 @@ struct sk_buff *l3mdev_l3_out(struct sock *sk, struct sk_buff *skb, u16 proto)
|
|||
if (netif_is_l3_slave(dev)) {
|
||||
struct net_device *master;
|
||||
|
||||
rcu_read_lock();
|
||||
master = netdev_master_upper_dev_get_rcu(dev);
|
||||
if (master && master->l3mdev_ops->l3mdev_l3_out)
|
||||
skb = master->l3mdev_ops->l3mdev_l3_out(master, sk,
|
||||
skb, proto);
|
||||
rcu_read_unlock();
|
||||
}
|
||||
|
||||
return skb;
|
||||
|
|
|
@ -398,7 +398,7 @@ static inline struct net *read_pnet(const possible_net_t *pnet)
|
|||
#endif
|
||||
}
|
||||
|
||||
static inline struct net *read_pnet_rcu(possible_net_t *pnet)
|
||||
static inline struct net *read_pnet_rcu(const possible_net_t *pnet)
|
||||
{
|
||||
#ifdef CONFIG_NET_NS
|
||||
return rcu_dereference(pnet->net);
|
||||
|
|
|
@ -382,10 +382,15 @@ static inline int inet_iif(const struct sk_buff *skb)
|
|||
static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
|
||||
{
|
||||
int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT);
|
||||
struct net *net = dev_net(dst->dev);
|
||||
|
||||
if (hoplimit == 0)
|
||||
if (hoplimit == 0) {
|
||||
const struct net *net;
|
||||
|
||||
rcu_read_lock();
|
||||
net = dev_net_rcu(dst->dev);
|
||||
hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);
|
||||
rcu_read_unlock();
|
||||
}
|
||||
return hoplimit;
|
||||
}
|
||||
|
||||
|
|
|
@ -682,6 +682,7 @@ enum ethtool_link_ext_substate_module {
|
|||
* @ETH_SS_STATS_ETH_CTRL: names of IEEE 802.3 MAC Control statistics
|
||||
* @ETH_SS_STATS_RMON: names of RMON statistics
|
||||
* @ETH_SS_STATS_PHY: names of PHY(dev) statistics
|
||||
* @ETH_SS_TS_FLAGS: hardware timestamping flags
|
||||
*
|
||||
* @ETH_SS_COUNT: number of defined string sets
|
||||
*/
|
||||
|
@ -708,6 +709,7 @@ enum ethtool_stringset {
|
|||
ETH_SS_STATS_ETH_CTRL,
|
||||
ETH_SS_STATS_RMON,
|
||||
ETH_SS_STATS_PHY,
|
||||
ETH_SS_TS_FLAGS,
|
||||
|
||||
/* add new constants above here */
|
||||
ETH_SS_COUNT
|
||||
|
|
|
@ -685,6 +685,15 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
|
|||
break;
|
||||
}
|
||||
|
||||
if (ax25->ax25_dev) {
|
||||
if (dev == ax25->ax25_dev->dev) {
|
||||
rcu_read_unlock();
|
||||
break;
|
||||
}
|
||||
netdev_put(ax25->ax25_dev->dev, &ax25->dev_tracker);
|
||||
ax25_dev_put(ax25->ax25_dev);
|
||||
}
|
||||
|
||||
ax25->ax25_dev = ax25_dev_ax25dev(dev);
|
||||
if (!ax25->ax25_dev) {
|
||||
rcu_read_unlock();
|
||||
|
@ -692,6 +701,8 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
|
|||
break;
|
||||
}
|
||||
ax25_fillin_cb(ax25, ax25->ax25_dev);
|
||||
netdev_hold(dev, &ax25->dev_tracker, GFP_ATOMIC);
|
||||
ax25_dev_hold(ax25->ax25_dev);
|
||||
rcu_read_unlock();
|
||||
break;
|
||||
|
||||
|
|
|
@ -113,8 +113,6 @@ static void
|
|||
batadv_v_hardif_neigh_init(struct batadv_hardif_neigh_node *hardif_neigh)
|
||||
{
|
||||
ewma_throughput_init(&hardif_neigh->bat_v.throughput);
|
||||
INIT_WORK(&hardif_neigh->bat_v.metric_work,
|
||||
batadv_v_elp_throughput_metric_update);
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
#include <linux/if_ether.h>
|
||||
#include <linux/jiffies.h>
|
||||
#include <linux/kref.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/minmax.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include <linux/nl80211.h>
|
||||
|
@ -26,6 +27,7 @@
|
|||
#include <linux/rcupdate.h>
|
||||
#include <linux/rtnetlink.h>
|
||||
#include <linux/skbuff.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/stddef.h>
|
||||
#include <linux/string.h>
|
||||
#include <linux/types.h>
|
||||
|
@ -41,6 +43,18 @@
|
|||
#include "routing.h"
|
||||
#include "send.h"
|
||||
|
||||
/**
|
||||
* struct batadv_v_metric_queue_entry - list of hardif neighbors which require
|
||||
* and metric update
|
||||
*/
|
||||
struct batadv_v_metric_queue_entry {
|
||||
/** @hardif_neigh: hardif neighbor scheduled for metric update */
|
||||
struct batadv_hardif_neigh_node *hardif_neigh;
|
||||
|
||||
/** @list: list node for metric_queue */
|
||||
struct list_head list;
|
||||
};
|
||||
|
||||
/**
|
||||
* batadv_v_elp_start_timer() - restart timer for ELP periodic work
|
||||
* @hard_iface: the interface for which the timer has to be reset
|
||||
|
@ -59,25 +73,36 @@ static void batadv_v_elp_start_timer(struct batadv_hard_iface *hard_iface)
|
|||
/**
|
||||
* batadv_v_elp_get_throughput() - get the throughput towards a neighbour
|
||||
* @neigh: the neighbour for which the throughput has to be obtained
|
||||
* @pthroughput: calculated throughput towards the given neighbour in multiples
|
||||
* of 100kpbs (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).
|
||||
*
|
||||
* Return: The throughput towards the given neighbour in multiples of 100kpbs
|
||||
* (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).
|
||||
* Return: true when value behind @pthroughput was set
|
||||
*/
|
||||
static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
|
||||
static bool batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh,
|
||||
u32 *pthroughput)
|
||||
{
|
||||
struct batadv_hard_iface *hard_iface = neigh->if_incoming;
|
||||
struct net_device *soft_iface = hard_iface->soft_iface;
|
||||
struct ethtool_link_ksettings link_settings;
|
||||
struct net_device *real_netdev;
|
||||
struct station_info sinfo;
|
||||
u32 throughput;
|
||||
int ret;
|
||||
|
||||
/* don't query throughput when no longer associated with any
|
||||
* batman-adv interface
|
||||
*/
|
||||
if (!soft_iface)
|
||||
return false;
|
||||
|
||||
/* if the user specified a customised value for this interface, then
|
||||
* return it directly
|
||||
*/
|
||||
throughput = atomic_read(&hard_iface->bat_v.throughput_override);
|
||||
if (throughput != 0)
|
||||
return throughput;
|
||||
if (throughput != 0) {
|
||||
*pthroughput = throughput;
|
||||
return true;
|
||||
}
|
||||
|
||||
/* if this is a wireless device, then ask its throughput through
|
||||
* cfg80211 API
|
||||
|
@ -104,27 +129,39 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
|
|||
* possible to delete this neighbor. For now set
|
||||
* the throughput metric to 0.
|
||||
*/
|
||||
return 0;
|
||||
*pthroughput = 0;
|
||||
return true;
|
||||
}
|
||||
if (ret)
|
||||
goto default_throughput;
|
||||
|
||||
if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT))
|
||||
return sinfo.expected_throughput / 100;
|
||||
if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)) {
|
||||
*pthroughput = sinfo.expected_throughput / 100;
|
||||
return true;
|
||||
}
|
||||
|
||||
/* try to estimate the expected throughput based on reported tx
|
||||
* rates
|
||||
*/
|
||||
if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE))
|
||||
return cfg80211_calculate_bitrate(&sinfo.txrate) / 3;
|
||||
if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE)) {
|
||||
*pthroughput = cfg80211_calculate_bitrate(&sinfo.txrate) / 3;
|
||||
return true;
|
||||
}
|
||||
|
||||
goto default_throughput;
|
||||
}
|
||||
|
||||
/* only use rtnl_trylock because the elp worker will be cancelled while
|
||||
* the rntl_lock is held. the cancel_delayed_work_sync() would otherwise
|
||||
* wait forever when the elp work_item was started and it is then also
|
||||
* trying to rtnl_lock
|
||||
*/
|
||||
if (!rtnl_trylock())
|
||||
return false;
|
||||
|
||||
/* if not a wifi interface, check if this device provides data via
|
||||
* ethtool (e.g. an Ethernet adapter)
|
||||
*/
|
||||
rtnl_lock();
|
||||
ret = __ethtool_get_link_ksettings(hard_iface->net_dev, &link_settings);
|
||||
rtnl_unlock();
|
||||
if (ret == 0) {
|
||||
|
@ -135,13 +172,15 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
|
|||
hard_iface->bat_v.flags &= ~BATADV_FULL_DUPLEX;
|
||||
|
||||
throughput = link_settings.base.speed;
|
||||
if (throughput && throughput != SPEED_UNKNOWN)
|
||||
return throughput * 10;
|
||||
if (throughput && throughput != SPEED_UNKNOWN) {
|
||||
*pthroughput = throughput * 10;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
default_throughput:
|
||||
if (!(hard_iface->bat_v.flags & BATADV_WARNING_DEFAULT)) {
|
||||
batadv_info(hard_iface->soft_iface,
|
||||
batadv_info(soft_iface,
|
||||
"WiFi driver or ethtool info does not provide information about link speeds on interface %s, therefore defaulting to hardcoded throughput values of %u.%1u Mbps. Consider overriding the throughput manually or checking your driver.\n",
|
||||
hard_iface->net_dev->name,
|
||||
BATADV_THROUGHPUT_DEFAULT_VALUE / 10,
|
||||
|
@ -150,31 +189,26 @@ default_throughput:
|
|||
}
|
||||
|
||||
/* if none of the above cases apply, return the base_throughput */
|
||||
return BATADV_THROUGHPUT_DEFAULT_VALUE;
|
||||
*pthroughput = BATADV_THROUGHPUT_DEFAULT_VALUE;
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* batadv_v_elp_throughput_metric_update() - worker updating the throughput
|
||||
* metric of a single hop neighbour
|
||||
* @work: the work queue item
|
||||
* @neigh: the neighbour to probe
|
||||
*/
|
||||
void batadv_v_elp_throughput_metric_update(struct work_struct *work)
|
||||
static void
|
||||
batadv_v_elp_throughput_metric_update(struct batadv_hardif_neigh_node *neigh)
|
||||
{
|
||||
struct batadv_hardif_neigh_node_bat_v *neigh_bat_v;
|
||||
struct batadv_hardif_neigh_node *neigh;
|
||||
u32 throughput;
|
||||
bool valid;
|
||||
|
||||
neigh_bat_v = container_of(work, struct batadv_hardif_neigh_node_bat_v,
|
||||
metric_work);
|
||||
neigh = container_of(neigh_bat_v, struct batadv_hardif_neigh_node,
|
||||
bat_v);
|
||||
valid = batadv_v_elp_get_throughput(neigh, &throughput);
|
||||
if (!valid)
|
||||
return;
|
||||
|
||||
ewma_throughput_add(&neigh->bat_v.throughput,
|
||||
batadv_v_elp_get_throughput(neigh));
|
||||
|
||||
/* decrement refcounter to balance increment performed before scheduling
|
||||
* this task
|
||||
*/
|
||||
batadv_hardif_neigh_put(neigh);
|
||||
ewma_throughput_add(&neigh->bat_v.throughput, throughput);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -248,14 +282,16 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
|
|||
*/
|
||||
static void batadv_v_elp_periodic_work(struct work_struct *work)
|
||||
{
|
||||
struct batadv_v_metric_queue_entry *metric_entry;
|
||||
struct batadv_v_metric_queue_entry *metric_safe;
|
||||
struct batadv_hardif_neigh_node *hardif_neigh;
|
||||
struct batadv_hard_iface *hard_iface;
|
||||
struct batadv_hard_iface_bat_v *bat_v;
|
||||
struct batadv_elp_packet *elp_packet;
|
||||
struct list_head metric_queue;
|
||||
struct batadv_priv *bat_priv;
|
||||
struct sk_buff *skb;
|
||||
u32 elp_interval;
|
||||
bool ret;
|
||||
|
||||
bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);
|
||||
hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);
|
||||
|
@ -291,6 +327,8 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
|
|||
|
||||
atomic_inc(&hard_iface->bat_v.elp_seqno);
|
||||
|
||||
INIT_LIST_HEAD(&metric_queue);
|
||||
|
||||
/* The throughput metric is updated on each sent packet. This way, if a
|
||||
* node is dead and no longer sends packets, batman-adv is still able to
|
||||
* react timely to its death.
|
||||
|
@ -315,16 +353,28 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
|
|||
|
||||
/* Reading the estimated throughput from cfg80211 is a task that
|
||||
* may sleep and that is not allowed in an rcu protected
|
||||
* context. Therefore schedule a task for that.
|
||||
* context. Therefore add it to metric_queue and process it
|
||||
* outside rcu protected context.
|
||||
*/
|
||||
ret = queue_work(batadv_event_workqueue,
|
||||
&hardif_neigh->bat_v.metric_work);
|
||||
|
||||
if (!ret)
|
||||
metric_entry = kzalloc(sizeof(*metric_entry), GFP_ATOMIC);
|
||||
if (!metric_entry) {
|
||||
batadv_hardif_neigh_put(hardif_neigh);
|
||||
continue;
|
||||
}
|
||||
|
||||
metric_entry->hardif_neigh = hardif_neigh;
|
||||
list_add(&metric_entry->list, &metric_queue);
|
||||
}
|
||||
rcu_read_unlock();
|
||||
|
||||
list_for_each_entry_safe(metric_entry, metric_safe, &metric_queue, list) {
|
||||
batadv_v_elp_throughput_metric_update(metric_entry->hardif_neigh);
|
||||
|
||||
batadv_hardif_neigh_put(metric_entry->hardif_neigh);
|
||||
list_del(&metric_entry->list);
|
||||
kfree(metric_entry);
|
||||
}
|
||||
|
||||
restart_timer:
|
||||
batadv_v_elp_start_timer(hard_iface);
|
||||
out:
|
||||
|
|
|
@ -10,7 +10,6 @@
|
|||
#include "main.h"
|
||||
|
||||
#include <linux/skbuff.h>
|
||||
#include <linux/workqueue.h>
|
||||
|
||||
int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface);
|
||||
void batadv_v_elp_iface_disable(struct batadv_hard_iface *hard_iface);
|
||||
|
@ -19,6 +18,5 @@ void batadv_v_elp_iface_activate(struct batadv_hard_iface *primary_iface,
|
|||
void batadv_v_elp_primary_iface_set(struct batadv_hard_iface *primary_iface);
|
||||
int batadv_v_elp_packet_recv(struct sk_buff *skb,
|
||||
struct batadv_hard_iface *if_incoming);
|
||||
void batadv_v_elp_throughput_metric_update(struct work_struct *work);
|
||||
|
||||
#endif /* _NET_BATMAN_ADV_BAT_V_ELP_H_ */
|
||||
|
|
|
@ -3937,23 +3937,21 @@ static void batadv_tt_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
|
|||
struct batadv_tvlv_tt_change *tt_change;
|
||||
struct batadv_tvlv_tt_data *tt_data;
|
||||
u16 num_entries, num_vlan;
|
||||
size_t flex_size;
|
||||
size_t tt_data_sz;
|
||||
|
||||
if (tvlv_value_len < sizeof(*tt_data))
|
||||
return;
|
||||
|
||||
tt_data = tvlv_value;
|
||||
tvlv_value_len -= sizeof(*tt_data);
|
||||
|
||||
num_vlan = ntohs(tt_data->num_vlan);
|
||||
|
||||
flex_size = flex_array_size(tt_data, vlan_data, num_vlan);
|
||||
if (tvlv_value_len < flex_size)
|
||||
tt_data_sz = struct_size(tt_data, vlan_data, num_vlan);
|
||||
if (tvlv_value_len < tt_data_sz)
|
||||
return;
|
||||
|
||||
tt_change = (struct batadv_tvlv_tt_change *)((void *)tt_data
|
||||
+ flex_size);
|
||||
tvlv_value_len -= flex_size;
|
||||
+ tt_data_sz);
|
||||
tvlv_value_len -= tt_data_sz;
|
||||
|
||||
num_entries = batadv_tt_entries(tvlv_value_len);
|
||||
|
||||
|
|
|
@ -596,9 +596,6 @@ struct batadv_hardif_neigh_node_bat_v {
|
|||
* neighbor
|
||||
*/
|
||||
unsigned long last_unicast_tx;
|
||||
|
||||
/** @metric_work: work queue callback item for metric update */
|
||||
struct work_struct metric_work;
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
|
@ -119,7 +119,6 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
|
|||
{
|
||||
struct l2cap_chan *c;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
c = __l2cap_get_chan_by_scid(conn, cid);
|
||||
if (c) {
|
||||
/* Only lock if chan reference is not 0 */
|
||||
|
@ -127,7 +126,6 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
|
|||
if (c)
|
||||
l2cap_chan_lock(c);
|
||||
}
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
return c;
|
||||
}
|
||||
|
@ -140,7 +138,6 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
|
|||
{
|
||||
struct l2cap_chan *c;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
c = __l2cap_get_chan_by_dcid(conn, cid);
|
||||
if (c) {
|
||||
/* Only lock if chan reference is not 0 */
|
||||
|
@ -148,7 +145,6 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
|
|||
if (c)
|
||||
l2cap_chan_lock(c);
|
||||
}
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
return c;
|
||||
}
|
||||
|
@ -418,7 +414,7 @@ static void l2cap_chan_timeout(struct work_struct *work)
|
|||
if (!conn)
|
||||
return;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
/* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling
|
||||
* this work. No need to call l2cap_chan_hold(chan) here again.
|
||||
*/
|
||||
|
@ -439,7 +435,7 @@ static void l2cap_chan_timeout(struct work_struct *work)
|
|||
l2cap_chan_unlock(chan);
|
||||
l2cap_chan_put(chan);
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
struct l2cap_chan *l2cap_chan_create(void)
|
||||
|
@ -641,9 +637,9 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
|
|||
|
||||
void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
|
||||
{
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
__l2cap_chan_add(conn, chan);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
void l2cap_chan_del(struct l2cap_chan *chan, int err)
|
||||
|
@ -731,9 +727,9 @@ void l2cap_chan_list(struct l2cap_conn *conn, l2cap_chan_func_t func,
|
|||
if (!conn)
|
||||
return;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
__l2cap_chan_list(conn, func, data);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
EXPORT_SYMBOL_GPL(l2cap_chan_list);
|
||||
|
@ -745,7 +741,7 @@ static void l2cap_conn_update_id_addr(struct work_struct *work)
|
|||
struct hci_conn *hcon = conn->hcon;
|
||||
struct l2cap_chan *chan;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
|
||||
list_for_each_entry(chan, &conn->chan_l, list) {
|
||||
l2cap_chan_lock(chan);
|
||||
|
@ -754,7 +750,7 @@ static void l2cap_conn_update_id_addr(struct work_struct *work)
|
|||
l2cap_chan_unlock(chan);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan)
|
||||
|
@ -948,6 +944,16 @@ static u8 l2cap_get_ident(struct l2cap_conn *conn)
|
|||
return id;
|
||||
}
|
||||
|
||||
static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb,
|
||||
u8 flags)
|
||||
{
|
||||
/* Check if the hcon still valid before attempting to send */
|
||||
if (hci_conn_valid(conn->hcon->hdev, conn->hcon))
|
||||
hci_send_acl(conn->hchan, skb, flags);
|
||||
else
|
||||
kfree_skb(skb);
|
||||
}
|
||||
|
||||
static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len,
|
||||
void *data)
|
||||
{
|
||||
|
@ -970,7 +976,7 @@ static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len,
|
|||
bt_cb(skb)->force_active = BT_POWER_FORCE_ACTIVE_ON;
|
||||
skb->priority = HCI_PRIO_MAX;
|
||||
|
||||
hci_send_acl(conn->hchan, skb, flags);
|
||||
l2cap_send_acl(conn, skb, flags);
|
||||
}
|
||||
|
||||
static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb)
|
||||
|
@ -1497,8 +1503,6 @@ static void l2cap_conn_start(struct l2cap_conn *conn)
|
|||
|
||||
BT_DBG("conn %p", conn);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
|
||||
l2cap_chan_lock(chan);
|
||||
|
||||
|
@ -1567,8 +1571,6 @@ static void l2cap_conn_start(struct l2cap_conn *conn)
|
|||
|
||||
l2cap_chan_unlock(chan);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
}
|
||||
|
||||
static void l2cap_le_conn_ready(struct l2cap_conn *conn)
|
||||
|
@ -1614,7 +1616,7 @@ static void l2cap_conn_ready(struct l2cap_conn *conn)
|
|||
if (hcon->type == ACL_LINK)
|
||||
l2cap_request_info(conn);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
|
||||
list_for_each_entry(chan, &conn->chan_l, list) {
|
||||
|
||||
|
@ -1632,7 +1634,7 @@ static void l2cap_conn_ready(struct l2cap_conn *conn)
|
|||
l2cap_chan_unlock(chan);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
|
||||
if (hcon->type == LE_LINK)
|
||||
l2cap_le_conn_ready(conn);
|
||||
|
@ -1647,14 +1649,10 @@ static void l2cap_conn_unreliable(struct l2cap_conn *conn, int err)
|
|||
|
||||
BT_DBG("conn %p", conn);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
list_for_each_entry(chan, &conn->chan_l, list) {
|
||||
if (test_bit(FLAG_FORCE_RELIABLE, &chan->flags))
|
||||
l2cap_chan_set_err(chan, err);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
}
|
||||
|
||||
static void l2cap_info_timeout(struct work_struct *work)
|
||||
|
@ -1665,7 +1663,9 @@ static void l2cap_info_timeout(struct work_struct *work)
|
|||
conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE;
|
||||
conn->info_ident = 0;
|
||||
|
||||
mutex_lock(&conn->lock);
|
||||
l2cap_conn_start(conn);
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -1757,6 +1757,8 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
|
|||
|
||||
BT_DBG("hcon %p conn %p, err %d", hcon, conn, err);
|
||||
|
||||
mutex_lock(&conn->lock);
|
||||
|
||||
kfree_skb(conn->rx_skb);
|
||||
|
||||
skb_queue_purge(&conn->pending_rx);
|
||||
|
@ -1775,8 +1777,6 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
|
|||
/* Force the connection to be immediately dropped */
|
||||
hcon->disc_timeout = 0;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
/* Kill channels */
|
||||
list_for_each_entry_safe(chan, l, &conn->chan_l, list) {
|
||||
l2cap_chan_hold(chan);
|
||||
|
@ -1790,15 +1790,14 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
|
|||
l2cap_chan_put(chan);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
hci_chan_del(conn->hchan);
|
||||
|
||||
if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT)
|
||||
cancel_delayed_work_sync(&conn->info_timer);
|
||||
|
||||
hcon->l2cap_data = NULL;
|
||||
hci_chan_del(conn->hchan);
|
||||
conn->hchan = NULL;
|
||||
|
||||
hcon->l2cap_data = NULL;
|
||||
mutex_unlock(&conn->lock);
|
||||
l2cap_conn_put(conn);
|
||||
}
|
||||
|
||||
|
@ -2916,8 +2915,6 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb)
|
|||
|
||||
BT_DBG("conn %p", conn);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
list_for_each_entry(chan, &conn->chan_l, list) {
|
||||
if (chan->chan_type != L2CAP_CHAN_RAW)
|
||||
continue;
|
||||
|
@ -2932,8 +2929,6 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb)
|
|||
if (chan->ops->recv(chan, nskb))
|
||||
kfree_skb(nskb);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
}
|
||||
|
||||
/* ---- L2CAP signalling commands ---- */
|
||||
|
@ -3952,7 +3947,6 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
|
|||
goto response;
|
||||
}
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
l2cap_chan_lock(pchan);
|
||||
|
||||
/* Check if the ACL is secure enough (if not SDP) */
|
||||
|
@ -4059,7 +4053,6 @@ response:
|
|||
}
|
||||
|
||||
l2cap_chan_unlock(pchan);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
l2cap_chan_put(pchan);
|
||||
}
|
||||
|
||||
|
@ -4098,27 +4091,19 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
|
|||
BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x",
|
||||
dcid, scid, result, status);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
if (scid) {
|
||||
chan = __l2cap_get_chan_by_scid(conn, scid);
|
||||
if (!chan) {
|
||||
err = -EBADSLT;
|
||||
goto unlock;
|
||||
}
|
||||
if (!chan)
|
||||
return -EBADSLT;
|
||||
} else {
|
||||
chan = __l2cap_get_chan_by_ident(conn, cmd->ident);
|
||||
if (!chan) {
|
||||
err = -EBADSLT;
|
||||
goto unlock;
|
||||
}
|
||||
if (!chan)
|
||||
return -EBADSLT;
|
||||
}
|
||||
|
||||
chan = l2cap_chan_hold_unless_zero(chan);
|
||||
if (!chan) {
|
||||
err = -EBADSLT;
|
||||
goto unlock;
|
||||
}
|
||||
if (!chan)
|
||||
return -EBADSLT;
|
||||
|
||||
err = 0;
|
||||
|
||||
|
@ -4156,9 +4141,6 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
|
|||
l2cap_chan_unlock(chan);
|
||||
l2cap_chan_put(chan);
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -4446,11 +4428,7 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
|
|||
|
||||
chan->ops->set_shutdown(chan);
|
||||
|
||||
l2cap_chan_unlock(chan);
|
||||
mutex_lock(&conn->chan_lock);
|
||||
l2cap_chan_lock(chan);
|
||||
l2cap_chan_del(chan, ECONNRESET);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
chan->ops->close(chan);
|
||||
|
||||
|
@ -4487,11 +4465,7 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
|
|||
return 0;
|
||||
}
|
||||
|
||||
l2cap_chan_unlock(chan);
|
||||
mutex_lock(&conn->chan_lock);
|
||||
l2cap_chan_lock(chan);
|
||||
l2cap_chan_del(chan, 0);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
chan->ops->close(chan);
|
||||
|
||||
|
@ -4689,13 +4663,9 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn,
|
|||
BT_DBG("dcid 0x%4.4x mtu %u mps %u credits %u result 0x%2.2x",
|
||||
dcid, mtu, mps, credits, result);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
chan = __l2cap_get_chan_by_ident(conn, cmd->ident);
|
||||
if (!chan) {
|
||||
err = -EBADSLT;
|
||||
goto unlock;
|
||||
}
|
||||
if (!chan)
|
||||
return -EBADSLT;
|
||||
|
||||
err = 0;
|
||||
|
||||
|
@ -4743,9 +4713,6 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn,
|
|||
|
||||
l2cap_chan_unlock(chan);
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -4857,7 +4824,6 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
|
|||
goto response;
|
||||
}
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
l2cap_chan_lock(pchan);
|
||||
|
||||
if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
|
||||
|
@ -4923,7 +4889,6 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
|
|||
|
||||
response_unlock:
|
||||
l2cap_chan_unlock(pchan);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
l2cap_chan_put(pchan);
|
||||
|
||||
if (result == L2CAP_CR_PEND)
|
||||
|
@ -5057,7 +5022,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
|
|||
goto response;
|
||||
}
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
l2cap_chan_lock(pchan);
|
||||
|
||||
if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
|
||||
|
@ -5132,7 +5096,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
|
|||
|
||||
unlock:
|
||||
l2cap_chan_unlock(pchan);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
l2cap_chan_put(pchan);
|
||||
|
||||
response:
|
||||
|
@ -5169,8 +5132,6 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
|
|||
BT_DBG("mtu %u mps %u credits %u result 0x%4.4x", mtu, mps, credits,
|
||||
result);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
cmd_len -= sizeof(*rsp);
|
||||
|
||||
list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
|
||||
|
@ -5256,8 +5217,6 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
|
|||
l2cap_chan_unlock(chan);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -5370,8 +5329,6 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn,
|
|||
if (cmd_len < sizeof(*rej))
|
||||
return -EPROTO;
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
|
||||
chan = __l2cap_get_chan_by_ident(conn, cmd->ident);
|
||||
if (!chan)
|
||||
goto done;
|
||||
|
@ -5386,7 +5343,6 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn,
|
|||
l2cap_chan_put(chan);
|
||||
|
||||
done:
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -6841,8 +6797,12 @@ static void process_pending_rx(struct work_struct *work)
|
|||
|
||||
BT_DBG("");
|
||||
|
||||
mutex_lock(&conn->lock);
|
||||
|
||||
while ((skb = skb_dequeue(&conn->pending_rx)))
|
||||
l2cap_recv_frame(conn, skb);
|
||||
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)
|
||||
|
@ -6881,7 +6841,7 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)
|
|||
conn->local_fixed_chan |= L2CAP_FC_SMP_BREDR;
|
||||
|
||||
mutex_init(&conn->ident_lock);
|
||||
mutex_init(&conn->chan_lock);
|
||||
mutex_init(&conn->lock);
|
||||
|
||||
INIT_LIST_HEAD(&conn->chan_l);
|
||||
INIT_LIST_HEAD(&conn->users);
|
||||
|
@ -7072,7 +7032,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
|
|||
}
|
||||
}
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
l2cap_chan_lock(chan);
|
||||
|
||||
if (cid && __l2cap_get_chan_by_dcid(conn, cid)) {
|
||||
|
@ -7113,7 +7073,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
|
|||
|
||||
chan_unlock:
|
||||
l2cap_chan_unlock(chan);
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
done:
|
||||
hci_dev_unlock(hdev);
|
||||
hci_dev_put(hdev);
|
||||
|
@ -7325,7 +7285,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
|
|||
|
||||
BT_DBG("conn %p status 0x%2.2x encrypt %u", conn, status, encrypt);
|
||||
|
||||
mutex_lock(&conn->chan_lock);
|
||||
mutex_lock(&conn->lock);
|
||||
|
||||
list_for_each_entry(chan, &conn->chan_l, list) {
|
||||
l2cap_chan_lock(chan);
|
||||
|
@ -7399,7 +7359,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
|
|||
l2cap_chan_unlock(chan);
|
||||
}
|
||||
|
||||
mutex_unlock(&conn->chan_lock);
|
||||
mutex_unlock(&conn->lock);
|
||||
}
|
||||
|
||||
/* Append fragment into frame respecting the maximum len of rx_skb */
|
||||
|
@ -7466,19 +7426,45 @@ static void l2cap_recv_reset(struct l2cap_conn *conn)
|
|||
conn->rx_len = 0;
|
||||
}
|
||||
|
||||
struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *c)
|
||||
{
|
||||
if (!c)
|
||||
return NULL;
|
||||
|
||||
BT_DBG("conn %p orig refcnt %u", c, kref_read(&c->ref));
|
||||
|
||||
if (!kref_get_unless_zero(&c->ref))
|
||||
return NULL;
|
||||
|
||||
return c;
|
||||
}
|
||||
|
||||
void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
|
||||
{
|
||||
struct l2cap_conn *conn = hcon->l2cap_data;
|
||||
struct l2cap_conn *conn;
|
||||
int len;
|
||||
|
||||
/* Lock hdev to access l2cap_data to avoid race with l2cap_conn_del */
|
||||
hci_dev_lock(hcon->hdev);
|
||||
|
||||
conn = hcon->l2cap_data;
|
||||
|
||||
if (!conn)
|
||||
conn = l2cap_conn_add(hcon);
|
||||
|
||||
if (!conn)
|
||||
goto drop;
|
||||
conn = l2cap_conn_hold_unless_zero(conn);
|
||||
|
||||
hci_dev_unlock(hcon->hdev);
|
||||
|
||||
if (!conn) {
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
||||
BT_DBG("conn %p len %u flags 0x%x", conn, skb->len, flags);
|
||||
|
||||
mutex_lock(&conn->lock);
|
||||
|
||||
switch (flags) {
|
||||
case ACL_START:
|
||||
case ACL_START_NO_FLUSH:
|
||||
|
@ -7503,7 +7489,7 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
|
|||
if (len == skb->len) {
|
||||
/* Complete frame received */
|
||||
l2cap_recv_frame(conn, skb);
|
||||
return;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
BT_DBG("Start: total len %d, frag len %u", len, skb->len);
|
||||
|
@ -7567,6 +7553,9 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
|
|||
|
||||
drop:
|
||||
kfree_skb(skb);
|
||||
unlock:
|
||||
mutex_unlock(&conn->lock);
|
||||
l2cap_conn_put(conn);
|
||||
}
|
||||
|
||||
static struct hci_cb l2cap_cb = {
|
||||
|
|
|
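Note on the hunk above: l2cap_conn_hold_unless_zero() is the standard kref idiom for taking a reference only while the object is still alive; if the refcount has already hit zero, the caller must treat the object as gone. A minimal sketch of the idiom outside the Bluetooth code (demo_obj and its helpers are illustrative names, not part of the patch):

	#include <linux/kref.h>
	#include <linux/slab.h>

	struct demo_obj {
		struct kref ref;
	};

	/* Succeeds only if the refcount has not already dropped to zero,
	 * i.e. no concurrent teardown has begun. */
	static struct demo_obj *demo_obj_hold_unless_zero(struct demo_obj *obj)
	{
		if (!obj || !kref_get_unless_zero(&obj->ref))
			return NULL;
		return obj;
	}

	static void demo_obj_release(struct kref *ref)
	{
		kfree(container_of(ref, struct demo_obj, ref));
	}

	static void demo_obj_put(struct demo_obj *obj)
	{
		kref_put(&obj->ref, demo_obj_release);
	}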
@@ -1326,9 +1326,10 @@ static int l2cap_sock_shutdown(struct socket *sock, int how)
 	/* prevent sk structure from being freed whilst unlocked */
 	sock_hold(sk);
 
-	chan = l2cap_pi(sk)->chan;
 	/* prevent chan structure from being freed whilst unlocked */
-	l2cap_chan_hold(chan);
+	chan = l2cap_chan_hold_unless_zero(l2cap_pi(sk)->chan);
+	if (!chan)
+		goto shutdown_already;
 
 	BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
 
@@ -1358,22 +1359,20 @@ static int l2cap_sock_shutdown(struct socket *sock, int how)
 		release_sock(sk);
 
 		l2cap_chan_lock(chan);
-		conn = chan->conn;
-		if (conn)
-			/* prevent conn structure from being freed */
-			l2cap_conn_get(conn);
+		/* prevent conn structure from being freed */
+		conn = l2cap_conn_hold_unless_zero(chan->conn);
 		l2cap_chan_unlock(chan);
 
 		if (conn)
-			/* mutex lock must be taken before l2cap_chan_lock() */
-			mutex_lock(&conn->chan_lock);
+			mutex_lock(&conn->lock);
 
 		l2cap_chan_lock(chan);
 		l2cap_chan_close(chan, 0);
 		l2cap_chan_unlock(chan);
 
 		if (conn) {
-			mutex_unlock(&conn->chan_lock);
+			mutex_unlock(&conn->lock);
 			l2cap_conn_put(conn);
 		}
@@ -1132,7 +1132,7 @@ static int j1939_sk_send_loop(struct j1939_priv *priv, struct sock *sk,
 
 	todo_size = size;
 
-	while (todo_size) {
+	do {
 		struct j1939_sk_buff_cb *skcb;
 
 		segment_size = min_t(size_t, J1939_MAX_TP_PACKET_SIZE,
@@ -1177,7 +1177,7 @@ static int j1939_sk_send_loop(struct j1939_priv *priv, struct sock *sk,
 
 		todo_size -= segment_size;
 		session->total_queued_size += segment_size;
-	}
+	} while (todo_size);
 
 	switch (ret) {
 	case 0: /* OK */
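Note: converting the send loop from while (todo_size) to do { } while (todo_size) makes the body run at least once, so a zero-length J1939 send still queues exactly one (empty) segment rather than skipping the loop entirely; the matching hunk below lets the skb lookup find that zero-length segment. A userspace sketch of the control-flow difference (illustrative only, not kernel code):

	#include <stdio.h>
	#include <stddef.h>

	static int segments_queued(size_t todo_size)
	{
		int n = 0;

		/* do/while: a zero-byte request still yields one segment */
		do {
			size_t seg = todo_size < 8 ? todo_size : 8;

			n++;
			todo_size -= seg;
		} while (todo_size);

		return n;
	}

	int main(void)
	{
		printf("%d\n", segments_queued(0));	/* 1, not 0 */
		printf("%d\n", segments_queued(20));	/* 3 */
		return 0;
	}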
@@ -382,8 +382,9 @@ sk_buff *j1939_session_skb_get_by_offset(struct j1939_session *session,
 	skb_queue_walk(&session->skb_queue, do_skb) {
 		do_skcb = j1939_skb_to_cb(do_skb);
 
-		if (offset_start >= do_skcb->offset &&
-		    offset_start < (do_skcb->offset + do_skb->len)) {
+		if ((offset_start >= do_skcb->offset &&
+		     offset_start < (do_skcb->offset + do_skb->len)) ||
+		    (offset_start == 0 && do_skcb->offset == 0 && do_skb->len == 0)) {
 			skb = do_skb;
 		}
 	}
@@ -37,8 +37,8 @@ static const struct fib_kuid_range fib_kuid_range_unset = {
 
 bool fib_rule_matchall(const struct fib_rule *rule)
 {
-	if (rule->iifindex || rule->oifindex || rule->mark || rule->tun_id ||
-	    rule->flags)
+	if (READ_ONCE(rule->iifindex) || READ_ONCE(rule->oifindex) ||
+	    rule->mark || rule->tun_id || rule->flags)
 		return false;
 	if (rule->suppress_ifgroup != -1 || rule->suppress_prefixlen != -1)
 		return false;
@@ -261,12 +261,14 @@ static int fib_rule_match(struct fib_rule *rule, struct fib_rules_ops *ops,
 			  struct flowi *fl, int flags,
 			  struct fib_lookup_arg *arg)
 {
-	int ret = 0;
+	int iifindex, oifindex, ret = 0;
 
-	if (rule->iifindex && (rule->iifindex != fl->flowi_iif))
+	iifindex = READ_ONCE(rule->iifindex);
+	if (iifindex && (iifindex != fl->flowi_iif))
 		goto out;
 
-	if (rule->oifindex && (rule->oifindex != fl->flowi_oif))
+	oifindex = READ_ONCE(rule->oifindex);
+	if (oifindex && (oifindex != fl->flowi_oif))
 		goto out;
 
 	if ((rule->mark ^ fl->flowi_mark) & rule->mark_mask)
@@ -1041,14 +1043,14 @@ static int fib_nl_fill_rule(struct sk_buff *skb, struct fib_rule *rule,
 	if (rule->iifname[0]) {
 		if (nla_put_string(skb, FRA_IIFNAME, rule->iifname))
 			goto nla_put_failure;
-		if (rule->iifindex == -1)
+		if (READ_ONCE(rule->iifindex) == -1)
 			frh->flags |= FIB_RULE_IIF_DETACHED;
 	}
 
 	if (rule->oifname[0]) {
 		if (nla_put_string(skb, FRA_OIFNAME, rule->oifname))
 			goto nla_put_failure;
-		if (rule->oifindex == -1)
+		if (READ_ONCE(rule->oifindex) == -1)
 			frh->flags |= FIB_RULE_OIF_DETACHED;
 	}
 
@@ -1220,10 +1222,10 @@ static void attach_rules(struct list_head *rules, struct net_device *dev)
 	list_for_each_entry(rule, rules, list) {
 		if (rule->iifindex == -1 &&
 		    strcmp(dev->name, rule->iifname) == 0)
-			rule->iifindex = dev->ifindex;
+			WRITE_ONCE(rule->iifindex, dev->ifindex);
 		if (rule->oifindex == -1 &&
 		    strcmp(dev->name, rule->oifname) == 0)
-			rule->oifindex = dev->ifindex;
+			WRITE_ONCE(rule->oifindex, dev->ifindex);
 	}
 }
 
@@ -1233,9 +1235,9 @@ static void detach_rules(struct list_head *rules, struct net_device *dev)
 
 	list_for_each_entry(rule, rules, list) {
 		if (rule->iifindex == dev->ifindex)
-			rule->iifindex = -1;
+			WRITE_ONCE(rule->iifindex, -1);
 		if (rule->oifindex == dev->ifindex)
-			rule->oifindex = -1;
+			WRITE_ONCE(rule->oifindex, -1);
 	}
 }
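Note: the fib_rules hunks are the usual annotation pairing for intentional lockless access: the writer publishes ifindex updates with WRITE_ONCE() and lockless readers snapshot them with READ_ONCE(), which prevents load/store tearing and documents the data race for KCSAN. A generic sketch of the pairing (demo_rule is an illustrative type, not the fib_rules struct):

	#include <linux/compiler.h>

	struct demo_rule {
		int ifindex;	/* written under RTNL, read locklessly */
	};

	/* Writer side: a marked store that cannot be torn. */
	static void demo_detach(struct demo_rule *rule)
	{
		WRITE_ONCE(rule->ifindex, -1);
	}

	/* Reader side: load once into a local and test the snapshot, so
	 * the value cannot change between the check and the use. */
	static bool demo_match(const struct demo_rule *rule, int iif)
	{
		int ifindex = READ_ONCE(rule->ifindex);

		return !ifindex || ifindex == iif;
	}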
@@ -1108,10 +1108,12 @@ bool __skb_flow_dissect(const struct net *net,
 					      FLOW_DISSECTOR_KEY_BASIC,
 					      target_container);
 
+	rcu_read_lock();
+
 	if (skb) {
 		if (!net) {
 			if (skb->dev)
-				net = dev_net(skb->dev);
+				net = dev_net_rcu(skb->dev);
 			else if (skb->sk)
 				net = sock_net(skb->sk);
 		}
@@ -1122,7 +1124,6 @@ bool __skb_flow_dissect(const struct net *net,
 		enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR;
 		struct bpf_prog_array *run_array;
 
-		rcu_read_lock();
 		run_array = rcu_dereference(init_net.bpf.run_array[type]);
 		if (!run_array)
 			run_array = rcu_dereference(net->bpf.run_array[type]);
@@ -1150,17 +1151,17 @@ bool __skb_flow_dissect(const struct net *net,
 			prog = READ_ONCE(run_array->items[0].prog);
 			result = bpf_flow_dissect(prog, &ctx, n_proto, nhoff,
 						  hlen, flags);
-			if (result == BPF_FLOW_DISSECTOR_CONTINUE)
-				goto dissect_continue;
-			__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
-						 target_container);
-			rcu_read_unlock();
-			return result == BPF_OK;
+			if (result != BPF_FLOW_DISSECTOR_CONTINUE) {
+				__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
+							 target_container);
+				rcu_read_unlock();
+				return result == BPF_OK;
+			}
 		}
-dissect_continue:
-		rcu_read_unlock();
 	}
 
+	rcu_read_unlock();
+
 	if (dissector_uses_key(flow_dissector,
 			       FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
 		struct ethhdr *eth = eth_hdr(skb);
@@ -3447,10 +3447,12 @@ static const struct seq_operations neigh_stat_seq_ops = {
 static void __neigh_notify(struct neighbour *n, int type, int flags,
 			   u32 pid)
 {
-	struct net *net = dev_net(n->dev);
 	struct sk_buff *skb;
 	int err = -ENOBUFS;
+	struct net *net;
 
+	rcu_read_lock();
+	net = dev_net_rcu(n->dev);
 	skb = nlmsg_new(neigh_nlmsg_size(), GFP_ATOMIC);
 	if (skb == NULL)
 		goto errout;
@@ -3463,9 +3465,11 @@ static void __neigh_notify(struct neighbour *n, int type, int flags,
 		goto errout;
 	}
 	rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
-	return;
+	goto out;
 errout:
 	rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
+out:
+	rcu_read_unlock();
 }
 
 void neigh_app_ns(struct neighbour *n)
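Note: this and the other dev_net_rcu() conversions in this pull share one recipe: dev_net() is only safe if the caller already holds RCU or a reference on the netns, so call sites that held neither now bracket the lookup and every use of the resulting struct net pointer in a single RCU read-side section. The shape of the conversion, reduced to a sketch (error handling elided):

	#include <linux/netdevice.h>
	#include <linux/rcupdate.h>

	static void demo_use_net(struct net_device *dev)
	{
		struct net *net;

		rcu_read_lock();
		net = dev_net_rcu(dev);
		/* ... "net" may only be dereferenced inside this section ... */
		rcu_read_unlock();
	}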
@@ -3432,6 +3432,7 @@ static int rtnl_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
 		err = -ENODEV;
 
 	rtnl_nets_unlock(&rtnl_nets);
+	rtnl_nets_destroy(&rtnl_nets);
errout:
 	return err;
 }
@@ -462,6 +462,11 @@ const char ts_rx_filter_names[][ETH_GSTRING_LEN] = {
 };
 static_assert(ARRAY_SIZE(ts_rx_filter_names) == __HWTSTAMP_FILTER_CNT);
 
+const char ts_flags_names[][ETH_GSTRING_LEN] = {
+	[const_ilog2(HWTSTAMP_FLAG_BONDED_PHC_INDEX)] = "bonded-phc-index",
+};
+static_assert(ARRAY_SIZE(ts_flags_names) == __HWTSTAMP_FLAG_CNT);
+
 const char udp_tunnel_type_names[][ETH_GSTRING_LEN] = {
 	[ETHTOOL_UDP_TUNNEL_TYPE_VXLAN] = "vxlan",
 	[ETHTOOL_UDP_TUNNEL_TYPE_GENEVE] = "geneve",
@@ -13,6 +13,7 @@
 	ETHTOOL_LINK_MODE_ ## speed ## base ## type ## _ ## duplex ## _BIT
 
 #define __SOF_TIMESTAMPING_CNT (const_ilog2(SOF_TIMESTAMPING_LAST) + 1)
+#define __HWTSTAMP_FLAG_CNT (const_ilog2(HWTSTAMP_FLAG_LAST) + 1)
 
 struct link_mode_info {
 	int	speed;
@@ -38,6 +39,7 @@ extern const char wol_mode_names[][ETH_GSTRING_LEN];
 extern const char sof_timestamping_names[][ETH_GSTRING_LEN];
 extern const char ts_tx_type_names[][ETH_GSTRING_LEN];
 extern const char ts_rx_filter_names[][ETH_GSTRING_LEN];
+extern const char ts_flags_names[][ETH_GSTRING_LEN];
 extern const char udp_tunnel_type_names[][ETH_GSTRING_LEN];
 
 int __ethtool_get_link(struct net_device *dev);
@@ -75,6 +75,11 @@ static const struct strset_info info_template[] = {
 		.count		= __HWTSTAMP_FILTER_CNT,
 		.strings	= ts_rx_filter_names,
 	},
+	[ETH_SS_TS_FLAGS] = {
+		.per_dev	= false,
+		.count		= __HWTSTAMP_FLAG_CNT,
+		.strings	= ts_flags_names,
+	},
 	[ETH_SS_UDP_TUNNEL_TYPES] = {
 		.per_dev	= false,
 		.count		= __ETHTOOL_UDP_TUNNEL_TYPE_CNT,
@@ -54,7 +54,7 @@ static int tsconfig_prepare_data(const struct ethnl_req_info *req_base,
 
 	data->hwtst_config.tx_type = BIT(cfg.tx_type);
 	data->hwtst_config.rx_filter = BIT(cfg.rx_filter);
-	data->hwtst_config.flags = BIT(cfg.flags);
+	data->hwtst_config.flags = cfg.flags;
 
 	data->hwprov_desc.index = -1;
 	hwprov = rtnl_dereference(dev->hwprov);
@@ -91,10 +91,16 @@ static int tsconfig_reply_size(const struct ethnl_req_info *req_base,
 
 	BUILD_BUG_ON(__HWTSTAMP_TX_CNT > 32);
 	BUILD_BUG_ON(__HWTSTAMP_FILTER_CNT > 32);
+	BUILD_BUG_ON(__HWTSTAMP_FLAG_CNT > 32);
 
-	if (data->hwtst_config.flags)
-		/* _TSCONFIG_HWTSTAMP_FLAGS */
-		len += nla_total_size(sizeof(u32));
+	if (data->hwtst_config.flags) {
+		ret = ethnl_bitset32_size(&data->hwtst_config.flags,
+					  NULL, __HWTSTAMP_FLAG_CNT,
+					  ts_flags_names, compact);
+		if (ret < 0)
+			return ret;
+		len += ret;	/* _TSCONFIG_HWTSTAMP_FLAGS */
+	}
 
 	if (data->hwtst_config.tx_type) {
 		ret = ethnl_bitset32_size(&data->hwtst_config.tx_type,
@@ -130,8 +136,10 @@ static int tsconfig_fill_reply(struct sk_buff *skb,
 	int ret;
 
 	if (data->hwtst_config.flags) {
-		ret = nla_put_u32(skb, ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS,
-				  data->hwtst_config.flags);
+		ret = ethnl_put_bitset32(skb, ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS,
+					 &data->hwtst_config.flags, NULL,
+					 __HWTSTAMP_FLAG_CNT,
+					 ts_flags_names, compact);
 		if (ret < 0)
 			return ret;
 	}
@@ -180,7 +188,7 @@ const struct nla_policy ethnl_tsconfig_set_policy[ETHTOOL_A_TSCONFIG_MAX + 1] =
 	[ETHTOOL_A_TSCONFIG_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy),
 	[ETHTOOL_A_TSCONFIG_HWTSTAMP_PROVIDER] =
 		NLA_POLICY_NESTED(ethnl_ts_hwtst_prov_policy),
-	[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS] = { .type = NLA_U32 },
+	[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS] = { .type = NLA_NESTED },
 	[ETHTOOL_A_TSCONFIG_RX_FILTERS] = { .type = NLA_NESTED },
 	[ETHTOOL_A_TSCONFIG_TX_TYPES] = { .type = NLA_NESTED },
 };
@@ -296,6 +304,7 @@ static int ethnl_set_tsconfig(struct ethnl_req_info *req_base,
 
 	BUILD_BUG_ON(__HWTSTAMP_TX_CNT >= 32);
 	BUILD_BUG_ON(__HWTSTAMP_FILTER_CNT >= 32);
+	BUILD_BUG_ON(__HWTSTAMP_FLAG_CNT > 32);
 
 	if (!netif_device_present(dev))
 		return -ENODEV;
@@ -377,9 +386,13 @@ static int ethnl_set_tsconfig(struct ethnl_req_info *req_base,
 	}
 
 	if (tb[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS]) {
-		ethnl_update_u32(&hwtst_config.flags,
-				 tb[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS],
-				 &config_mod);
+		ret = ethnl_update_bitset32(&hwtst_config.flags,
+					    __HWTSTAMP_FLAG_CNT,
+					    tb[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS],
+					    ts_flags_names, info->extack,
+					    &config_mod);
+		if (ret < 0)
+			goto err_free_hwprov;
 	}
 
 	ret = net_hwtstamp_validate(&hwtst_config);
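Note: ts_flags_names above maps each HWTSTAMP_FLAG_* bit to a string by indexing the array with const_ilog2() of the flag value, so bit N of the netlink bitset lines up with entry N of the string set, and the static_assert pins the array length to the flag count. The same indexing scheme, reduced (the DEMO_FLAG_* values are hypothetical):

	#include <linux/log2.h>

	#define DEMO_FLAG_A	0x1	/* bit 0 */
	#define DEMO_FLAG_B	0x2	/* bit 1 */

	/* Entry N names bit N; const_ilog2() turns a flag into its index. */
	static const char * const demo_flag_names[] = {
		[const_ilog2(DEMO_FLAG_A)] = "flag-a",
		[const_ilog2(DEMO_FLAG_B)] = "flag-b",
	};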
@@ -659,10 +659,12 @@ static int arp_xmit_finish(struct net *net, struct sock *sk, struct sk_buff *skb
  */
 void arp_xmit(struct sk_buff *skb)
 {
+	rcu_read_lock();
 	/* Send it off, maybe filter it using firewalling first. */
 	NF_HOOK(NFPROTO_ARP, NF_ARP_OUT,
-		dev_net(skb->dev), NULL, skb, NULL, skb->dev,
+		dev_net_rcu(skb->dev), NULL, skb, NULL, skb->dev,
 		arp_xmit_finish);
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(arp_xmit);
@@ -1371,10 +1371,11 @@ __be32 inet_select_addr(const struct net_device *dev, __be32 dst, int scope)
 	__be32 addr = 0;
 	unsigned char localnet_scope = RT_SCOPE_HOST;
 	struct in_device *in_dev;
-	struct net *net = dev_net(dev);
+	struct net *net;
 	int master_idx;
 
 	rcu_read_lock();
+	net = dev_net_rcu(dev);
 	in_dev = __in_dev_get_rcu(dev);
 	if (!in_dev)
 		goto no_in_dev;
@@ -399,10 +399,10 @@ static void icmp_push_reply(struct sock *sk,
 
 static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
 {
-	struct ipcm_cookie ipc;
 	struct rtable *rt = skb_rtable(skb);
-	struct net *net = dev_net(rt->dst.dev);
+	struct net *net = dev_net_rcu(rt->dst.dev);
 	bool apply_ratelimit = false;
+	struct ipcm_cookie ipc;
 	struct flowi4 fl4;
 	struct sock *sk;
 	struct inet_sock *inet;
@@ -608,12 +608,14 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
 	struct sock *sk;
 
 	if (!rt)
-		goto out;
+		return;
+
+	rcu_read_lock();
 
 	if (rt->dst.dev)
-		net = dev_net(rt->dst.dev);
+		net = dev_net_rcu(rt->dst.dev);
 	else if (skb_in->dev)
-		net = dev_net(skb_in->dev);
+		net = dev_net_rcu(skb_in->dev);
 	else
 		goto out;
 
@@ -785,7 +787,8 @@ out_unlock:
 	icmp_xmit_unlock(sk);
out_bh_enable:
 	local_bh_enable();
-out:;
+out:
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(__icmp_send);
 
@@ -834,7 +837,7 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
 	 * avoid additional coding at protocol handlers.
 	 */
 	if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) {
-		__ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
+		__ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
 		return;
 	}
 
@@ -868,7 +871,7 @@ static enum skb_drop_reason icmp_unreach(struct sk_buff *skb)
 	struct net *net;
 	u32 info = 0;
 
-	net = dev_net(skb_dst(skb)->dev);
+	net = dev_net_rcu(skb_dst(skb)->dev);
 
 	/*
 	 *	Incomplete header ?
@@ -979,7 +982,7 @@ out_err:
 static enum skb_drop_reason icmp_redirect(struct sk_buff *skb)
 {
 	if (skb->len < sizeof(struct iphdr)) {
-		__ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
+		__ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
 		return SKB_DROP_REASON_PKT_TOO_SMALL;
 	}
 
@@ -1011,7 +1014,7 @@ static enum skb_drop_reason icmp_echo(struct sk_buff *skb)
 	struct icmp_bxm icmp_param;
 	struct net *net;
 
-	net = dev_net(skb_dst(skb)->dev);
+	net = dev_net_rcu(skb_dst(skb)->dev);
 	/* should there be an ICMP stat for ignored echos? */
 	if (READ_ONCE(net->ipv4.sysctl_icmp_echo_ignore_all))
 		return SKB_NOT_DROPPED_YET;
@@ -1040,9 +1043,9 @@ static enum skb_drop_reason icmp_echo(struct sk_buff *skb)
 
 bool icmp_build_probe(struct sk_buff *skb, struct icmphdr *icmphdr)
 {
+	struct net *net = dev_net_rcu(skb->dev);
 	struct icmp_ext_hdr *ext_hdr, _ext_hdr;
 	struct icmp_ext_echo_iio *iio, _iio;
-	struct net *net = dev_net(skb->dev);
 	struct inet6_dev *in6_dev;
 	struct in_device *in_dev;
 	struct net_device *dev;
@@ -1181,7 +1184,7 @@ static enum skb_drop_reason icmp_timestamp(struct sk_buff *skb)
 	return SKB_NOT_DROPPED_YET;
 
out_err:
-	__ICMP_INC_STATS(dev_net(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
+	__ICMP_INC_STATS(dev_net_rcu(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
 	return SKB_DROP_REASON_PKT_TOO_SMALL;
 }
 
@@ -1198,7 +1201,7 @@ int icmp_rcv(struct sk_buff *skb)
 {
 	enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
 	struct rtable *rt = skb_rtable(skb);
-	struct net *net = dev_net(rt->dst.dev);
+	struct net *net = dev_net_rcu(rt->dst.dev);
 	struct icmphdr *icmph;
 
 	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
@@ -1371,9 +1374,9 @@ int icmp_err(struct sk_buff *skb, u32 info)
 	struct iphdr *iph = (struct iphdr *)skb->data;
 	int offset = iph->ihl<<2;
 	struct icmphdr *icmph = (struct icmphdr *)(skb->data + offset);
+	struct net *net = dev_net_rcu(skb->dev);
 	int type = icmp_hdr(skb)->type;
 	int code = icmp_hdr(skb)->code;
-	struct net *net = dev_net(skb->dev);
 
 	/*
 	 * Use ping_err to handle all icmp errors except those
@@ -390,7 +390,13 @@ static inline int ip_rt_proc_init(void)
 
 static inline bool rt_is_expired(const struct rtable *rth)
 {
-	return rth->rt_genid != rt_genid_ipv4(dev_net(rth->dst.dev));
+	bool res;
+
+	rcu_read_lock();
+	res = rth->rt_genid != rt_genid_ipv4(dev_net_rcu(rth->dst.dev));
+	rcu_read_unlock();
+
+	return res;
 }
 
 void rt_cache_flush(struct net *net)
@@ -1002,9 +1008,9 @@ out:	kfree_skb_reason(skb, reason);
 static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 {
 	struct dst_entry *dst = &rt->dst;
-	struct net *net = dev_net(dst->dev);
 	struct fib_result res;
 	bool lock = false;
+	struct net *net;
 	u32 old_mtu;
 
 	if (ip_mtu_locked(dst))
@@ -1014,6 +1020,8 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 	if (old_mtu < mtu)
 		return;
 
+	rcu_read_lock();
+	net = dev_net_rcu(dst->dev);
 	if (mtu < net->ipv4.ip_rt_min_pmtu) {
 		lock = true;
 		mtu = min(old_mtu, net->ipv4.ip_rt_min_pmtu);
@@ -1021,9 +1029,8 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 
 	if (rt->rt_pmtu == mtu && !lock &&
 	    time_before(jiffies, dst->expires - net->ipv4.ip_rt_mtu_expires / 2))
-		return;
+		goto out;
 
-	rcu_read_lock();
 	if (fib_lookup(net, fl4, &res, 0) == 0) {
 		struct fib_nh_common *nhc;
 
@@ -1037,14 +1044,14 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 			update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
 					      jiffies + net->ipv4.ip_rt_mtu_expires);
 		}
-		rcu_read_unlock();
-		return;
+		goto out;
 	}
 #endif /* CONFIG_IP_ROUTE_MULTIPATH */
 		nhc = FIB_RES_NHC(res);
 		update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
 				      jiffies + net->ipv4.ip_rt_mtu_expires);
 	}
+out:
 	rcu_read_unlock();
 }
 
@@ -1307,10 +1314,15 @@ static void set_class_tag(struct rtable *rt, u32 tag)
 
 static unsigned int ipv4_default_advmss(const struct dst_entry *dst)
 {
-	struct net *net = dev_net(dst->dev);
 	unsigned int header_size = sizeof(struct tcphdr) + sizeof(struct iphdr);
-	unsigned int advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
-				    net->ipv4.ip_rt_min_advmss);
+	unsigned int advmss;
+	struct net *net;
+
+	rcu_read_lock();
+	net = dev_net_rcu(dst->dev);
+	advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
+		       net->ipv4.ip_rt_min_advmss);
+	rcu_read_unlock();
 
 	return min(advmss, IPV4_MAX_PMTU - header_size);
 }
@@ -76,7 +76,7 @@ static int icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 {
 	/* icmpv6_notify checks 8 bytes can be pulled, icmp6hdr is 8 bytes */
 	struct icmp6hdr *icmp6 = (struct icmp6hdr *) (skb->data + offset);
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 
 	if (type == ICMPV6_PKT_TOOBIG)
 		ip6_update_pmtu(skb, net, info, skb->dev->ifindex, 0, sock_net_uid(net, NULL));
@@ -473,7 +473,10 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 
 	if (!skb->dev)
 		return;
-	net = dev_net(skb->dev);
+
+	rcu_read_lock();
+
+	net = dev_net_rcu(skb->dev);
 	mark = IP6_REPLY_MARK(net, skb->mark);
 	/*
 	 *	Make sure we respect the rules
@@ -496,7 +499,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 		    !(type == ICMPV6_PARAMPROB &&
 		      code == ICMPV6_UNK_OPTION &&
 		      (opt_unrec(skb, info))))
-			return;
+			goto out;
 
 		saddr = NULL;
 	}
@@ -526,7 +529,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 	if ((addr_type == IPV6_ADDR_ANY) || (addr_type & IPV6_ADDR_MULTICAST)) {
 		net_dbg_ratelimited("icmp6_send: addr_any/mcast source [%pI6c > %pI6c]\n",
 				    &hdr->saddr, &hdr->daddr);
-		return;
+		goto out;
 	}
 
 	/*
@@ -535,7 +538,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 	if (is_ineligible(skb)) {
 		net_dbg_ratelimited("icmp6_send: no reply to icmp error [%pI6c > %pI6c]\n",
 				    &hdr->saddr, &hdr->daddr);
-		return;
+		goto out;
 	}
 
 	/* Needed by both icmpv6_global_allow and icmpv6_xmit_lock */
@@ -582,7 +585,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 	np = inet6_sk(sk);
 
 	if (!icmpv6_xrlim_allow(sk, type, &fl6, apply_ratelimit))
-		goto out;
+		goto out_unlock;
 
 	tmp_hdr.icmp6_type = type;
 	tmp_hdr.icmp6_code = code;
@@ -600,7 +603,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 
 	dst = icmpv6_route_lookup(net, skb, sk, &fl6);
 	if (IS_ERR(dst))
-		goto out;
+		goto out_unlock;
 
 	ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
 
@@ -616,7 +619,6 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 		goto out_dst_release;
 	}
 
-	rcu_read_lock();
 	idev = __in6_dev_get(skb->dev);
 
 	if (ip6_append_data(sk, icmpv6_getfrag, &msg,
@@ -630,13 +632,15 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
 		icmpv6_push_pending_frames(sk, &fl6, &tmp_hdr,
 					   len + sizeof(struct icmp6hdr));
 	}
-	rcu_read_unlock();
 
out_dst_release:
 	dst_release(dst);
-out:
+out_unlock:
 	icmpv6_xmit_unlock(sk);
out_bh_enable:
 	local_bh_enable();
+out:
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(icmp6_send);
 
@@ -679,8 +683,8 @@ int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
 	skb_pull(skb2, nhs);
 	skb_reset_network_header(skb2);
 
-	rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0,
-			skb, 0);
+	rt = rt6_lookup(dev_net_rcu(skb->dev), &ipv6_hdr(skb2)->saddr,
+			NULL, 0, skb, 0);
 
 	if (rt && rt->dst.dev)
 		skb2->dev = rt->dst.dev;
@@ -717,7 +721,7 @@ EXPORT_SYMBOL(ip6_err_gen_icmpv6_unreach);
 
 static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
 {
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 	struct sock *sk;
 	struct inet6_dev *idev;
 	struct ipv6_pinfo *np;
@@ -832,7 +836,7 @@ enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
 				   u8 code, __be32 info)
 {
 	struct inet6_skb_parm *opt = IP6CB(skb);
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 	const struct inet6_protocol *ipprot;
 	enum skb_drop_reason reason;
 	int inner_offset;
@@ -889,7 +893,7 @@ out:
 static int icmpv6_rcv(struct sk_buff *skb)
 {
 	enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 	struct net_device *dev = icmp6_dev(skb);
 	struct inet6_dev *idev = __in6_dev_get(dev);
 	const struct in6_addr *saddr, *daddr;
@@ -921,7 +925,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
 		skb_set_network_header(skb, nh);
 	}
 
-	__ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INMSGS);
+	__ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INMSGS);
 
 	saddr = &ipv6_hdr(skb)->saddr;
 	daddr = &ipv6_hdr(skb)->daddr;
@@ -939,7 +943,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
 
 	type = hdr->icmp6_type;
 
-	ICMP6MSGIN_INC_STATS(dev_net(dev), idev, type);
+	ICMP6MSGIN_INC_STATS(dev_net_rcu(dev), idev, type);
 
 	switch (type) {
 	case ICMPV6_ECHO_REQUEST:
@@ -1034,9 +1038,9 @@ static int icmpv6_rcv(struct sk_buff *skb)
 
csum_error:
 	reason = SKB_DROP_REASON_ICMP_CSUM;
-	__ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_CSUMERRORS);
+	__ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_CSUMERRORS);
discard_it:
-	__ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INERRORS);
+	__ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INERRORS);
drop_no_count:
 	kfree_skb_reason(skb, reason);
 	return 0;
@@ -477,9 +477,7 @@ discard:
 static int ip6_input_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
 	skb_clear_delivery_time(skb);
-	rcu_read_lock();
 	ip6_protocol_deliver_rcu(net, skb, 0, false);
-	rcu_read_unlock();
 
 	return 0;
 }
@@ -487,9 +485,15 @@ static int ip6_input_finish(struct net *net, struct sock *sk, struct sk_buff *sk
 
 int ip6_input(struct sk_buff *skb)
 {
-	return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_IN,
-		       dev_net(skb->dev), NULL, skb, skb->dev, NULL,
-		       ip6_input_finish);
+	int res;
+
+	rcu_read_lock();
+	res = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_IN,
+		      dev_net_rcu(skb->dev), NULL, skb, skb->dev, NULL,
+		      ip6_input_finish);
+	rcu_read_unlock();
+
+	return res;
 }
 EXPORT_SYMBOL_GPL(ip6_input);
@@ -1773,21 +1773,19 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
 	struct net_device *dev = idev->dev;
 	int hlen = LL_RESERVED_SPACE(dev);
 	int tlen = dev->needed_tailroom;
-	struct net *net = dev_net(dev);
 	const struct in6_addr *saddr;
 	struct in6_addr addr_buf;
 	struct mld2_report *pmr;
 	struct sk_buff *skb;
 	unsigned int size;
 	struct sock *sk;
-	int err;
+	struct net *net;
 
-	sk = net->ipv6.igmp_sk;
 	/* we assume size > sizeof(ra) here
 	 * Also try to not allocate high-order pages for big MTU
 	 */
 	size = min_t(int, mtu, PAGE_SIZE / 2) + hlen + tlen;
-	skb = sock_alloc_send_skb(sk, size, 1, &err);
+	skb = alloc_skb(size, GFP_KERNEL);
 	if (!skb)
 		return NULL;
 
@@ -1795,6 +1793,12 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
 	skb_reserve(skb, hlen);
 	skb_tailroom_reserve(skb, mtu, tlen);
 
+	rcu_read_lock();
+
+	net = dev_net_rcu(dev);
+	sk = net->ipv6.igmp_sk;
+	skb_set_owner_w(skb, sk);
+
 	if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) {
 		/* <draft-ietf-magma-mld-source-05.txt>:
 		 * use unspecified address as the source address
@@ -1806,6 +1810,8 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
 
 	ip6_mc_hdr(sk, skb, dev, saddr, &mld2_all_mcr, NEXTHDR_HOP, 0);
 
+	rcu_read_unlock();
+
 	skb_put_data(skb, ra, sizeof(ra));
 
 	skb_set_transport_header(skb, skb_tail_pointer(skb) - skb->data);
@@ -2165,21 +2171,21 @@ static void mld_send_cr(struct inet6_dev *idev)
 
 static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
 {
-	struct net *net = dev_net(dev);
-	struct sock *sk = net->ipv6.igmp_sk;
+	const struct in6_addr *snd_addr, *saddr;
+	int err, len, payload_len, full_len;
+	struct in6_addr addr_buf;
 	struct inet6_dev *idev;
 	struct sk_buff *skb;
 	struct mld_msg *hdr;
-	const struct in6_addr *snd_addr, *saddr;
-	struct in6_addr addr_buf;
 	int hlen = LL_RESERVED_SPACE(dev);
 	int tlen = dev->needed_tailroom;
-	int err, len, payload_len, full_len;
 	u8 ra[8] = { IPPROTO_ICMPV6, 0,
 		     IPV6_TLV_ROUTERALERT, 2, 0, 0,
 		     IPV6_TLV_PADN, 0 };
-	struct flowi6 fl6;
 	struct dst_entry *dst;
+	struct flowi6 fl6;
+	struct net *net;
+	struct sock *sk;
 
 	if (type == ICMPV6_MGM_REDUCTION)
 		snd_addr = &in6addr_linklocal_allrouters;
@@ -2190,19 +2196,21 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
 	payload_len = len + sizeof(ra);
 	full_len = sizeof(struct ipv6hdr) + payload_len;
 
+	skb = alloc_skb(hlen + tlen + full_len, GFP_KERNEL);
+
 	rcu_read_lock();
-	IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_OUTREQUESTS);
-	rcu_read_unlock();
-
-	skb = sock_alloc_send_skb(sk, hlen + tlen + full_len, 1, &err);
 
+	net = dev_net_rcu(dev);
+	idev = __in6_dev_get(dev);
+	IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
 	if (!skb) {
-		rcu_read_lock();
-		IP6_INC_STATS(net, __in6_dev_get(dev),
-			      IPSTATS_MIB_OUTDISCARDS);
+		IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
 		rcu_read_unlock();
 		return;
 	}
+	sk = net->ipv6.igmp_sk;
+	skb_set_owner_w(skb, sk);
 
 	skb->priority = TC_PRIO_CONTROL;
 	skb_reserve(skb, hlen);
@@ -2227,9 +2235,6 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
 					 IPPROTO_ICMPV6,
 					 csum_partial(hdr, len, 0));
 
-	rcu_read_lock();
-	idev = __in6_dev_get(skb->dev);
-
 	icmpv6_flow_init(sk, &fl6, type,
 			 &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr,
 			 skb->dev->ifindex);
@@ -418,15 +418,11 @@ static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
 {
 	int hlen = LL_RESERVED_SPACE(dev);
 	int tlen = dev->needed_tailroom;
-	struct sock *sk = dev_net(dev)->ipv6.ndisc_sk;
 	struct sk_buff *skb;
 
 	skb = alloc_skb(hlen + sizeof(struct ipv6hdr) + len + tlen, GFP_ATOMIC);
-	if (!skb) {
-		ND_PRINTK(0, err, "ndisc: %s failed to allocate an skb\n",
-			  __func__);
+	if (!skb)
 		return NULL;
-	}
 
 	skb->protocol = htons(ETH_P_IPV6);
 	skb->dev = dev;
@@ -437,7 +433,9 @@ static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
 	/* Manually assign socket ownership as we avoid calling
 	 * sock_alloc_send_pskb() to bypass wmem buffer limits
 	 */
-	skb_set_owner_w(skb, sk);
+	rcu_read_lock();
+	skb_set_owner_w(skb, dev_net_rcu(dev)->ipv6.ndisc_sk);
+	rcu_read_unlock();
 
 	return skb;
 }
@@ -473,16 +471,20 @@ static void ip6_nd_hdr(struct sk_buff *skb,
 void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
 		    const struct in6_addr *saddr)
 {
-	struct dst_entry *dst = skb_dst(skb);
-	struct net *net = dev_net(skb->dev);
-	struct sock *sk = net->ipv6.ndisc_sk;
-	struct inet6_dev *idev;
-	int err;
 	struct icmp6hdr *icmp6h = icmp6_hdr(skb);
+	struct dst_entry *dst = skb_dst(skb);
+	struct inet6_dev *idev;
+	struct net *net;
+	struct sock *sk;
+	int err;
 	u8 type;
 
 	type = icmp6h->icmp6_type;
 
+	rcu_read_lock();
+
+	net = dev_net_rcu(skb->dev);
+	sk = net->ipv6.ndisc_sk;
 	if (!dst) {
 		struct flowi6 fl6;
 		int oif = skb->dev->ifindex;
@@ -490,6 +492,7 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
 		icmpv6_flow_init(sk, &fl6, type, saddr, daddr, oif);
 		dst = icmp6_dst_alloc(skb->dev, &fl6);
 		if (IS_ERR(dst)) {
+			rcu_read_unlock();
 			kfree_skb(skb);
 			return;
 		}
@@ -504,7 +507,6 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
 
 	ip6_nd_hdr(skb, saddr, daddr, READ_ONCE(inet6_sk(sk)->hop_limit), skb->len);
 
-	rcu_read_lock();
 	idev = __in6_dev_get(dst->dev);
 	IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
 
@@ -1694,7 +1696,7 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
 	bool ret;
 
 	if (netif_is_l3_master(skb->dev)) {
-		dev = __dev_get_by_index(dev_net(skb->dev), IPCB(skb)->iif);
+		dev = dev_get_by_index_rcu(dev_net(skb->dev), IPCB(skb)->iif);
 		if (!dev)
 			return;
 	}
@@ -3196,13 +3196,18 @@ static unsigned int ip6_default_advmss(const struct dst_entry *dst)
 {
 	struct net_device *dev = dst->dev;
 	unsigned int mtu = dst_mtu(dst);
-	struct net *net = dev_net(dev);
+	struct net *net;
 
 	mtu -= sizeof(struct ipv6hdr) + sizeof(struct tcphdr);
 
+	rcu_read_lock();
+
+	net = dev_net_rcu(dev);
 	if (mtu < net->ipv6.sysctl.ip6_rt_min_advmss)
 		mtu = net->ipv6.sysctl.ip6_rt_min_advmss;
 
+	rcu_read_unlock();
+
 	/*
 	 * Maximal non-jumbo IPv6 payload is IPV6_MAXPLEN and
 	 * corresponding MSS is IPV6_MAXPLEN - tcp_header_size.
@@ -381,10 +381,8 @@ static int nf_flow_offload_forward(struct nf_flowtable_ctx *ctx,
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 
 	mtu = flow->tuplehash[dir].tuple.mtu + ctx->offset;
-	if (unlikely(nf_flow_exceeds_mtu(skb, mtu))) {
-		flow_offload_teardown(flow);
+	if (unlikely(nf_flow_exceeds_mtu(skb, mtu)))
 		return 0;
-	}
 
 	iph = (struct iphdr *)(skb_network_header(skb) + ctx->offset);
 	thoff = (iph->ihl * 4) + ctx->offset;
@@ -662,10 +660,8 @@ static int nf_flow_offload_ipv6_forward(struct nf_flowtable_ctx *ctx,
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 
 	mtu = flow->tuplehash[dir].tuple.mtu + ctx->offset;
-	if (unlikely(nf_flow_exceeds_mtu(skb, mtu))) {
-		flow_offload_teardown(flow);
+	if (unlikely(nf_flow_exceeds_mtu(skb, mtu)))
 		return 0;
-	}
 
 	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + ctx->offset);
 	thoff = sizeof(*ip6h) + ctx->offset;
@@ -2101,6 +2101,7 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
 {
 	struct ovs_header *ovs_header;
 	struct ovs_vport_stats vport_stats;
+	struct net *net_vport;
 	int err;
 
 	ovs_header = genlmsg_put(skb, portid, seq, &dp_vport_genl_family,
@@ -2117,12 +2118,15 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
 	    nla_put_u32(skb, OVS_VPORT_ATTR_IFINDEX, vport->dev->ifindex))
 		goto nla_put_failure;
 
-	if (!net_eq(net, dev_net(vport->dev))) {
-		int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);
+	rcu_read_lock();
+	net_vport = dev_net_rcu(vport->dev);
+	if (!net_eq(net, net_vport)) {
+		int id = peernet2id_alloc(net, net_vport, GFP_ATOMIC);
 
 		if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))
-			goto nla_put_failure;
+			goto nla_put_failure_unlock;
 	}
+	rcu_read_unlock();
 
 	ovs_vport_get_stats(vport, &vport_stats);
 	if (nla_put_64bit(skb, OVS_VPORT_ATTR_STATS,
@@ -2143,6 +2147,8 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
 	genlmsg_end(skb, ovs_header);
 	return 0;
 
+nla_put_failure_unlock:
+	rcu_read_unlock();
nla_put_failure:
 	err = -EMSGSIZE;
error:
@@ -327,8 +327,8 @@ struct rxrpc_local {
 	 * packet with a maximum set of jumbo subpackets or a PING ACK padded
 	 * out to 64K with zeropages for PMTUD.
 	 */
-	struct kvec		kvec[RXRPC_MAX_NR_JUMBO > 3 + 16 ?
-				     RXRPC_MAX_NR_JUMBO : 3 + 16];
+	struct kvec		kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ?
+				     1 + RXRPC_MAX_NR_JUMBO : 3 + 16];
 };
 
 /*
@@ -874,8 +874,7 @@ struct rxrpc_txbuf {
 #define RXRPC_TXBUF_RESENT	0x100		/* Set if has been resent */
 	__be16			cksum;		/* Checksum to go in header */
 	bool			jumboable;	/* Can be non-terminal jumbo subpacket */
-	u8			nr_kvec;	/* Amount of kvec[] used */
-	struct kvec		kvec[1];
+	void			*data;		/* Data with preceding jumbo header */
 };
 
 static inline bool rxrpc_sending_to_server(const struct rxrpc_txbuf *txb)
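Note: the kvec[] resize above reserves one leading slot for the now separately-allocated wire header ahead of up to RXRPC_MAX_NR_JUMBO subpacket slots, while still covering the 3 + 16 entry ACK/PMTUD user, via a constant ternary. The compile-time max idiom on its own (constants here are illustrative):

	#include <linux/uio.h>

	#define DEMO_NR_JUMBO	29
	#define DEMO_NR_ACK	(3 + 16)

	/* Sized for whichever user needs more entries; the ternary folds
	 * to a constant at compile time. */
	struct demo_local {
		struct kvec kvec[1 + DEMO_NR_JUMBO > DEMO_NR_ACK ?
				 1 + DEMO_NR_JUMBO : DEMO_NR_ACK];
	};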
@@ -428,13 +428,13 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call)
 static size_t rxrpc_prepare_data_subpacket(struct rxrpc_call *call,
 					   struct rxrpc_send_data_req *req,
 					   struct rxrpc_txbuf *txb,
+					   struct rxrpc_wire_header *whdr,
 					   rxrpc_serial_t serial, int subpkt)
 {
-	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
-	struct rxrpc_jumbo_header *jumbo = (void *)(whdr + 1) - sizeof(*jumbo);
+	struct rxrpc_jumbo_header *jumbo = txb->data - sizeof(*jumbo);
 	enum rxrpc_req_ack_trace why;
 	struct rxrpc_connection *conn = call->conn;
-	struct kvec *kv = &call->local->kvec[subpkt];
+	struct kvec *kv = &call->local->kvec[1 + subpkt];
 	size_t len = txb->pkt_len;
 	bool last;
 	u8 flags;
@@ -491,18 +491,15 @@ static size_t rxrpc_prepare_data_subpacket(struct rxrpc_call *call,
 	}
dont_set_request_ack:
 
-	/* The jumbo header overlays the wire header in the txbuf. */
+	/* There's a jumbo header prepended to the data if we need it. */
 	if (subpkt < req->n - 1)
 		flags |= RXRPC_JUMBO_PACKET;
 	else
 		flags &= ~RXRPC_JUMBO_PACKET;
 	if (subpkt == 0) {
 		whdr->flags	= flags;
-		whdr->serial	= htonl(txb->serial);
 		whdr->cksum	= txb->cksum;
-		whdr->serviceId	= htons(conn->service_id);
-		kv->iov_base	= whdr;
-		len += sizeof(*whdr);
+		kv->iov_base	= txb->data;
 	} else {
 		jumbo->flags	= flags;
 		jumbo->pad	= 0;
@@ -535,7 +532,9 @@ static unsigned int rxrpc_prepare_txqueue(struct rxrpc_txqueue *tq,
 /*
  * Prepare a (jumbo) packet for transmission.
  */
-static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req *req)
+static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call,
+					struct rxrpc_send_data_req *req,
+					struct rxrpc_wire_header *whdr)
 {
 	struct rxrpc_txqueue *tq = req->tq;
 	rxrpc_serial_t serial;
@@ -549,6 +548,18 @@ static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_se
 	/* Each transmission of a Tx packet needs a new serial number */
 	serial = rxrpc_get_next_serials(call->conn, req->n);
 
+	whdr->epoch		= htonl(call->conn->proto.epoch);
+	whdr->cid		= htonl(call->cid);
+	whdr->callNumber	= htonl(call->call_id);
+	whdr->seq		= htonl(seq);
+	whdr->serial		= htonl(serial);
+	whdr->type		= RXRPC_PACKET_TYPE_DATA;
+	whdr->flags		= 0;
+	whdr->userStatus	= 0;
+	whdr->securityIndex	= call->security_ix;
+	whdr->_rsvd		= 0;
+	whdr->serviceId		= htons(call->conn->service_id);
+
 	call->tx_last_serial = serial + req->n - 1;
 	call->tx_last_sent = req->now;
 	xmit_ts = rxrpc_prepare_txqueue(tq, req);
@@ -576,7 +587,7 @@ static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_se
 		if (i + 1 == req->n)
 			/* Only sample the last subpacket in a jumbo. */
 			__set_bit(ix, &tq->rtt_samples);
-		len += rxrpc_prepare_data_subpacket(call, req, txb, serial, i);
+		len += rxrpc_prepare_data_subpacket(call, req, txb, whdr, serial, i);
 		serial++;
 		seq++;
 		i++;
@@ -618,6 +629,7 @@ static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_se
 	}
 
 	rxrpc_set_keepalive(call, req->now);
+	page_frag_free(whdr);
 	return len;
 }
 
@@ -626,25 +638,33 @@ static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_se
 */
 void rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req *req)
 {
+	struct rxrpc_wire_header *whdr;
 	struct rxrpc_connection *conn = call->conn;
 	enum rxrpc_tx_point frag;
 	struct rxrpc_txqueue *tq = req->tq;
 	struct rxrpc_txbuf *txb;
 	struct msghdr msg;
 	rxrpc_seq_t seq = req->seq;
-	size_t len;
+	size_t len = sizeof(*whdr);
 	bool new_call = test_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags);
 	int ret, stat_ix;
 
 	_enter("%x,%x-%x", tq->qbase, seq, seq + req->n - 1);
 
+	whdr = page_frag_alloc(&call->local->tx_alloc, sizeof(*whdr), GFP_NOFS);
+	if (!whdr)
+		return;	/* Drop the packet if no memory. */
+
+	call->local->kvec[0].iov_base = whdr;
+	call->local->kvec[0].iov_len = sizeof(*whdr);
+
 	stat_ix = umin(req->n, ARRAY_SIZE(call->rxnet->stat_tx_jumbo)) - 1;
 	atomic_inc(&call->rxnet->stat_tx_jumbo[stat_ix]);
 
-	len = rxrpc_prepare_data_packet(call, req);
+	len += rxrpc_prepare_data_packet(call, req, whdr);
 	txb = tq->bufs[seq & RXRPC_TXQ_MASK];
 
-	iov_iter_kvec(&msg.msg_iter, WRITE, call->local->kvec, req->n, len);
+	iov_iter_kvec(&msg.msg_iter, WRITE, call->local->kvec, 1 + req->n, len);
 
 	msg.msg_name = &call->peer->srx.transport;
 	msg.msg_namelen = call->peer->srx.transport_len;
@@ -695,13 +715,13 @@ void rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req
 
 	if (ret == -EMSGSIZE) {
 		rxrpc_inc_stat(call->rxnet, stat_tx_data_send_msgsize);
-		trace_rxrpc_tx_packet(call->debug_id, call->local->kvec[0].iov_base, frag);
+		trace_rxrpc_tx_packet(call->debug_id, whdr, frag);
 		ret = 0;
 	} else if (ret < 0) {
 		rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail);
 		trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, frag);
 	} else {
-		trace_rxrpc_tx_packet(call->debug_id, call->local->kvec[0].iov_base, frag);
+		trace_rxrpc_tx_packet(call->debug_id, whdr, frag);
 	}
 
 	rxrpc_tx_backoff(call, ret);
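Note: the output.c changes above move the wire header out of the txbuf into a per-transmission page-frag allocation, because with zerocopy transmission the memory backing a still-queued skb must not be rewritten; retransmissions therefore build a fresh header each time. A minimal sketch of that allocation pattern (demo_hdr and the cache argument are assumed names, not the rxrpc API):

	#include <linux/gfp.h>
	#include <linux/mm_types.h>
	#include <linux/types.h>

	struct demo_hdr {
		__be32 serial;
	};

	/* One header per transmission: the in-flight skb keeps its own page
	 * references, so page_frag_free() after queuing only drops ours. */
	static struct demo_hdr *demo_alloc_hdr(struct page_frag_cache *cache)
	{
		return page_frag_alloc(cache, sizeof(struct demo_hdr), GFP_NOFS);
	}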
@@ -169,6 +169,13 @@ void rxrpc_input_error(struct rxrpc_local *local, struct sk_buff *skb)
 		goto out;
 	}
 
+	if ((serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6 &&
+	     serr->ee.ee_type == ICMPV6_PKT_TOOBIG &&
+	     serr->ee.ee_code == 0)) {
+		rxrpc_adjust_mtu(peer, serr->ee.ee_info);
+		goto out;
+	}
+
 	rxrpc_store_error(peer, skb);
out:
 	rxrpc_put_peer(peer, rxrpc_peer_put_input_error);
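Note: the peer_event.c addition mirrors the existing IPv4 ICMP fragmentation-needed handling for IPv6: an ICMPV6_PKT_TOOBIG extended error carries the path MTU in ee_info and is now fed to rxrpc_adjust_mtu() instead of being recorded as a fatal error, which is what makes IPv6 path MTU discovery work. Sketch of recognizing such an error from a socket error queue (illustrative, abbreviated):

	#include <linux/errqueue.h>
	#include <linux/icmpv6.h>

	/* A PTB report arrives as origin ICMP6 / type PKT_TOOBIG with the
	 * new MTU in ee_info (assumes "serr" was parsed from MSG_ERRQUEUE). */
	static int demo_is_ptb(const struct sock_extended_err *serr)
	{
		return serr->ee_origin == SO_EE_ORIGIN_ICMP6 &&
		       serr->ee_type == ICMPV6_PKT_TOOBIG &&
		       serr->ee_code == 0;
	}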
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -257,8 +257,7 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
 				    struct rxrpc_txbuf *txb,
 				    struct skcipher_request *req)
 {
-	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
-	struct rxkad_level1_hdr *hdr = (void *)(whdr + 1);
+	struct rxkad_level1_hdr *hdr = txb->data;
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
 	size_t pad;
@@ -274,7 +273,7 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
 	pad = RXKAD_ALIGN - pad;
 	pad &= RXKAD_ALIGN - 1;
 	if (pad) {
-		memset(txb->kvec[0].iov_base + txb->offset, 0, pad);
+		memset(txb->data + txb->offset, 0, pad);
 		txb->pkt_len += pad;
 	}
 
@@ -300,8 +299,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
 					struct skcipher_request *req)
 {
 	const struct rxrpc_key_token *token;
-	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
-	struct rxkad_level2_hdr *rxkhdr = (void *)(whdr + 1);
+	struct rxkad_level2_hdr *rxkhdr = txb->data;
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
 	size_t content, pad;
@@ -319,7 +317,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
 	txb->pkt_len = round_up(content, RXKAD_ALIGN);
 	pad = txb->pkt_len - content;
 	if (pad)
-		memset(txb->kvec[0].iov_base + txb->offset, 0, pad);
+		memset(txb->data + txb->offset, 0, pad);
 
 	/* encrypt from the session key */
 	token = call->conn->key->payload.data[0];
@@ -407,9 +405,8 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
 
 	/* Clear excess space in the packet */
 	if (txb->pkt_len < txb->alloc_size) {
-		struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
 		size_t gap = txb->alloc_size - txb->pkt_len;
-		void *p = whdr + 1;
+		void *p = txb->data;
 
 		memset(p + txb->pkt_len, 0, gap);
 	}
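All five rxkad hunks are mechanical: the security layer used to reach the payload by skipping past the wire header stored in kvec[0], and now uses the txb->data pointer directly; the RXKAD_ALIGN padding arithmetic is unchanged. That two-step pad computation is the usual branch-free "distance to the next power-of-two boundary" idiom, sketched below with an assumed alignment of 8.

/*
 * Sketch: the rxkad padding idiom.  Given len % ALIGN in `pad`,
 * `ALIGN - pad` then masking with `ALIGN - 1` yields the bytes needed
 * to reach the next boundary, and maps "already aligned" back to 0
 * without a branch.  ALIGN must be a power of two; 8 is assumed here.
 */
#include <assert.h>
#include <stddef.h>

#define DEMO_ALIGN 8

static size_t pad_to_align(size_t len)
{
	size_t pad = len & (DEMO_ALIGN - 1);	/* len % DEMO_ALIGN */

	pad = DEMO_ALIGN - pad;			/* in 1..DEMO_ALIGN */
	pad &= DEMO_ALIGN - 1;			/* DEMO_ALIGN becomes 0 */
	return pad;
}

int main(void)
{
	assert(pad_to_align(0) == 0);
	assert(pad_to_align(1) == 7);
	assert(pad_to_align(8) == 0);
	assert(pad_to_align(13) == 3);
	return 0;
}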
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -419,7 +419,7 @@ reload:
 			size_t copy = umin(txb->space, msg_data_left(msg));
 
 			_debug("add %zu", copy);
-			if (!copy_from_iter_full(txb->kvec[0].iov_base + txb->offset,
+			if (!copy_from_iter_full(txb->data + txb->offset,
 						 copy, &msg->msg_iter))
 				goto efault;
 			_debug("added");
@@ -445,8 +445,6 @@ reload:
 			ret = call->security->secure_packet(call, txb);
 			if (ret < 0)
 				goto out;
 
-			txb->kvec[0].iov_len += txb->len;
-
 			rxrpc_queue_packet(rx, call, txb, notify_end_tx);
 			txb = NULL;
 		}
--- a/net/rxrpc/txbuf.c
+++ b/net/rxrpc/txbuf.c
@@ -19,17 +19,19 @@ atomic_t rxrpc_nr_txbuf;
 struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_size,
 					   size_t data_align, gfp_t gfp)
 {
-	struct rxrpc_wire_header *whdr;
 	struct rxrpc_txbuf *txb;
-	size_t total, hoff;
+	size_t total, doff, jsize = sizeof(struct rxrpc_jumbo_header);
 	void *buf;
 
 	txb = kzalloc(sizeof(*txb), gfp);
 	if (!txb)
 		return NULL;
 
-	hoff = round_up(sizeof(*whdr), data_align) - sizeof(*whdr);
-	total = hoff + sizeof(*whdr) + data_size;
+	/* We put a jumbo header in the buffer, but not a full wire header to
+	 * avoid delayed-corruption problems with zerocopy.
+	 */
+	doff = round_up(jsize, data_align);
+	total = doff + data_size;
 
 	data_align = umax(data_align, L1_CACHE_BYTES);
 	mutex_lock(&call->conn->tx_data_alloc_lock);
@@ -41,30 +43,15 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_
 		return NULL;
 	}
 
-	whdr = buf + hoff;
-
 	refcount_set(&txb->ref, 1);
 	txb->call_debug_id = call->debug_id;
 	txb->debug_id = atomic_inc_return(&rxrpc_txbuf_debug_ids);
 	txb->alloc_size = data_size;
 	txb->space = data_size;
-	txb->offset = sizeof(*whdr);
+	txb->offset = 0;
 	txb->flags = call->conn->out_clientflag;
 	txb->seq = call->send_top + 1;
-	txb->nr_kvec = 1;
-	txb->kvec[0].iov_base = whdr;
-	txb->kvec[0].iov_len = sizeof(*whdr);
-
-	whdr->epoch = htonl(call->conn->proto.epoch);
-	whdr->cid = htonl(call->cid);
-	whdr->callNumber = htonl(call->call_id);
-	whdr->seq = htonl(txb->seq);
-	whdr->type = RXRPC_PACKET_TYPE_DATA;
-	whdr->flags = 0;
-	whdr->userStatus = 0;
-	whdr->securityIndex = call->security_ix;
-	whdr->_rsvd = 0;
-	whdr->serviceId = htons(call->dest_srx.srx_service);
+	txb->data = buf + doff;
 
 	trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 1,
 			  rxrpc_txbuf_alloc_data);
@@ -90,14 +77,10 @@ void rxrpc_see_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what)
 
 static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb)
 {
-	int i;
-
 	trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 0,
 			  rxrpc_txbuf_free);
-	for (i = 0; i < txb->nr_kvec; i++)
-		if (txb->kvec[i].iov_base &&
-		    !is_zero_pfn(page_to_pfn(virt_to_page(txb->kvec[i].iov_base))))
-			page_frag_free(txb->kvec[i].iov_base);
+	if (txb->data)
+		page_frag_free(txb->data);
 	kfree(txb);
 	atomic_dec(&rxrpc_nr_txbuf);
 }
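After this change a txbuf carries only the data, preceded by room for a jumbo header at an aligned offset; the wire header lives in a separate per-send allocation, which is what makes it safe to rewrite while an earlier zerocopy transmission still holds the data pages. A small sketch of the layout computation follows; the 4-byte jumbo-header size and the power-of-two alignment are assumptions for illustration.

/*
 * Sketch: the txbuf layout computed above.  The data sits at an
 * aligned offset big enough for a jumbo header, and the wire header
 * is no longer part of this buffer at all.  jsize = 4 and the
 * power-of-two alignment are assumptions for illustration.
 */
#include <stddef.h>
#include <stdio.h>

static size_t round_up_pow2(size_t x, size_t align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	size_t jsize = 4;		/* assumed jumbo header size */
	size_t data_align = 16;
	size_t data_size = 1412;
	size_t doff = round_up_pow2(jsize, data_align);
	size_t total = doff + data_size;

	printf("data offset %zu, allocation %zu bytes\n", doff, total);
	return 0;
}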
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -824,13 +824,19 @@ static void __vsock_release(struct sock *sk, int level)
 	 */
 	lock_sock_nested(sk, level);
 
-	sock_orphan(sk);
+	/* Indicate to vsock_remove_sock() that the socket is being released and
+	 * can be removed from the bound_table. Unlike transport reassignment
+	 * case, where the socket must remain bound despite vsock_remove_sock()
+	 * being called from the transport release() callback.
+	 */
+	sock_set_flag(sk, SOCK_DEAD);
+
 	if (vsk->transport)
 		vsk->transport->release(vsk);
 	else if (sock_type_connectible(sk->sk_type))
 		vsock_remove_sock(vsk);
 
+	sock_orphan(sk);
 	sk->sk_shutdown = SHUTDOWN_MASK;
 
 	skb_queue_purge(&sk->sk_receive_queue);
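The ordering is the whole fix: sock_orphan() detaches the socket from its owner, clearing sk->sk_socket and the associated wait queue, yet with SO_LINGER armed the transport's release() may still sleep on that wait queue, so orphaning first led to the null pointer dereference the selftest below reproduces. A toy model of the rule, with entirely hypothetical names: mark the object dead, run release while the owner link is intact, then sever the link.

/*
 * Toy model of the ordering rule; all names are hypothetical and this
 * only illustrates why sock_orphan() moved below transport->release().
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct owner {
	int wait_queue;			/* like the socket's wait queue */
};

struct sock_like {
	bool dead;			/* like SOCK_DEAD */
	struct owner *owner;		/* like sk->sk_socket / sk->sk_wq */
};

/* Like a lingering transport release(): may still need the owner's
 * wait queue to sleep until data is sent or the timeout expires. */
static void transport_release(struct sock_like *s)
{
	assert(s->owner != NULL);	/* NULL deref if orphaned first */
	(void)s->owner->wait_queue;
}

int main(void)
{
	struct owner o = { 0 };
	struct sock_like s = { .dead = false, .owner = &o };

	s.dead = true;			/* removable from lookup tables */
	transport_release(&s);		/* owner still attached */
	s.owner = NULL;			/* sock_orphan() equivalent */
	return 0;
}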
--- a/tools/testing/vsock/vsock_test.c
+++ b/tools/testing/vsock/vsock_test.c
@@ -1788,6 +1788,42 @@ static void test_stream_connect_retry_server(const struct test_opts *opts)
 	close(fd);
 }
 
+static void test_stream_linger_client(const struct test_opts *opts)
+{
+	struct linger optval = {
+		.l_onoff = 1,
+		.l_linger = 1
+	};
+	int fd;
+
+	fd = vsock_stream_connect(opts->peer_cid, opts->peer_port);
+	if (fd < 0) {
+		perror("connect");
+		exit(EXIT_FAILURE);
+	}
+
+	if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &optval, sizeof(optval))) {
+		perror("setsockopt(SO_LINGER)");
+		exit(EXIT_FAILURE);
+	}
+
+	close(fd);
+}
+
+static void test_stream_linger_server(const struct test_opts *opts)
+{
+	int fd;
+
+	fd = vsock_stream_accept(VMADDR_CID_ANY, opts->peer_port, NULL);
+	if (fd < 0) {
+		perror("accept");
+		exit(EXIT_FAILURE);
+	}
+
+	vsock_wait_remote_close(fd);
+	close(fd);
+}
+
 static struct test_case test_cases[] = {
 	{
 		.name = "SOCK_STREAM connection reset",
@@ -1943,6 +1979,11 @@ static struct test_case test_cases[] = {
 		.run_client = test_stream_connect_retry_client,
 		.run_server = test_stream_connect_retry_server,
 	},
+	{
+		.name = "SOCK_STREAM SO_LINGER null-ptr-deref",
+		.run_client = test_stream_linger_client,
+		.run_server = test_stream_linger_server,
+	},
 	{},
 };