Merge tag 'net-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from IPSec, netfilter and Bluetooth.

  Nothing really stands out, but as usual there's a slight concentration
  of fixes for issues added in the last two weeks before the merge
  window, and driver bugs from 6.13 which tend to get discovered upon
  wider distribution.

  Current release - regressions:

   - net: revert RTNL changes in unregister_netdevice_many_notify()

   - Bluetooth: fix possible infinite recursion of btusb_reset

   - eth: adjust locking in some old drivers which protect their state
     with spinlocks to avoid sleeping in atomic; core protects netdev
     state with a mutex now

  Previous releases - regressions:

   - eth:
      - mlx5e: make sure we pass node ID, not CPU ID to kvzalloc_node()
      - bgmac: reduce max frame size to support just 1500 bytes; the
        jumbo frame support would previously cause OOB writes, but now
        fails outright

   - mptcp: blackhole only if 1st SYN retrans w/o MPC is accepted, avoid
     false detection of MPTCP blackholing

  Previous releases - always broken:

   - mptcp: handle fastopen disconnect correctly

   - xfrm:
      - make sure skb->sk is a full sock before accessing its fields
      - fix taking a lock with preempt disabled for RT kernels

   - usb: ipheth: improve safety of packet metadata parsing; prevent
     potential OOB accesses

   - eth: renesas: fix missing rtnl lock in suspend/resume path"
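
Several of the driver hunks below (tg3, forcedeth, 8139too, niu, via-rhine, mvneta) apply the same pattern for the netdev instance lock mentioned in the locking bullet above. A minimal hedged sketch of that pattern — the function and its parameters are illustrative, not code from any one driver:

#include <linux/netdevice.h>

/* Illustrative sketch only: paths that enable NAPI outside of
 * ndo_open/ndo_close now take the netdev instance lock (a mutex,
 * so the caller must be able to sleep) and use the _locked NAPI
 * helpers instead of calling napi_enable() under a spinlock.
 */
static void example_restart_napi(struct net_device *dev,
				 struct napi_struct *napi)
{
	netdev_lock(dev);		/* instance lock is a mutex; may sleep */
	napi_enable_locked(napi);	/* variant for callers holding the lock */
	netdev_unlock(dev);
}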

* tag 'net-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (88 commits)
  MAINTAINERS: add Neal to TCP maintainers
  net: revert RTNL changes in unregister_netdevice_many_notify()
  net: hsr: fix fill_frame_info() regression vs VLAN packets
  doc: mptcp: sysctl: blackhole_timeout is per-netns
  mptcp: blackhole only if 1st SYN retrans w/o MPC is accepted
  netfilter: nf_tables: reject mismatching sum of field_len with set key length
  net: sh_eth: Fix missing rtnl lock in suspend/resume path
  net: ravb: Fix missing rtnl lock in suspend/resume path
  selftests/net: Add test for loading devbound XDP program in generic mode
  net: xdp: Disallow attaching device-bound programs in generic mode
  tcp: correct handling of extreme memory squeeze
  bgmac: reduce max frame size to support just MTU 1500
  vsock/test: Add test for connect() retries
  vsock/test: Add test for UAF due to socket unbinding
  vsock/test: Introduce vsock_connect_fd()
  vsock/test: Introduce vsock_bind()
  vsock: Allow retrying on connect() failure
  vsock: Keep the binding until socket destruction
  Bluetooth: L2CAP: accept zero as a special value for MTU auto-selection
  Bluetooth: btnxpuart: Fix glitches seen in dual A2DP streaming
  ...
Linus Torvalds 2025-01-30 12:24:20 -08:00
commit c2933b2bef
103 changed files with 877 additions and 420 deletions

@ -0,0 +1,9 @@
What: /sys/class/bluetooth/hci<index>/reset
Date: 14-Jan-2025
KernelVersion: 6.13
Contact: linux-bluetooth@vger.kernel.org
Description: This write-only attribute allows users to trigger the vendor reset
method on the Bluetooth device when arbitrary data is written.
The reset may or may not be done through the device transport
(e.g., UART/USB), and can also be done through an out-of-band
approach such as GPIO.
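
For illustration, a hedged userspace sketch of driving this attribute; hci0 is an example index, and per the description above any written data triggers the reset:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* Example path; substitute the controller's actual index. */
	int fd = open("/sys/class/bluetooth/hci0/reset", O_WRONLY);

	if (fd < 0)
		return 1;
	write(fd, "1", 1);	/* data is arbitrary; the write triggers the reset */
	close(fd);
	return 0;
}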


@ -22,12 +22,12 @@ properties:
oneOf:
- items:
- enum:
- qcom,qcs8300-ethqos
- const: qcom,sa8775p-ethqos
- qcom,qcs615-ethqos
- const: qcom,qcs404-ethqos
- items:
- enum:
- qcom,qcs615-ethqos
- const: qcom,sm8150-ethqos
- qcom,qcs8300-ethqos
- const: qcom,sa8775p-ethqos
- enum:
- qcom,qcs404-ethqos
- qcom,sa8775p-ethqos


@ -699,10 +699,10 @@ RAW socket option CAN_RAW_JOIN_FILTERS
The CAN_RAW socket can set multiple CAN identifier specific filters that
lead to multiple filters in the af_can.c filter processing. These filters
are indenpendent from each other which leads to logical OR'ed filters when
are independent from each other which leads to logical OR'ed filters when
applied (see :ref:`socketcan-rawfilter`).
This socket option joines the given CAN filters in the way that only CAN
This socket option joins the given CAN filters in the way that only CAN
frames are passed to user space that matched *all* given CAN filters. The
semantic for the applied filters is therefore changed to a logical AND.
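
A hedged userspace sketch of the AND semantics described above; the filter values are examples only:

#include <linux/can.h>
#include <linux/can/raw.h>
#include <sys/socket.h>

int can_join_filters_example(void)
{
	struct can_filter flt[2] = {
		/* IDs 0x200-0x2ff ... */
		{ .can_id = 0x200, .can_mask = 0x700 },
		/* ... AND not 0x2ff (inverted filter) */
		{ .can_id = 0x2ff | CAN_INV_FILTER, .can_mask = CAN_SFF_MASK },
	};
	int join = 1;
	int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

	if (s < 0)
		return -1;
	/* Without CAN_RAW_JOIN_FILTERS the two filters are OR'ed; with it,
	 * a frame must match *all* of them before reaching user space.
	 */
	setsockopt(s, SOL_CAN_RAW, CAN_RAW_FILTER, &flt, sizeof(flt));
	setsockopt(s, SOL_CAN_RAW, CAN_RAW_JOIN_FILTERS, &join, sizeof(join));
	return s;
}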


@ -41,7 +41,7 @@ blackhole_timeout - INTEGER (seconds)
MPTCP is re-enabled and will reset to the initial value when the
blackhole issue goes away.
0 to disable the blackhole detection.
0 to disable the blackhole detection. This is a per-namespace sysctl.
Default: 3600


@ -362,7 +362,7 @@ It is expected that ``irq-suspend-timeout`` will be set to a value much larger
than ``gro_flush_timeout`` as ``irq-suspend-timeout`` should suspend IRQs for
the duration of one userland processing cycle.
While it is not stricly necessary to use ``napi_defer_hard_irqs`` and
While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
``gro_flush_timeout`` to use IRQ suspension, their use is strongly
recommended.


@ -4102,6 +4102,7 @@ S: Supported
W: http://www.bluez.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
F: Documentation/ABI/stable/sysfs-class-bluetooth
F: include/net/bluetooth/
F: net/bluetooth/
@ -16277,6 +16278,7 @@ F: drivers/scsi/sun3_scsi_vme.c
NCSI LIBRARY
M: Samuel Mendoza-Jonas <sam@mendozajonas.com>
R: Paul Fertser <fercerpav@gmail.com>
S: Maintained
F: net/ncsi/
@ -16611,6 +16613,7 @@ F: tools/testing/selftests/net/mptcp/
NETWORKING [TCP]
M: Eric Dumazet <edumazet@google.com>
M: Neal Cardwell <ncardwell@google.com>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/net_cachelines/tcp_sock.rst


@ -1381,13 +1381,12 @@ static void btnxpuart_tx_work(struct work_struct *work)
while ((skb = nxp_dequeue(nxpdev))) {
len = serdev_device_write_buf(serdev, skb->data, skb->len);
serdev_device_wait_until_sent(serdev, 0);
hdev->stat.byte_tx += len;
skb_pull(skb, len);
if (skb->len > 0) {
skb_queue_head(&nxpdev->txq, skb);
break;
continue;
}
switch (hci_skb_pkt_type(skb)) {


@ -899,11 +899,6 @@ static void btusb_reset(struct hci_dev *hdev)
struct btusb_data *data;
int err;
if (hdev->reset) {
hdev->reset(hdev);
return;
}
data = hci_get_drvdata(hdev);
/* This is not an unbalanced PM reference since the device will reset */
err = usb_autopm_get_interface(data->intf);
@ -2639,8 +2634,15 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
struct btmtk_data *btmtk_data = hci_get_priv(data->hdev);
int err;
/*
* The function usb_driver_claim_interface() is documented to need
* locks held if it's not called from a probe routine. The code here
* is called from the hci_power_on workqueue, so grab the lock.
*/
device_lock(&btmtk_data->isopkt_intf->dev);
err = usb_driver_claim_interface(&btusb_driver,
btmtk_data->isopkt_intf, data);
device_unlock(&btmtk_data->isopkt_intf->dev);
if (err < 0) {
btmtk_data->isopkt_intf = NULL;
bt_dev_err(data->hdev, "Failed to claim iso interface");


@ -1538,17 +1538,20 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
NETIF_F_HIGHDMA | NETIF_F_LRO)
#define BOND_ENC_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
NETIF_F_RXCSUM | NETIF_F_GSO_SOFTWARE)
NETIF_F_RXCSUM | NETIF_F_GSO_SOFTWARE | \
NETIF_F_GSO_PARTIAL)
#define BOND_MPLS_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
NETIF_F_GSO_SOFTWARE)
#define BOND_GSO_PARTIAL_FEATURES (NETIF_F_GSO_ESP)
static void bond_compute_features(struct bonding *bond)
{
netdev_features_t gso_partial_features = BOND_GSO_PARTIAL_FEATURES;
unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
IFF_XMIT_DST_RELEASE_PERM;
netdev_features_t gso_partial_features = NETIF_F_GSO_ESP;
netdev_features_t vlan_features = BOND_VLAN_FEATURES;
netdev_features_t enc_features = BOND_ENC_FEATURES;
#ifdef CONFIG_XFRM_OFFLOAD
@ -1582,8 +1585,9 @@ static void bond_compute_features(struct bonding *bond)
BOND_XFRM_FEATURES);
#endif /* CONFIG_XFRM_OFFLOAD */
if (slave->dev->hw_enc_features & NETIF_F_GSO_PARTIAL)
gso_partial_features &= slave->dev->gso_partial_features;
gso_partial_features = netdev_increment_features(gso_partial_features,
slave->dev->gso_partial_features,
BOND_GSO_PARTIAL_FEATURES);
mpls_features = netdev_increment_features(mpls_features,
slave->dev->mpls_features,
@ -1598,12 +1602,8 @@ static void bond_compute_features(struct bonding *bond)
}
bond_dev->hard_header_len = max_hard_header_len;
if (gso_partial_features & NETIF_F_GSO_ESP)
bond_dev->gso_partial_features |= NETIF_F_GSO_ESP;
else
bond_dev->gso_partial_features &= ~NETIF_F_GSO_ESP;
done:
bond_dev->gso_partial_features = gso_partial_features;
bond_dev->vlan_features = vlan_features;
bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
NETIF_F_HW_VLAN_CTAG_TX |
@ -6046,6 +6046,7 @@ void bond_setup(struct net_device *bond_dev)
bond_dev->hw_features |= NETIF_F_GSO_ENCAP_ALL;
bond_dev->features |= bond_dev->hw_features;
bond_dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
bond_dev->features |= NETIF_F_GSO_PARTIAL;
#ifdef CONFIG_XFRM_OFFLOAD
bond_dev->hw_features |= BOND_XFRM_FEATURES;
/* Only enable XFRM features if this is an active-backup config */


@ -328,8 +328,7 @@
#define BGMAC_RX_FRAME_OFFSET 30 /* There are 2 unused bytes between header and real data */
#define BGMAC_RX_BUF_OFFSET (NET_SKB_PAD + NET_IP_ALIGN - \
BGMAC_RX_FRAME_OFFSET)
/* Jumbo frame size with FCS */
#define BGMAC_RX_MAX_FRAME_SIZE 9724
#define BGMAC_RX_MAX_FRAME_SIZE 1536
#define BGMAC_RX_BUF_SIZE (BGMAC_RX_FRAME_OFFSET + BGMAC_RX_MAX_FRAME_SIZE)
#define BGMAC_RX_ALLOC_SIZE (SKB_DATA_ALIGN(BGMAC_RX_BUF_SIZE + BGMAC_RX_BUF_OFFSET) + \
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))


@ -7424,7 +7424,7 @@ static void tg3_napi_enable(struct tg3 *tp)
for (i = 0; i < tp->irq_cnt; i++) {
tnapi = &tp->napi[i];
napi_enable(&tnapi->napi);
napi_enable_locked(&tnapi->napi);
if (tnapi->tx_buffers) {
netif_queue_set_napi(tp->dev, txq_idx,
NETDEV_QUEUE_TYPE_TX,
@ -7445,9 +7445,10 @@ static void tg3_napi_init(struct tg3 *tp)
int i;
for (i = 0; i < tp->irq_cnt; i++) {
netif_napi_add(tp->dev, &tp->napi[i].napi,
i ? tg3_poll_msix : tg3_poll);
netif_napi_set_irq(&tp->napi[i].napi, tp->napi[i].irq_vec);
netif_napi_add_locked(tp->dev, &tp->napi[i].napi,
i ? tg3_poll_msix : tg3_poll);
netif_napi_set_irq_locked(&tp->napi[i].napi,
tp->napi[i].irq_vec);
}
}
@ -11259,6 +11260,8 @@ static void tg3_timer_stop(struct tg3 *tp)
static int tg3_restart_hw(struct tg3 *tp, bool reset_phy)
__releases(tp->lock)
__acquires(tp->lock)
__releases(tp->dev->lock)
__acquires(tp->dev->lock)
{
int err;
@ -11271,7 +11274,9 @@ static int tg3_restart_hw(struct tg3 *tp, bool reset_phy)
tg3_timer_stop(tp);
tp->irq_sync = 0;
tg3_napi_enable(tp);
netdev_unlock(tp->dev);
dev_close(tp->dev);
netdev_lock(tp->dev);
tg3_full_lock(tp, 0);
}
return err;
@ -11299,6 +11304,7 @@ static void tg3_reset_task(struct work_struct *work)
tg3_netif_stop(tp);
netdev_lock(tp->dev);
tg3_full_lock(tp, 1);
if (tg3_flag(tp, TX_RECOVERY_PENDING)) {
@ -11318,12 +11324,14 @@ static void tg3_reset_task(struct work_struct *work)
* call cancel_work_sync() and wait forever.
*/
tg3_flag_clear(tp, RESET_TASK_PENDING);
netdev_unlock(tp->dev);
dev_close(tp->dev);
goto out;
}
tg3_netif_start(tp);
tg3_full_unlock(tp);
netdev_unlock(tp->dev);
tg3_phy_start(tp);
tg3_flag_clear(tp, RESET_TASK_PENDING);
out:
@ -11683,9 +11691,11 @@ static int tg3_start(struct tg3 *tp, bool reset_phy, bool test_irq,
if (err)
goto out_ints_fini;
netdev_lock(dev);
tg3_napi_init(tp);
tg3_napi_enable(tp);
netdev_unlock(dev);
for (i = 0; i < tp->irq_cnt; i++) {
err = tg3_request_irq(tp, i);
@ -12569,6 +12579,7 @@ static int tg3_set_ringparam(struct net_device *dev,
irq_sync = 1;
}
netdev_lock(dev);
tg3_full_lock(tp, irq_sync);
tp->rx_pending = ering->rx_pending;
@ -12597,6 +12608,7 @@ static int tg3_set_ringparam(struct net_device *dev,
}
tg3_full_unlock(tp);
netdev_unlock(dev);
if (irq_sync && !err)
tg3_phy_start(tp);
@ -12678,6 +12690,7 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
irq_sync = 1;
}
netdev_lock(dev);
tg3_full_lock(tp, irq_sync);
if (epause->autoneg)
@ -12707,6 +12720,7 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
}
tg3_full_unlock(tp);
netdev_unlock(dev);
}
tp->phy_flags |= TG3_PHYFLG_USER_CONFIGURED;
@ -13911,6 +13925,7 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
data[TG3_INTERRUPT_TEST] = 1;
}
netdev_lock(dev);
tg3_full_lock(tp, 0);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
@ -13922,6 +13937,7 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
}
tg3_full_unlock(tp);
netdev_unlock(dev);
if (irq_sync && !err2)
tg3_phy_start(tp);
@ -14365,6 +14381,7 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu)
tg3_set_mtu(dev, tp, new_mtu);
netdev_lock(dev);
tg3_full_lock(tp, 1);
tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
@ -14384,6 +14401,7 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu)
tg3_netif_start(tp);
tg3_full_unlock(tp);
netdev_unlock(dev);
if (!err)
tg3_phy_start(tp);
@ -18164,6 +18182,7 @@ static int tg3_resume(struct device *device)
netif_device_attach(dev);
netdev_lock(dev);
tg3_full_lock(tp, 0);
tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
@ -18180,6 +18199,7 @@ static int tg3_resume(struct device *device)
out:
tg3_full_unlock(tp);
netdev_unlock(dev);
if (!err)
tg3_phy_start(tp);
@ -18260,7 +18280,9 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
done:
if (state == pci_channel_io_perm_failure) {
if (netdev) {
netdev_lock(netdev);
tg3_napi_enable(tp);
netdev_unlock(netdev);
dev_close(netdev);
}
err = PCI_ERS_RESULT_DISCONNECT;
@ -18314,7 +18336,9 @@ static pci_ers_result_t tg3_io_slot_reset(struct pci_dev *pdev)
done:
if (rc != PCI_ERS_RESULT_RECOVERED && netdev && netif_running(netdev)) {
netdev_lock(netdev);
tg3_napi_enable(tp);
netdev_unlock(netdev);
dev_close(netdev);
}
rtnl_unlock();
@ -18340,12 +18364,14 @@ static void tg3_io_resume(struct pci_dev *pdev)
if (!netdev || !netif_running(netdev))
goto done;
netdev_lock(netdev);
tg3_full_lock(tp, 0);
tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
tg3_flag_set(tp, INIT_COMPLETE);
err = tg3_restart_hw(tp, true);
if (err) {
tg3_full_unlock(tp);
netdev_unlock(netdev);
netdev_err(netdev, "Cannot restart hardware after reset.\n");
goto done;
}
@ -18357,6 +18383,7 @@ static void tg3_io_resume(struct pci_dev *pdev)
tg3_netif_start(tp);
tg3_full_unlock(tp);
netdev_unlock(netdev);
tg3_phy_start(tp);


@ -1777,10 +1777,11 @@ static void dm9000_drv_remove(struct platform_device *pdev)
unregister_netdev(ndev);
dm9000_release_board(pdev, dm);
free_netdev(ndev); /* free device structure */
if (dm->power_supply)
regulator_disable(dm->power_supply);
free_netdev(ndev); /* free device structure */
dev_dbg(&pdev->dev, "released and freed device\n");
}


@ -840,6 +840,8 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
struct fec_enet_private *fep = netdev_priv(ndev);
int hdr_len, total_len, data_left;
struct bufdesc *bdp = txq->bd.cur;
struct bufdesc *tmp_bdp;
struct bufdesc_ex *ebdp;
struct tso_t tso;
unsigned int index = 0;
int ret;
@ -913,7 +915,34 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
return 0;
err_release:
/* TODO: Release all used data descriptors for TSO */
/* Release all used data descriptors for TSO */
tmp_bdp = txq->bd.cur;
while (tmp_bdp != bdp) {
/* Unmap data buffers */
if (tmp_bdp->cbd_bufaddr &&
!IS_TSO_HEADER(txq, fec32_to_cpu(tmp_bdp->cbd_bufaddr)))
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(tmp_bdp->cbd_bufaddr),
fec16_to_cpu(tmp_bdp->cbd_datlen),
DMA_TO_DEVICE);
/* Clear standard buffer descriptor fields */
tmp_bdp->cbd_sc = 0;
tmp_bdp->cbd_datlen = 0;
tmp_bdp->cbd_bufaddr = 0;
/* Handle extended descriptor if enabled */
if (fep->bufdesc_ex) {
ebdp = (struct bufdesc_ex *)tmp_bdp;
ebdp->cbd_esc = 0;
}
tmp_bdp = fec_enet_get_nextdesc(tmp_bdp, &txq->bd);
}
dev_kfree_skb_any(skb);
return ret;
}


@ -40,6 +40,21 @@ EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
*/
static DEFINE_MUTEX(hnae3_common_lock);
/* ensure the drivers being unloaded one by one */
static DEFINE_MUTEX(hnae3_unload_lock);
void hnae3_acquire_unload_lock(void)
{
mutex_lock(&hnae3_unload_lock);
}
EXPORT_SYMBOL(hnae3_acquire_unload_lock);
void hnae3_release_unload_lock(void)
{
mutex_unlock(&hnae3_unload_lock);
}
EXPORT_SYMBOL(hnae3_release_unload_lock);
static bool hnae3_client_match(enum hnae3_client_type client_type)
{
if (client_type == HNAE3_CLIENT_KNIC ||


@ -963,4 +963,6 @@ int hnae3_register_client(struct hnae3_client *client);
void hnae3_set_client_init_flag(struct hnae3_client *client,
struct hnae3_ae_dev *ae_dev,
unsigned int inited);
void hnae3_acquire_unload_lock(void);
void hnae3_release_unload_lock(void);
#endif


@ -6002,9 +6002,11 @@ module_init(hns3_init_module);
*/
static void __exit hns3_exit_module(void)
{
hnae3_acquire_unload_lock();
pci_unregister_driver(&hns3_driver);
hnae3_unregister_client(&client);
hns3_dbg_unregister_debugfs();
hnae3_release_unload_lock();
}
module_exit(hns3_exit_module);


@ -12919,9 +12919,11 @@ static int __init hclge_init(void)
static void __exit hclge_exit(void)
{
hnae3_acquire_unload_lock();
hnae3_unregister_ae_algo_prepare(&ae_algo);
hnae3_unregister_ae_algo(&ae_algo);
destroy_workqueue(hclge_wq);
hnae3_release_unload_lock();
}
module_init(hclge_init);
module_exit(hclge_exit);


@ -3410,8 +3410,10 @@ static int __init hclgevf_init(void)
static void __exit hclgevf_exit(void)
{
hnae3_acquire_unload_lock();
hnae3_unregister_ae_algo(&ae_algovf);
destroy_workqueue(hclgevf_wq);
hnae3_release_unload_lock();
}
module_init(hclgevf_init);
module_exit(hclgevf_exit);


@ -773,6 +773,11 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
f->state = IAVF_VLAN_ADD;
adapter->num_vlan_filters++;
iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);
} else if (f->state == IAVF_VLAN_REMOVE) {
/* IAVF_VLAN_REMOVE means that VLAN wasn't yet removed.
* We can safely only change the state here.
*/
f->state = IAVF_VLAN_ACTIVE;
}
clearout:
@ -793,8 +798,18 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
f = iavf_find_vlan(adapter, vlan);
if (f) {
f->state = IAVF_VLAN_REMOVE;
iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER);
/* IAVF_ADD_VLAN means that VLAN wasn't even added yet.
* Remove it from the list.
*/
if (f->state == IAVF_VLAN_ADD) {
list_del(&f->list);
kfree(f);
adapter->num_vlan_filters--;
} else {
f->state = IAVF_VLAN_REMOVE;
iavf_schedule_aq_request(adapter,
IAVF_FLAG_AQ_DEL_VLAN_FILTER);
}
}
spin_unlock_bh(&adapter->mac_vlan_list_lock);


@ -1498,7 +1498,6 @@ struct ice_aqc_dnl_equa_param {
#define ICE_AQC_RX_EQU_POST1 (0x12 << ICE_AQC_RX_EQU_SHIFT)
#define ICE_AQC_RX_EQU_BFLF (0x13 << ICE_AQC_RX_EQU_SHIFT)
#define ICE_AQC_RX_EQU_BFHF (0x14 << ICE_AQC_RX_EQU_SHIFT)
#define ICE_AQC_RX_EQU_DRATE (0x15 << ICE_AQC_RX_EQU_SHIFT)
#define ICE_AQC_RX_EQU_CTLE_GAINHF (0x20 << ICE_AQC_RX_EQU_SHIFT)
#define ICE_AQC_RX_EQU_CTLE_GAINLF (0x21 << ICE_AQC_RX_EQU_SHIFT)
#define ICE_AQC_RX_EQU_CTLE_GAINDC (0x22 << ICE_AQC_RX_EQU_SHIFT)


@ -710,7 +710,6 @@ static int ice_get_tx_rx_equa(struct ice_hw *hw, u8 serdes_num,
{ ICE_AQC_RX_EQU_POST1, rx, &ptr->rx_equ_post1 },
{ ICE_AQC_RX_EQU_BFLF, rx, &ptr->rx_equ_bflf },
{ ICE_AQC_RX_EQU_BFHF, rx, &ptr->rx_equ_bfhf },
{ ICE_AQC_RX_EQU_DRATE, rx, &ptr->rx_equ_drate },
{ ICE_AQC_RX_EQU_CTLE_GAINHF, rx, &ptr->rx_equ_ctle_gainhf },
{ ICE_AQC_RX_EQU_CTLE_GAINLF, rx, &ptr->rx_equ_ctle_gainlf },
{ ICE_AQC_RX_EQU_CTLE_GAINDC, rx, &ptr->rx_equ_ctle_gaindc },


@ -15,7 +15,6 @@ struct ice_serdes_equalization_to_ethtool {
int rx_equ_post1;
int rx_equ_bflf;
int rx_equ_bfhf;
int rx_equ_drate;
int rx_equ_ctle_gainhf;
int rx_equ_ctle_gainlf;
int rx_equ_ctle_gaindc;


@ -257,7 +257,6 @@ ice_pg_nm_cam_match(struct ice_pg_nm_cam_item *table, int size,
/*** ICE_SID_RXPARSER_BOOST_TCAM and ICE_SID_LBL_RXPARSER_TMEM sections ***/
#define ICE_BST_TCAM_TABLE_SIZE 256
#define ICE_BST_TCAM_KEY_SIZE 20
#define ICE_BST_KEY_TCAM_SIZE 19
/* Boost TCAM item */
struct ice_bst_tcam_item {
@ -401,7 +400,6 @@ u16 ice_xlt_kb_flag_get(struct ice_xlt_kb *kb, u64 pkt_flag);
#define ICE_PARSER_GPR_NUM 128
#define ICE_PARSER_FLG_NUM 64
#define ICE_PARSER_ERR_NUM 16
#define ICE_BST_KEY_SIZE 10
#define ICE_MARKER_ID_SIZE 9
#define ICE_MARKER_MAX_SIZE \
(ICE_MARKER_ID_SIZE * BITS_PER_BYTE - 1)
@ -431,13 +429,13 @@ struct ice_parser_rt {
u8 pkt_buf[ICE_PARSER_MAX_PKT_LEN + ICE_PARSER_PKT_REV];
u16 pkt_len;
u16 po;
u8 bst_key[ICE_BST_KEY_SIZE];
u8 bst_key[ICE_BST_TCAM_KEY_SIZE];
struct ice_pg_cam_key pg_key;
u8 pg_prio;
struct ice_alu *alu0;
struct ice_alu *alu1;
struct ice_alu *alu2;
struct ice_pg_cam_action *action;
u8 pg_prio;
struct ice_gpr_pu pu;
u8 markers[ICE_MARKER_ID_SIZE];
bool protocols[ICE_PO_PAIR_SIZE];


@ -125,22 +125,20 @@ static void ice_bst_key_init(struct ice_parser_rt *rt,
else
key[idd] = imem->b_kb.prio;
idd = ICE_BST_KEY_TCAM_SIZE - 1;
idd = ICE_BST_TCAM_KEY_SIZE - 2;
for (i = idd; i >= 0; i--) {
int j;
j = ho + idd - i;
if (j < ICE_PARSER_MAX_PKT_LEN)
key[i] = rt->pkt_buf[ho + idd - i];
key[i] = rt->pkt_buf[j];
else
key[i] = 0;
}
ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Generated Boost TCAM Key:\n");
ice_debug(rt->psr->hw, ICE_DBG_PARSER, "%02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
key[0], key[1], key[2], key[3], key[4],
key[5], key[6], key[7], key[8], key[9]);
ice_debug(rt->psr->hw, ICE_DBG_PARSER, "\n");
ice_debug_array_w_prefix(rt->psr->hw, ICE_DBG_PARSER,
KBUILD_MODNAME ": Generated Boost TCAM Key",
key, ICE_BST_TCAM_KEY_SIZE);
}
static u16 ice_bit_rev_u16(u16 v, int len)


@ -376,6 +376,9 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
break;
/* Ensure no other fields are read until DD flag is checked */
dma_rmb();
/* strip off FW internal code */
desc_err = le16_to_cpu(desc->ret_val) & 0xff;
@ -563,6 +566,9 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
if (!(flags & IDPF_CTLQ_FLAG_DD))
break;
/* Ensure no other fields are read until DD flag is checked */
dma_rmb();
q_msg[i].vmvf_type = (flags &
(IDPF_CTLQ_FLAG_FTYPE_VM |
IDPF_CTLQ_FLAG_FTYPE_PF)) >>
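
Both hunks above add the same barrier for the same reason. A hedged restatement of the pattern (field names taken from the hunks, the surrounding loop elided): the device writes the descriptor payload first and sets the DD flag last, so the driver's reads must be ordered the opposite way or stale fields can be observed on weakly ordered systems:

	if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
		break;		/* descriptor not completed yet */
	dma_rmb();		/* order the DD check before the other reads */
	desc_err = le16_to_cpu(desc->ret_val) & 0xff;	/* now safe to read */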


@ -174,7 +174,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_master(pdev);
pci_set_drvdata(pdev, adapter);
adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
adapter->init_wq = alloc_workqueue("%s-%s-init",
WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->init_wq) {
@ -183,7 +184,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_free;
}
adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
adapter->serv_wq = alloc_workqueue("%s-%s-service",
WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->serv_wq) {
@ -192,7 +194,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_serv_wq_alloc;
}
adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
adapter->mbx_wq = alloc_workqueue("%s-%s-mbx",
WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->mbx_wq) {
@ -201,7 +204,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_mbx_wq_alloc;
}
adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
adapter->stats_wq = alloc_workqueue("%s-%s-stats",
WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->stats_wq) {
@ -210,7 +214,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_stats_wq_alloc;
}
adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event",
WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
dev_driver_string(dev),
dev_name(dev));
if (!adapter->vc_event_wq) {


@ -517,8 +517,10 @@ static ssize_t idpf_vc_xn_exec(struct idpf_adapter *adapter,
retval = -ENXIO;
goto only_unlock;
case IDPF_VC_XN_WAITING:
dev_notice_ratelimited(&adapter->pdev->dev, "Transaction timed-out (op %d, %dms)\n",
params->vc_op, params->timeout_ms);
dev_notice_ratelimited(&adapter->pdev->dev,
"Transaction timed-out (op:%d cookie:%04x vc_op:%d salt:%02x timeout:%dms)\n",
params->vc_op, cookie, xn->vc_op,
xn->salt, params->timeout_ms);
retval = -ETIME;
break;
case IDPF_VC_XN_COMPLETED_SUCCESS:
@ -612,14 +614,16 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter,
return -EINVAL;
}
xn = &adapter->vcxn_mngr->ring[xn_idx];
idpf_vc_xn_lock(xn);
salt = FIELD_GET(IDPF_VC_XN_SALT_M, msg_info);
if (xn->salt != salt) {
dev_err_ratelimited(&adapter->pdev->dev, "Transaction salt does not match (%02x != %02x)\n",
xn->salt, salt);
dev_err_ratelimited(&adapter->pdev->dev, "Transaction salt does not match (exp:%d@%02x(%d) != got:%d@%02x)\n",
xn->vc_op, xn->salt, xn->state,
ctlq_msg->cookie.mbx.chnl_opcode, salt);
idpf_vc_xn_unlock(xn);
return -EINVAL;
}
idpf_vc_xn_lock(xn);
switch (xn->state) {
case IDPF_VC_XN_WAITING:
/* success */
@ -3077,12 +3081,21 @@ init_failed:
*/
void idpf_vc_core_deinit(struct idpf_adapter *adapter)
{
bool remove_in_prog;
if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))
return;
/* Avoid transaction timeouts when called during reset */
remove_in_prog = test_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
if (!remove_in_prog)
idpf_vc_xn_shutdown(adapter->vcxn_mngr);
idpf_deinit_task(adapter);
idpf_intr_rel(adapter);
idpf_vc_xn_shutdown(adapter->vcxn_mngr);
if (remove_in_prog)
idpf_vc_xn_shutdown(adapter->vcxn_mngr);
cancel_delayed_work_sync(&adapter->serv_task);
cancel_delayed_work_sync(&adapter->mbx_task);


@ -4432,6 +4432,7 @@ static int mvneta_cpu_online(unsigned int cpu, struct hlist_node *node)
*/
if (pp->is_stopped) {
spin_unlock(&pp->lock);
netdev_unlock(port->napi.dev);
return 0;
}
netif_tx_stop_all_queues(pp->dev);


@ -266,11 +266,11 @@
#define REG_GDM3_FWD_CFG GDM3_BASE
#define GDM3_PAD_EN_MASK BIT(28)
#define REG_GDM4_FWD_CFG (GDM4_BASE + 0x100)
#define REG_GDM4_FWD_CFG GDM4_BASE
#define GDM4_PAD_EN_MASK BIT(28)
#define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8)
#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x33c)
#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c)
#define GDM4_SPORT_OFF2_MASK GENMASK(19, 16)
#define GDM4_SPORT_OFF1_MASK GENMASK(15, 12)
#define GDM4_SPORT_OFF0_MASK GENMASK(11, 8)


@ -2087,7 +2087,7 @@ static struct mlx5e_xdpsq *mlx5e_open_xdpredirect_sq(struct mlx5e_channel *c,
struct mlx5e_xdpsq *xdpsq;
int err;
xdpsq = kvzalloc_node(sizeof(*xdpsq), GFP_KERNEL, c->cpu);
xdpsq = kvzalloc_node(sizeof(*xdpsq), GFP_KERNEL, cpu_to_node(c->cpu));
if (!xdpsq)
return ERR_PTR(-ENOMEM);


@ -1120,20 +1120,6 @@ static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
}
}
static void nv_napi_enable(struct net_device *dev)
{
struct fe_priv *np = get_nvpriv(dev);
napi_enable(&np->napi);
}
static void nv_napi_disable(struct net_device *dev)
{
struct fe_priv *np = get_nvpriv(dev);
napi_disable(&np->napi);
}
#define MII_READ (-1)
/* mii_rw: read/write a register on the PHY.
*
@ -3114,7 +3100,7 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu)
* Changing the MTU is a rare event, it shouldn't matter.
*/
nv_disable_irq(dev);
nv_napi_disable(dev);
napi_disable(&np->napi);
netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
@ -3143,7 +3129,7 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu)
spin_unlock(&np->lock);
netif_addr_unlock(dev);
netif_tx_unlock_bh(dev);
nv_napi_enable(dev);
napi_enable(&np->napi);
nv_enable_irq(dev);
}
return 0;
@ -4731,7 +4717,7 @@ static int nv_set_ringparam(struct net_device *dev,
if (netif_running(dev)) {
nv_disable_irq(dev);
nv_napi_disable(dev);
napi_disable(&np->napi);
netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock(&np->lock);
@ -4784,7 +4770,7 @@ static int nv_set_ringparam(struct net_device *dev,
spin_unlock(&np->lock);
netif_addr_unlock(dev);
netif_tx_unlock_bh(dev);
nv_napi_enable(dev);
napi_enable(&np->napi);
nv_enable_irq(dev);
}
return 0;
@ -5277,7 +5263,7 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64
if (test->flags & ETH_TEST_FL_OFFLINE) {
if (netif_running(dev)) {
netif_stop_queue(dev);
nv_napi_disable(dev);
napi_disable(&np->napi);
netif_tx_lock_bh(dev);
netif_addr_lock(dev);
spin_lock_irq(&np->lock);
@ -5334,7 +5320,7 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64
/* restart rx engine */
nv_start_rxtx(dev);
netif_start_queue(dev);
nv_napi_enable(dev);
napi_enable(&np->napi);
nv_enable_hw_interrupts(dev, np->irqmask);
}
}
@ -5576,6 +5562,7 @@ static int nv_open(struct net_device *dev)
/* ask for interrupts */
nv_enable_hw_interrupts(dev, np->irqmask);
netdev_lock(dev);
spin_lock_irq(&np->lock);
writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
writel(0, base + NvRegMulticastAddrB);
@ -5594,7 +5581,7 @@ static int nv_open(struct net_device *dev)
ret = nv_update_linkspeed(dev);
nv_start_rxtx(dev);
netif_start_queue(dev);
nv_napi_enable(dev);
napi_enable_locked(&np->napi);
if (ret) {
netif_carrier_on(dev);
@ -5611,6 +5598,7 @@ static int nv_open(struct net_device *dev)
round_jiffies(jiffies + STATS_INTERVAL));
spin_unlock_irq(&np->lock);
netdev_unlock(dev);
/* If the loopback feature was set while the device was down, make sure
* that it's set correctly now.
@ -5632,7 +5620,7 @@ static int nv_close(struct net_device *dev)
spin_lock_irq(&np->lock);
np->in_shutdown = 1;
spin_unlock_irq(&np->lock);
nv_napi_disable(dev);
napi_disable(&np->napi);
synchronize_irq(np->pci_dev->irq);
del_timer_sync(&np->oom_kick);


@ -1684,6 +1684,7 @@ static void rtl8139_tx_timeout_task (struct work_struct *work)
if (tmp8 & CmdTxEnb)
RTL_W8 (ChipCmd, CmdRxEnb);
netdev_lock(dev);
spin_lock_bh(&tp->rx_lock);
/* Disable interrupts by clearing the interrupt mask. */
RTL_W16 (IntrMask, 0x0000);
@ -1694,11 +1695,12 @@ static void rtl8139_tx_timeout_task (struct work_struct *work)
spin_unlock_irq(&tp->lock);
/* ...and finally, reset everything */
napi_enable(&tp->napi);
napi_enable_locked(&tp->napi);
rtl8139_hw_start(dev);
netif_wake_queue(dev);
spin_unlock_bh(&tp->rx_lock);
netdev_unlock(dev);
}
static void rtl8139_tx_timeout(struct net_device *dev, unsigned int txqueue)


@ -3217,10 +3217,15 @@ static int ravb_suspend(struct device *dev)
netif_device_detach(ndev);
if (priv->wol_enabled)
return ravb_wol_setup(ndev);
rtnl_lock();
if (priv->wol_enabled) {
ret = ravb_wol_setup(ndev);
rtnl_unlock();
return ret;
}
ret = ravb_close(ndev);
rtnl_unlock();
if (ret)
return ret;
@ -3245,19 +3250,20 @@ static int ravb_resume(struct device *dev)
if (!netif_running(ndev))
return 0;
rtnl_lock();
/* If WoL is enabled restore the interface. */
if (priv->wol_enabled) {
if (priv->wol_enabled)
ret = ravb_wol_restore(ndev);
if (ret)
return ret;
} else {
else
ret = pm_runtime_force_resume(dev);
if (ret)
return ret;
if (ret) {
rtnl_unlock();
return ret;
}
/* Reopening the interface will restore the device to the working state. */
ret = ravb_open(ndev);
rtnl_unlock();
if (ret < 0)
goto out_rpm_put;


@ -3494,10 +3494,12 @@ static int sh_eth_suspend(struct device *dev)
netif_device_detach(ndev);
rtnl_lock();
if (mdp->wol_enabled)
ret = sh_eth_wol_setup(ndev);
else
ret = sh_eth_close(ndev);
rtnl_unlock();
return ret;
}
@ -3511,10 +3513,12 @@ static int sh_eth_resume(struct device *dev)
if (!netif_running(ndev))
return 0;
rtnl_lock();
if (mdp->wol_enabled)
ret = sh_eth_wol_restore(ndev);
else
ret = sh_eth_open(ndev);
rtnl_unlock();
if (ret < 0)
return ret;


@ -2424,11 +2424,6 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv)
u32 chan = 0;
u8 qmode = 0;
if (rxfifosz == 0)
rxfifosz = priv->dma_cap.rx_fifo_size;
if (txfifosz == 0)
txfifosz = priv->dma_cap.tx_fifo_size;
/* Split up the shared Tx/Rx FIFO memory on DW QoS Eth and DW XGMAC */
if (priv->plat->has_gmac4 || priv->plat->has_xgmac) {
rxfifosz /= rx_channels_count;
@ -2897,11 +2892,6 @@ static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,
int rxfifosz = priv->plat->rx_fifo_size;
int txfifosz = priv->plat->tx_fifo_size;
if (rxfifosz == 0)
rxfifosz = priv->dma_cap.rx_fifo_size;
if (txfifosz == 0)
txfifosz = priv->dma_cap.tx_fifo_size;
/* Adjust for real per queue fifo size */
rxfifosz /= rx_channels_count;
txfifosz /= tx_channels_count;
@ -5878,9 +5868,6 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
const int mtu = new_mtu;
int ret;
if (txfifosz == 0)
txfifosz = priv->dma_cap.tx_fifo_size;
txfifosz /= priv->plat->tx_queues_to_use;
if (stmmac_xdp_is_enabled(priv) && new_mtu > ETH_DATA_LEN) {
@ -7217,6 +7204,50 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
if (priv->dma_cap.tsoen)
dev_info(priv->device, "TSO supported\n");
if (priv->dma_cap.number_rx_queues &&
priv->plat->rx_queues_to_use > priv->dma_cap.number_rx_queues) {
dev_warn(priv->device,
"Number of Rx queues (%u) exceeds dma capability\n",
priv->plat->rx_queues_to_use);
priv->plat->rx_queues_to_use = priv->dma_cap.number_rx_queues;
}
if (priv->dma_cap.number_tx_queues &&
priv->plat->tx_queues_to_use > priv->dma_cap.number_tx_queues) {
dev_warn(priv->device,
"Number of Tx queues (%u) exceeds dma capability\n",
priv->plat->tx_queues_to_use);
priv->plat->tx_queues_to_use = priv->dma_cap.number_tx_queues;
}
if (!priv->plat->rx_fifo_size) {
if (priv->dma_cap.rx_fifo_size) {
priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
} else {
dev_err(priv->device, "Can't specify Rx FIFO size\n");
return -ENODEV;
}
} else if (priv->dma_cap.rx_fifo_size &&
priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
dev_warn(priv->device,
"Rx FIFO size (%u) exceeds dma capability\n",
priv->plat->rx_fifo_size);
priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
}
if (!priv->plat->tx_fifo_size) {
if (priv->dma_cap.tx_fifo_size) {
priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size;
} else {
dev_err(priv->device, "Can't specify Tx FIFO size\n");
return -ENODEV;
}
} else if (priv->dma_cap.tx_fifo_size &&
priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
dev_warn(priv->device,
"Tx FIFO size (%u) exceeds dma capability\n",
priv->plat->tx_fifo_size);
priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size;
}
priv->hw->vlan_fail_q_en =
(priv->plat->flags & STMMAC_FLAG_VLAN_FAIL_Q_EN);
priv->hw->vlan_fail_q = priv->plat->vlan_fail_q;


@ -6086,7 +6086,7 @@ static void niu_enable_napi(struct niu *np)
int i;
for (i = 0; i < np->num_ldg; i++)
napi_enable(&np->ldg[i].napi);
napi_enable_locked(&np->ldg[i].napi);
}
static void niu_disable_napi(struct niu *np)
@ -6116,7 +6116,9 @@ static int niu_open(struct net_device *dev)
if (err)
goto out_free_channels;
netdev_lock(dev);
niu_enable_napi(np);
netdev_unlock(dev);
spin_lock_irq(&np->lock);
@ -6521,6 +6523,7 @@ static void niu_reset_task(struct work_struct *work)
niu_reset_buffers(np);
netdev_lock(np->dev);
spin_lock_irqsave(&np->lock, flags);
err = niu_init_hw(np);
@ -6531,6 +6534,7 @@ static void niu_reset_task(struct work_struct *work)
}
spin_unlock_irqrestore(&np->lock, flags);
netdev_unlock(np->dev);
}
static void niu_tx_timeout(struct net_device *dev, unsigned int txqueue)
@ -6761,7 +6765,9 @@ static int niu_change_mtu(struct net_device *dev, int new_mtu)
niu_free_channels(np);
netdev_lock(dev);
niu_enable_napi(np);
netdev_unlock(dev);
err = niu_alloc_channels(np);
if (err)
@ -9937,6 +9943,7 @@ static int __maybe_unused niu_resume(struct device *dev_d)
spin_lock_irqsave(&np->lock, flags);
netdev_lock(dev);
err = niu_init_hw(np);
if (!err) {
np->timer.expires = jiffies + HZ;
@ -9945,6 +9952,7 @@ static int __maybe_unused niu_resume(struct device *dev_d)
}
spin_unlock_irqrestore(&np->lock, flags);
netdev_unlock(dev);
return err;
}


@ -1568,7 +1568,7 @@ static void init_registers(struct net_device *dev)
if (rp->quirks & rqMgmt)
rhine_init_cam_filter(dev);
napi_enable(&rp->napi);
napi_enable_locked(&rp->napi);
iowrite16(RHINE_EVENT & 0xffff, ioaddr + IntrEnable);
@ -1696,7 +1696,10 @@ static int rhine_open(struct net_device *dev)
rhine_power_init(dev);
rhine_chip_reset(dev);
rhine_task_enable(rp);
netdev_lock(dev);
init_registers(dev);
netdev_unlock(dev);
netif_dbg(rp, ifup, dev, "%s() Done - status %04x MII status: %04x\n",
__func__, ioread16(ioaddr + ChipCmd),
@ -1727,6 +1730,8 @@ static void rhine_reset_task(struct work_struct *work)
napi_disable(&rp->napi);
netif_tx_disable(dev);
netdev_lock(dev);
spin_lock_bh(&rp->lock);
/* clear all descriptors */
@ -1740,6 +1745,7 @@ static void rhine_reset_task(struct work_struct *work)
init_registers(dev);
spin_unlock_bh(&rp->lock);
netdev_unlock(dev);
netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
@ -2541,9 +2547,12 @@ static int rhine_resume(struct device *device)
alloc_tbufs(dev);
rhine_reset_rbufs(rp);
rhine_task_enable(rp);
netdev_lock(dev);
spin_lock_bh(&rp->lock);
init_registers(dev);
spin_unlock_bh(&rp->lock);
netdev_unlock(dev);
netif_device_attach(dev);


@ -74,7 +74,7 @@ static void nsim_get_ringparam(struct net_device *dev,
memcpy(ring, &ns->ethtool.ring, sizeof(ns->ethtool.ring));
kernel_ring->hds_thresh_max = NSIM_HDS_THRESHOLD_MAX;
if (kernel_ring->tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_UNKNOWN)
if (dev->cfg->hds_config == ETHTOOL_TCP_DATA_SPLIT_UNKNOWN)
kernel_ring->tcp_data_split = ETHTOOL_TCP_DATA_SPLIT_ENABLED;
}


@ -134,6 +134,7 @@ struct netdevsim {
u32 sleep;
u32 __ports[2][NSIM_UDP_TUNNEL_N_PORTS];
u32 (*ports)[NSIM_UDP_TUNNEL_N_PORTS];
struct dentry *ddir;
struct debugfs_u32_array dfs_ports[2];
} udp_ports;


@ -112,9 +112,11 @@ nsim_udp_tunnels_info_reset_write(struct file *file, const char __user *data,
struct net_device *dev = file->private_data;
struct netdevsim *ns = netdev_priv(dev);
memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports));
rtnl_lock();
udp_tunnel_nic_reset_ntf(dev);
if (dev->reg_state == NETREG_REGISTERED) {
memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports));
udp_tunnel_nic_reset_ntf(dev);
}
rtnl_unlock();
return count;
@ -144,23 +146,23 @@ int nsim_udp_tunnels_info_create(struct nsim_dev *nsim_dev,
else
ns->udp_ports.ports = nsim_dev->udp_ports.__ports;
debugfs_create_u32("udp_ports_inject_error", 0600,
ns->nsim_dev_port->ddir,
ns->udp_ports.ddir = debugfs_create_dir("udp_ports",
ns->nsim_dev_port->ddir);
debugfs_create_u32("inject_error", 0600, ns->udp_ports.ddir,
&ns->udp_ports.inject_error);
ns->udp_ports.dfs_ports[0].array = ns->udp_ports.ports[0];
ns->udp_ports.dfs_ports[0].n_elements = NSIM_UDP_TUNNEL_N_PORTS;
debugfs_create_u32_array("udp_ports_table0", 0400,
ns->nsim_dev_port->ddir,
debugfs_create_u32_array("table0", 0400, ns->udp_ports.ddir,
&ns->udp_ports.dfs_ports[0]);
ns->udp_ports.dfs_ports[1].array = ns->udp_ports.ports[1];
ns->udp_ports.dfs_ports[1].n_elements = NSIM_UDP_TUNNEL_N_PORTS;
debugfs_create_u32_array("udp_ports_table1", 0400,
ns->nsim_dev_port->ddir,
debugfs_create_u32_array("table1", 0400, ns->udp_ports.ddir,
&ns->udp_ports.dfs_ports[1]);
debugfs_create_file("udp_ports_reset", 0200, ns->nsim_dev_port->ddir,
debugfs_create_file("reset", 0200, ns->udp_ports.ddir,
dev, &nsim_udp_tunnels_info_reset_fops);
/* Note: it's not normal to allocate the info struct like this!
@ -196,6 +198,9 @@ int nsim_udp_tunnels_info_create(struct nsim_dev *nsim_dev,
void nsim_udp_tunnels_info_destroy(struct net_device *dev)
{
struct netdevsim *ns = netdev_priv(dev);
debugfs_remove_recursive(ns->udp_ports.ddir);
kfree(dev->udp_tunnel_nic_info);
dev->udp_tunnel_nic_info = NULL;
}


@ -95,6 +95,10 @@
#define MDIO_MMD_PCS_MV_TDR_OFF_CUTOFF 65246
struct mv88q2xxx_priv {
bool enable_temp;
};
struct mmd_val {
int devad;
u32 regnum;
@ -710,17 +714,12 @@ static const struct hwmon_chip_info mv88q2xxx_hwmon_chip_info = {
static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
{
struct mv88q2xxx_priv *priv = phydev->priv;
struct device *dev = &phydev->mdio.dev;
struct device *hwmon;
char *hwmon_name;
int ret;
/* Enable temperature sense */
ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, MDIO_MMD_PCS_MV_TEMP_SENSOR2,
MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
if (ret < 0)
return ret;
priv->enable_temp = true;
hwmon_name = devm_hwmon_sanitize_name(dev, dev_name(dev));
if (IS_ERR(hwmon_name))
return PTR_ERR(hwmon_name);
@ -743,6 +742,14 @@ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
static int mv88q2xxx_probe(struct phy_device *phydev)
{
struct mv88q2xxx_priv *priv;
priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
phydev->priv = priv;
return mv88q2xxx_hwmon_probe(phydev);
}
@ -810,6 +817,18 @@ static int mv88q222x_revb1_revb2_config_init(struct phy_device *phydev)
static int mv88q222x_config_init(struct phy_device *phydev)
{
struct mv88q2xxx_priv *priv = phydev->priv;
int ret;
/* Enable temperature sense */
if (priv->enable_temp) {
ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
MDIO_MMD_PCS_MV_TEMP_SENSOR2,
MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
if (ret < 0)
return ret;
}
if (phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] == PHY_ID_88Q2220_REVB0)
return mv88q222x_revb0_config_init(phydev);
else


@ -1297,6 +1297,8 @@ static int nxp_c45_soft_reset(struct phy_device *phydev)
if (ret)
return ret;
usleep_range(2000, 2050);
return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
VEND1_DEVICE_CONTROL, ret,
!(ret & DEVICE_CONTROL_RESET), 20000,


@ -61,7 +61,18 @@
#define IPHETH_USBINTF_PROTO 1
#define IPHETH_IP_ALIGN 2 /* padding at front of URB */
#define IPHETH_NCM_HEADER_SIZE (12 + 96) /* NCMH + NCM0 */
/* On iOS devices, NCM headers in RX have a fixed size regardless of DPE count:
* - NTH16 (NCMH): 12 bytes, as per CDC NCM 1.0 spec
* - NDP16 (NCM0): 96 bytes, of which
* - NDP16 fixed header: 8 bytes
* - maximum of 22 DPEs (21 datagrams + trailer), 4 bytes each
*/
#define IPHETH_NDP16_MAX_DPE 22
#define IPHETH_NDP16_HEADER_SIZE (sizeof(struct usb_cdc_ncm_ndp16) + \
IPHETH_NDP16_MAX_DPE * \
sizeof(struct usb_cdc_ncm_dpe16))
#define IPHETH_NCM_HEADER_SIZE (sizeof(struct usb_cdc_ncm_nth16) + \
IPHETH_NDP16_HEADER_SIZE)
#define IPHETH_TX_BUF_SIZE ETH_FRAME_LEN
#define IPHETH_RX_BUF_SIZE_LEGACY (IPHETH_IP_ALIGN + ETH_FRAME_LEN)
#define IPHETH_RX_BUF_SIZE_NCM 65536
@ -207,15 +218,23 @@ static int ipheth_rcvbulk_callback_legacy(struct urb *urb)
return ipheth_consume_skb(buf, len, dev);
}
/* In "NCM mode", the iOS device encapsulates RX (phone->computer) traffic
* in NCM Transfer Blocks (similarly to CDC NCM). However, unlike reverse
* tethering (handled by the `cdc_ncm` driver), regular tethering is not
* compliant with the CDC NCM spec, as the device is missing the necessary
* descriptors, and TX (computer->phone) traffic is not encapsulated
* at all. Thus `ipheth` implements a very limited subset of the spec with
* the sole purpose of parsing RX URBs.
*/
static int ipheth_rcvbulk_callback_ncm(struct urb *urb)
{
struct usb_cdc_ncm_nth16 *ncmh;
struct usb_cdc_ncm_ndp16 *ncm0;
struct usb_cdc_ncm_dpe16 *dpe;
struct ipheth_device *dev;
u16 dg_idx, dg_len;
int retval = -EINVAL;
char *buf;
int len;
dev = urb->context;
@ -226,40 +245,42 @@ static int ipheth_rcvbulk_callback_ncm(struct urb *urb)
ncmh = urb->transfer_buffer;
if (ncmh->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH16_SIGN) ||
le16_to_cpu(ncmh->wNdpIndex) >= urb->actual_length) {
dev->net->stats.rx_errors++;
return retval;
}
/* On iOS, NDP16 directly follows NTH16 */
ncmh->wNdpIndex != cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16)))
goto rx_error;
ncm0 = urb->transfer_buffer + le16_to_cpu(ncmh->wNdpIndex);
if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN) ||
le16_to_cpu(ncmh->wHeaderLength) + le16_to_cpu(ncm0->wLength) >=
urb->actual_length) {
dev->net->stats.rx_errors++;
return retval;
}
ncm0 = urb->transfer_buffer + sizeof(struct usb_cdc_ncm_nth16);
if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN))
goto rx_error;
dpe = ncm0->dpe16;
while (le16_to_cpu(dpe->wDatagramIndex) != 0 &&
le16_to_cpu(dpe->wDatagramLength) != 0) {
if (le16_to_cpu(dpe->wDatagramIndex) >= urb->actual_length ||
le16_to_cpu(dpe->wDatagramIndex) +
le16_to_cpu(dpe->wDatagramLength) > urb->actual_length) {
for (int dpe_i = 0; dpe_i < IPHETH_NDP16_MAX_DPE; ++dpe_i, ++dpe) {
dg_idx = le16_to_cpu(dpe->wDatagramIndex);
dg_len = le16_to_cpu(dpe->wDatagramLength);
/* Null DPE must be present after last datagram pointer entry
* (3.3.1 USB CDC NCM spec v1.0)
*/
if (dg_idx == 0 && dg_len == 0)
return 0;
if (dg_idx < IPHETH_NCM_HEADER_SIZE ||
dg_idx >= urb->actual_length ||
dg_len > urb->actual_length - dg_idx) {
dev->net->stats.rx_length_errors++;
return retval;
}
buf = urb->transfer_buffer + le16_to_cpu(dpe->wDatagramIndex);
len = le16_to_cpu(dpe->wDatagramLength);
buf = urb->transfer_buffer + dg_idx;
retval = ipheth_consume_skb(buf, len, dev);
retval = ipheth_consume_skb(buf, dg_len, dev);
if (retval != 0)
return retval;
dpe++;
}
return 0;
rx_error:
dev->net->stats.rx_errors++;
return retval;
}
static void ipheth_rcvbulk_callback(struct urb *urb)
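
As a hedged cross-check of the arithmetic in the comment above, using the kernel's CDC NCM structure sizes (12-byte NTH16, 8-byte NDP16 fixed header, 4-byte DPE16), the new symbolic constant works out to the original 12 + 96 = 108 bytes:

#include <linux/build_bug.h>
#include <linux/usb/cdc.h>

/* 12 (NTH16) + 8 (NDP16 header) + 22 * 4 (DPE16 entries) = 108 bytes */
static_assert(IPHETH_NCM_HEADER_SIZE ==
	      sizeof(struct usb_cdc_ncm_nth16) +
	      sizeof(struct usb_cdc_ncm_ndp16) +
	      IPHETH_NDP16_MAX_DPE * sizeof(struct usb_cdc_ncm_dpe16));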


@ -71,6 +71,14 @@
#define MSR_SPEED (1<<3)
#define MSR_LINK (1<<2)
/* USB endpoints */
enum rtl8150_usb_ep {
RTL8150_USB_EP_CONTROL = 0,
RTL8150_USB_EP_BULK_IN = 1,
RTL8150_USB_EP_BULK_OUT = 2,
RTL8150_USB_EP_INT_IN = 3,
};
/* Interrupt pipe data */
#define INT_TSR 0x00
#define INT_RSR 0x01
@ -867,6 +875,13 @@ static int rtl8150_probe(struct usb_interface *intf,
struct usb_device *udev = interface_to_usbdev(intf);
rtl8150_t *dev;
struct net_device *netdev;
static const u8 bulk_ep_addr[] = {
RTL8150_USB_EP_BULK_IN | USB_DIR_IN,
RTL8150_USB_EP_BULK_OUT | USB_DIR_OUT,
0};
static const u8 int_ep_addr[] = {
RTL8150_USB_EP_INT_IN | USB_DIR_IN,
0};
netdev = alloc_etherdev(sizeof(rtl8150_t));
if (!netdev)
@ -880,6 +895,13 @@ static int rtl8150_probe(struct usb_interface *intf,
return -ENOMEM;
}
/* Verify that all required endpoints are present */
if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
!usb_check_int_endpoints(intf, int_ep_addr)) {
dev_err(&intf->dev, "couldn't find required endpoints\n");
goto out;
}
tasklet_setup(&dev->tl, rx_fixup);
spin_lock_init(&dev->rx_pool_lock);


@ -411,6 +411,11 @@ static int vxlan_vnifilter_dump(struct sk_buff *skb, struct netlink_callback *cb
struct tunnel_msg *tmsg;
struct net_device *dev;
if (cb->nlh->nlmsg_len < nlmsg_msg_size(sizeof(struct tunnel_msg))) {
NL_SET_ERR_MSG(cb->extack, "Invalid msg length");
return -EINVAL;
}
tmsg = nlmsg_data(cb->nlh);
if (tmsg->flags & ~TUNNEL_MSG_VALID_USER_FLAGS) {


@ -1479,14 +1479,13 @@ static void mt7603_mac_watchdog_reset(struct mt7603_dev *dev)
tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
mt7603_beacon_set_timer(dev, -1, beacon_int);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
napi_schedule(&dev->mt76.tx_napi);
napi_enable(&dev->mt76.napi[0]);
napi_schedule(&dev->mt76.napi[0]);
napi_enable(&dev->mt76.napi[1]);
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
napi_schedule(&dev->mt76.napi[0]);
napi_schedule(&dev->mt76.napi[1]);
local_bh_enable();


@ -164,12 +164,16 @@ static int mt7615_pci_resume(struct pci_dev *pdev)
dev_err(mdev->dev, "PDMA engine must be reinitialized\n");
mt76_worker_enable(&mdev->tx_worker);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_enable(&mdev->napi[i]);
napi_schedule(&mdev->napi[i]);
}
napi_enable(&mdev->tx_napi);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_schedule(&mdev->napi[i]);
}
napi_schedule(&mdev->tx_napi);
local_bh_enable();


@ -262,12 +262,14 @@ void mt7615_mac_reset_work(struct work_struct *work)
mt76_worker_enable(&dev->mt76.tx_worker);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
napi_schedule(&dev->mt76.tx_napi);
mt76_for_each_q_rx(&dev->mt76, i) {
napi_enable(&dev->mt76.napi[i]);
}
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
mt76_for_each_q_rx(&dev->mt76, i) {
napi_schedule(&dev->mt76.napi[i]);
}
local_bh_enable();


@ -282,14 +282,16 @@ static int mt76x0e_resume(struct pci_dev *pdev)
mt76_worker_enable(&mdev->tx_worker);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
mt76_queue_rx_reset(dev, i);
napi_enable(&mdev->napi[i]);
}
napi_enable(&mdev->tx_napi);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_schedule(&mdev->napi[i]);
}
napi_enable(&mdev->tx_napi);
napi_schedule(&mdev->tx_napi);
local_bh_enable();


@ -504,12 +504,14 @@ static void mt76x02_watchdog_reset(struct mt76x02_dev *dev)
mt76_worker_enable(&dev->mt76.tx_worker);
tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
napi_schedule(&dev->mt76.tx_napi);
mt76_for_each_q_rx(&dev->mt76, i) {
napi_enable(&dev->mt76.napi[i]);
}
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
mt76_for_each_q_rx(&dev->mt76, i) {
napi_schedule(&dev->mt76.napi[i]);
}
local_bh_enable();


@ -151,12 +151,15 @@ mt76x2e_resume(struct pci_dev *pdev)
mt76_worker_enable(&mdev->tx_worker);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_enable(&mdev->napi[i]);
napi_schedule(&mdev->napi[i]);
}
napi_enable(&mdev->tx_napi);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_schedule(&mdev->napi[i]);
}
napi_schedule(&mdev->tx_napi);
local_bh_enable();


@ -1356,10 +1356,15 @@ mt7915_mac_restart(struct mt7915_dev *dev)
mt7915_dma_reset(dev, true);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
if (mdev->q_rx[i].ndesc) {
napi_enable(&dev->mt76.napi[i]);
}
}
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
if (mdev->q_rx[i].ndesc) {
napi_schedule(&dev->mt76.napi[i]);
}
}
@ -1419,8 +1424,9 @@ out:
if (phy2)
clear_bit(MT76_RESET, &phy2->mt76->state);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
local_bh_enable();
@ -1570,9 +1576,12 @@ void mt7915_mac_reset_work(struct work_struct *work)
if (phy2)
clear_bit(MT76_RESET, &phy2->mt76->state);
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
napi_enable(&dev->mt76.napi[i]);
}
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
napi_schedule(&dev->mt76.napi[i]);
}
local_bh_enable();
@ -1581,8 +1590,8 @@ void mt7915_mac_reset_work(struct work_struct *work)
mt76_worker_enable(&dev->mt76.tx_worker);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
local_bh_enable();


@ -523,12 +523,15 @@ static int mt7921_pci_resume(struct device *device)
mt76_worker_enable(&mdev->tx_worker);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_enable(&mdev->napi[i]);
napi_schedule(&mdev->napi[i]);
}
napi_enable(&mdev->tx_napi);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_schedule(&mdev->napi[i]);
}
napi_schedule(&mdev->tx_napi);
local_bh_enable();


@ -81,9 +81,12 @@ int mt7921e_mac_reset(struct mt792x_dev *dev)
mt792x_wpdma_reset(dev, true);
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
napi_enable(&dev->mt76.napi[i]);
}
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
napi_schedule(&dev->mt76.napi[i]);
}
local_bh_enable();
@ -115,8 +118,8 @@ int mt7921e_mac_reset(struct mt792x_dev *dev)
err = __mt7921_start(&dev->phy);
out:
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
local_bh_enable();


@ -556,12 +556,15 @@ static int mt7925_pci_resume(struct device *device)
mt76_worker_enable(&mdev->tx_worker);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_enable(&mdev->napi[i]);
napi_schedule(&mdev->napi[i]);
}
napi_enable(&mdev->tx_napi);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
napi_schedule(&mdev->napi[i]);
}
napi_schedule(&mdev->tx_napi);
local_bh_enable();


@ -101,12 +101,15 @@ int mt7925e_mac_reset(struct mt792x_dev *dev)
mt792x_wpdma_reset(dev, true);
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
napi_enable(&dev->mt76.napi[i]);
napi_schedule(&dev->mt76.napi[i]);
}
napi_enable(&dev->mt76.tx_napi);
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
napi_schedule(&dev->mt76.napi[i]);
}
napi_schedule(&dev->mt76.tx_napi);
local_bh_enable();


@ -1695,7 +1695,6 @@ mt7996_mac_restart(struct mt7996_dev *dev)
mt7996_dma_reset(dev, true);
local_bh_disable();
mt76_for_each_q_rx(mdev, i) {
if (mtk_wed_device_active(&dev->mt76.mmio.wed) &&
mt76_queue_is_wed_rro(&mdev->q_rx[i]))
@ -1703,10 +1702,11 @@ mt7996_mac_restart(struct mt7996_dev *dev)
if (mdev->q_rx[i].ndesc) {
napi_enable(&dev->mt76.napi[i]);
local_bh_disable();
napi_schedule(&dev->mt76.napi[i]);
local_bh_enable();
}
}
local_bh_enable();
clear_bit(MT76_MCU_RESET, &dev->mphy.state);
clear_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state);
@ -1764,8 +1764,8 @@ out:
if (phy3)
clear_bit(MT76_RESET, &phy3->mt76->state);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
local_bh_enable();
@ -1958,23 +1958,23 @@ void mt7996_mac_reset_work(struct work_struct *work)
if (phy3)
clear_bit(MT76_RESET, &phy3->mt76->state);
local_bh_disable();
mt76_for_each_q_rx(&dev->mt76, i) {
if (mtk_wed_device_active(&dev->mt76.mmio.wed) &&
mt76_queue_is_wed_rro(&dev->mt76.q_rx[i]))
continue;
napi_enable(&dev->mt76.napi[i]);
local_bh_disable();
napi_schedule(&dev->mt76.napi[i]);
local_bh_enable();
}
local_bh_enable();
tasklet_schedule(&dev->mt76.irq_tasklet);
mt76_worker_enable(&dev->mt76.tx_worker);
local_bh_disable();
napi_enable(&dev->mt76.tx_napi);
local_bh_disable();
napi_schedule(&dev->mt76.tx_napi);
local_bh_enable();


@ -4,6 +4,7 @@
*
* Copyright (C) 2010 OMICRON electronics GmbH
*/
#include <linux/compat.h>
#include <linux/module.h>
#include <linux/posix-clock.h>
#include <linux/poll.h>
@ -176,6 +177,9 @@ long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd,
struct timespec64 ts;
int enable, err = 0;
if (in_compat_syscall() && cmd != PTP_ENABLE_PPS && cmd != PTP_ENABLE_PPS2)
arg = (unsigned long)compat_ptr(arg);
tsevq = pccontext->private_clkdata;
switch (cmd) {


@ -217,6 +217,11 @@ static int ptp_getcycles64(struct ptp_clock_info *info, struct timespec64 *ts)
return info->gettime64(info, ts);
}
static int ptp_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *request, int on)
{
return -EOPNOTSUPP;
}
static void ptp_aux_kworker(struct kthread_work *work)
{
struct ptp_clock *ptp = container_of(work, struct ptp_clock,
@ -294,6 +299,9 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
ptp->info->getcrosscycles = ptp->info->getcrosststamp;
}
if (!ptp->info->enable)
ptp->info->enable = ptp_enable;
if (ptp->info->do_aux_work) {
kthread_init_delayed_work(&ptp->aux_work, ptp_aux_kworker);
ptp->kworker = kthread_run_worker(0, "ptp%d", ptp->index);

View file

@ -1085,8 +1085,8 @@ struct netdev_net_notifier {
*
* int (*ndo_do_ioctl)(struct net_device *dev, struct ifreq *ifr, int cmd);
* Old-style ioctl entry point. This is used internally by the
* appletalk and ieee802154 subsystems but is no longer called by
* the device ioctl handler.
* ieee802154 subsystem but is no longer called by the device
* ioctl handler.
*
* int (*ndo_siocbond)(struct net_device *dev, struct ifreq *ifr, int cmd);
* Used by the bonding driver for its device specific ioctls:

View file

@ -237,7 +237,6 @@ struct page_pool {
struct {
struct hlist_node list;
u64 detach_time;
u32 napi_id;
u32 id;
} user;
};

View file

@ -1268,9 +1268,19 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
if (xo) {
x = xfrm_input_state(skb);
if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET)
return (xo->flags & CRYPTO_DONE) &&
(xo->status & CRYPTO_SUCCESS);
if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) {
bool check = (xo->flags & CRYPTO_DONE) &&
(xo->status & CRYPTO_SUCCESS);
/* The packets here are plain ones and secpath was
* needed to indicate that hardware already handled
* them and there is no need to do anything in addition.
*
* Consume secpath which was set by drivers.
*/
secpath_reset(skb);
return check;
}
}
return __xfrm_check_nopolicy(net, skb, dir) ||

View file

@ -710,12 +710,12 @@ static bool l2cap_valid_mtu(struct l2cap_chan *chan, u16 mtu)
{
switch (chan->scid) {
case L2CAP_CID_ATT:
if (mtu < L2CAP_LE_MIN_MTU)
if (mtu && mtu < L2CAP_LE_MIN_MTU)
return false;
break;
default:
if (mtu < L2CAP_DEFAULT_MIN_MTU)
if (mtu && mtu < L2CAP_DEFAULT_MIN_MTU)
return false;
}

View file
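
Both checks now treat an MTU of 0 as "unspecified" rather than "too small", so only a nonzero value below the protocol minimum is rejected. The predicate this implements, as a hypothetical helper:

	/* 0 means the peer did not specify an MTU; only a nonzero
	 * value below the minimum is invalid.
	 */
	static bool example_mtu_valid(u16 mtu, u16 min_mtu)
	{
		return !mtu || mtu >= min_mtu;
	}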

@ -6708,7 +6708,7 @@ void napi_resume_irqs(unsigned int napi_id)
static void __napi_hash_add_with_id(struct napi_struct *napi,
unsigned int napi_id)
{
napi->napi_id = napi_id;
WRITE_ONCE(napi->napi_id, napi_id);
hlist_add_head_rcu(&napi->napi_hash_node,
&napi_hash[napi->napi_id % HASH_SIZE(napi_hash)]);
}
@ -9924,6 +9924,10 @@ static int dev_xdp_attach(struct net_device *dev, struct netlink_ext_ack *extack
NL_SET_ERR_MSG(extack, "Program bound to different device");
return -EINVAL;
}
if (bpf_prog_is_dev_bound(new_prog->aux) && mode == XDP_MODE_SKB) {
NL_SET_ERR_MSG(extack, "Can't attach device-bound programs in generic mode");
return -EINVAL;
}
if (new_prog->expected_attach_type == BPF_XDP_DEVMAP) {
NL_SET_ERR_MSG(extack, "BPF_XDP_DEVMAP programs can not be attached to a device");
return -EINVAL;
@ -10260,37 +10264,14 @@ static bool from_cleanup_net(void)
#endif
}
static void rtnl_drop_if_cleanup_net(void)
{
if (from_cleanup_net())
__rtnl_unlock();
}
static void rtnl_acquire_if_cleanup_net(void)
{
if (from_cleanup_net())
rtnl_lock();
}
/* Delayed registration/unregistration */
LIST_HEAD(net_todo_list);
static LIST_HEAD(net_todo_list_for_cleanup_net);
/* TODO: net_todo_list/net_todo_list_for_cleanup_net should probably
* be provided by callers, instead of being static, rtnl protected.
*/
static struct list_head *todo_list(void)
{
return from_cleanup_net() ? &net_todo_list_for_cleanup_net :
&net_todo_list;
}
DECLARE_WAIT_QUEUE_HEAD(netdev_unregistering_wq);
atomic_t dev_unreg_count = ATOMIC_INIT(0);
static void net_set_todo(struct net_device *dev)
{
list_add_tail(&dev->todo_list, todo_list());
list_add_tail(&dev->todo_list, &net_todo_list);
}
static netdev_features_t netdev_sync_upper_features(struct net_device *lower,
@ -11140,7 +11121,7 @@ void netdev_run_todo(void)
#endif
/* Snapshot list, allow later requests */
list_replace_init(todo_list(), &list);
list_replace_init(&net_todo_list, &list);
__rtnl_unlock();
@ -11785,11 +11766,9 @@ void unregister_netdevice_many_notify(struct list_head *head,
WRITE_ONCE(dev->reg_state, NETREG_UNREGISTERING);
netdev_unlock(dev);
}
rtnl_drop_if_cleanup_net();
flush_all_backlogs();
synchronize_net();
rtnl_acquire_if_cleanup_net();
list_for_each_entry(dev, head, unreg_list) {
struct sk_buff *skb = NULL;
@ -11849,9 +11828,7 @@ void unregister_netdevice_many_notify(struct list_head *head,
#endif
}
rtnl_drop_if_cleanup_net();
synchronize_net();
rtnl_acquire_if_cleanup_net();
list_for_each_entry(dev, head, unreg_list) {
netdev_put(dev, &dev->dev_registered_tracker);

View file
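
The WRITE_ONCE() above pairs with a lockless READ_ONCE() reader added in the page-pool netlink hunks further down: once a field is read outside the writer's lock, both sides need the one-time accessors so the compiler can neither tear the store nor re-load the value mid-check. A sketch of the pairing, with the reader side condensed from those hunks:

	/* writer, under the NAPI hash lock: */
	WRITE_ONCE(napi->napi_id, napi_id);

	/* lockless reader elsewhere: */
	unsigned int id = pool->p.napi ? READ_ONCE(pool->p.napi->napi_id) : 0;

	if (id >= MIN_NAPI_ID)
		/* ... report the id ... */;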

@ -1146,7 +1146,9 @@ void page_pool_disable_direct_recycling(struct page_pool *pool)
WARN_ON(!test_bit(NAPI_STATE_SCHED, &pool->p.napi->state));
WARN_ON(READ_ONCE(pool->p.napi->list_owner) != -1);
mutex_lock(&page_pools_lock);
WRITE_ONCE(pool->p.napi, NULL);
mutex_unlock(&page_pools_lock);
}
EXPORT_SYMBOL(page_pool_disable_direct_recycling);

View file

@ -7,6 +7,8 @@
#include "netmem_priv.h"
extern struct mutex page_pools_lock;
s32 page_pool_inflight(const struct page_pool *pool, bool strict);
int page_pool_list(struct page_pool *pool);

View file

@ -3,6 +3,7 @@
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/xarray.h>
#include <net/busy_poll.h>
#include <net/net_debug.h>
#include <net/netdev_rx_queue.h>
#include <net/page_pool/helpers.h>
@ -14,10 +15,11 @@
#include "netdev-genl-gen.h"
static DEFINE_XARRAY_FLAGS(page_pools, XA_FLAGS_ALLOC1);
/* Protects: page_pools, netdevice->page_pools, pool->slow.netdev, pool->user.
/* Protects: page_pools, netdevice->page_pools, pool->p.napi, pool->slow.netdev,
* pool->user.
* Ordering: inside rtnl_lock
*/
static DEFINE_MUTEX(page_pools_lock);
DEFINE_MUTEX(page_pools_lock);
/* Page pools are only reachable from user space (via netlink) if they are
* linked to a netdev at creation time. Following page pool "visibility"
@ -216,6 +218,7 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
{
struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
size_t inflight, refsz;
unsigned int napi_id;
void *hdr;
hdr = genlmsg_iput(rsp, info);
@ -229,8 +232,10 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
nla_put_u32(rsp, NETDEV_A_PAGE_POOL_IFINDEX,
pool->slow.netdev->ifindex))
goto err_cancel;
if (pool->user.napi_id &&
nla_put_uint(rsp, NETDEV_A_PAGE_POOL_NAPI_ID, pool->user.napi_id))
napi_id = pool->p.napi ? READ_ONCE(pool->p.napi->napi_id) : 0;
if (napi_id >= MIN_NAPI_ID &&
nla_put_uint(rsp, NETDEV_A_PAGE_POOL_NAPI_ID, napi_id))
goto err_cancel;
inflight = page_pool_inflight(pool, false);
@ -319,8 +324,6 @@ int page_pool_list(struct page_pool *pool)
if (pool->slow.netdev) {
hlist_add_head(&pool->user.list,
&pool->slow.netdev->page_pools);
pool->user.napi_id = pool->p.napi ? pool->p.napi->napi_id : 0;
netdev_nl_page_pool_event(pool, NETDEV_CMD_PAGE_POOL_ADD_NTF);
}

View file
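
Taken together, the three page-pool hunks replace the stale pool->user.napi_id snapshot with a live read, and make that safe by giving pool->p.napi a locking rule: it is cleared under page_pools_lock, the same mutex the netlink dump holds while dereferencing it. The contract, condensed from the hunks:

	/* detach side (page_pool_disable_direct_recycling): */
	mutex_lock(&page_pools_lock);
	WRITE_ONCE(pool->p.napi, NULL);
	mutex_unlock(&page_pools_lock);

	/* dump side runs with page_pools_lock held, so it can never
	 * follow a freed NAPI pointer:
	 */
	napi_id = pool->p.napi ? READ_ONCE(pool->p.napi->napi_id) : 0;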

@ -998,7 +998,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
ethtool_get_flow_spec_ring(info.fs.ring_cookie))
return -EINVAL;
if (ops->get_rxfh) {
if (cmd == ETHTOOL_SRXFH && ops->get_rxfh) {
struct ethtool_rxfh_param rxfh = {};
rc = ops->get_rxfh(dev, &rxfh);

View file

@ -700,9 +700,12 @@ static int fill_frame_info(struct hsr_frame_info *frame,
frame->is_vlan = true;
if (frame->is_vlan) {
if (skb->mac_len < offsetofend(struct hsr_vlan_ethhdr, vlanhdr))
/* Note: skb->mac_len might be wrong here. */
if (!pskb_may_pull(skb,
skb_mac_offset(skb) +
offsetofend(struct hsr_vlan_ethhdr, vlanhdr)))
return -EINVAL;
vlan_hdr = (struct hsr_vlan_ethhdr *)ethhdr;
vlan_hdr = (struct hsr_vlan_ethhdr *)skb_mac_header(skb);
proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
}

View file
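
Two details in this hunk are worth calling out. First, pskb_may_pull() is the authoritative bounds check: skb->mac_len can be wrong here (as the new comment notes), so it must not be trusted for a length comparison. Second, pskb_may_pull() may reallocate the skb head, which is why the VLAN header pointer is re-derived from skb_mac_header() instead of reusing the earlier ethhdr pointer:

	if (!pskb_may_pull(skb, skb_mac_offset(skb) +
				offsetofend(struct hsr_vlan_ethhdr, vlanhdr)))
		return -EINVAL;

	/* the old head may be gone; recompute, don't reuse ethhdr */
	vlan_hdr = (struct hsr_vlan_ethhdr *)skb_mac_header(skb);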

@ -279,7 +279,7 @@ static void esp_output_done(void *data, int err)
x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
esp_output_tail_tcp(x, skb);
else
xfrm_output_resume(skb->sk, skb, err);
xfrm_output_resume(skb_to_full_sk(skb), skb, err);
}
}

View file

@ -330,9 +330,6 @@ next_entry:
list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
if (e < s_e)
goto next_entry2;
if (filter->dev &&
!mr_mfc_uses_dev(mrt, mfc, filter->dev))
goto next_entry2;
err = fill(mrt, skb, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, mfc, RTM_NEWROUTE, flags);

View file

@ -265,11 +265,14 @@ static u16 tcp_select_window(struct sock *sk)
u32 cur_win, new_win;
/* Make the window 0 if we failed to queue the data because we
* are out of memory. The window is temporary, so we don't store
* it on the socket.
* are out of memory.
*/
if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM))
if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM)) {
tp->pred_flags = 0;
tp->rcv_wnd = 0;
tp->rcv_wup = tp->rcv_nxt;
return 0;
}
cur_win = tcp_receive_window(tp);
new_win = __tcp_select_window(sk);

View file
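
The two added stores matter because TCP tracks the advertised window as a pair: rcv_wup (the rcv_nxt value at the last advertisement) plus rcv_wnd. A sketch mirroring tcp_receive_window() shows why zeroing rcv_wnd alone would not be enough — a stale rcv_wup could keep the computed right edge open:

	/* What remains of the window we last advertised to the peer.
	 * After announcing a zero window, rcv_wup must be resynced to
	 * rcv_nxt so this really evaluates to 0.
	 */
	static u32 example_receive_window(const struct tcp_sock *tp)
	{
		s32 win = tp->rcv_wup + tp->rcv_wnd - tp->rcv_nxt;

		return win < 0 ? 0 : (u32)win;
	}

Clearing tp->pred_flags additionally disables header prediction, forcing the slow path to re-evaluate the window state.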

@ -315,7 +315,7 @@ static void esp_output_done(void *data, int err)
x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
esp_output_tail_tcp(x, skb);
else
xfrm_output_resume(skb->sk, skb, err);
xfrm_output_resume(skb_to_full_sk(skb), skb, err);
}
}

View file

@ -82,14 +82,14 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
toobig = skb->len > mtu && !skb_is_gso(skb);
if (toobig && xfrm6_local_dontfrag(skb->sk)) {
if (toobig && xfrm6_local_dontfrag(sk)) {
xfrm6_local_rxpmtu(skb, mtu);
kfree_skb(skb);
return -EMSGSIZE;
} else if (toobig && xfrm6_noneed_fragment(skb)) {
skb->ignore_df = 1;
goto skip_frag;
} else if (!skb->ignore_df && toobig && skb->sk) {
} else if (!skb->ignore_df && toobig && sk) {
xfrm_local_error(skb, mtu);
kfree_skb(skb);
return -EMSGSIZE;

View file

@ -418,9 +418,9 @@ void mptcp_active_detect_blackhole(struct sock *ssk, bool expired)
MPTCP_INC_STATS(net, MPTCP_MIB_MPCAPABLEACTIVEDROP);
subflow->mpc_drop = 1;
mptcp_subflow_early_fallback(mptcp_sk(subflow->conn), subflow);
} else {
subflow->mpc_drop = 0;
}
} else if (ssk->sk_state == TCP_SYN_SENT) {
subflow->mpc_drop = 0;
}
}

View file

@ -108,7 +108,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->suboptions |= OPTION_MPTCP_DSS;
mp_opt->use_map = 1;
mp_opt->mpc_map = 1;
mp_opt->use_ack = 0;
mp_opt->data_len = get_unaligned_be16(ptr);
ptr += 2;
}
@ -157,11 +156,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
pr_debug("DSS\n");
ptr++;
/* we must clear 'mpc_map' to be able to detect MP_CAPABLE
* map vs DSS map in mptcp_incoming_options(), and reconstruct
* map info accordingly
*/
mp_opt->mpc_map = 0;
flags = (*ptr++) & MPTCP_DSS_FLAG_MASK;
mp_opt->data_fin = (flags & MPTCP_DSS_DATA_FIN) != 0;
mp_opt->dsn64 = (flags & MPTCP_DSS_DSN64) != 0;
@ -369,8 +363,11 @@ void mptcp_get_options(const struct sk_buff *skb,
const unsigned char *ptr;
int length;
/* initialize option status */
mp_opt->suboptions = 0;
/* Ensure that casting the whole status to u32 is efficient and safe */
BUILD_BUG_ON(sizeof_field(struct mptcp_options_received, status) != sizeof(u32));
BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct mptcp_options_received, status),
sizeof(u32)));
*(u32 *)&mp_opt->status = 0;
length = (th->doff * 4) - sizeof(struct tcphdr);
ptr = (const unsigned char *)(th + 1);

View file

@ -2020,7 +2020,8 @@ int mptcp_pm_nl_set_flags(struct sk_buff *skb, struct genl_info *info)
return -EINVAL;
}
if ((addr.flags & MPTCP_PM_ADDR_FLAG_FULLMESH) &&
(entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) {
(entry->flags & (MPTCP_PM_ADDR_FLAG_SIGNAL |
MPTCP_PM_ADDR_FLAG_IMPLICIT))) {
spin_unlock_bh(&pernet->lock);
GENL_SET_ERR_MSG(info, "invalid addr flags");
return -EINVAL;

View file

@ -1767,8 +1767,10 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
* see mptcp_disconnect().
* Attempt it again outside the problematic scope.
*/
if (!mptcp_disconnect(sk, 0))
if (!mptcp_disconnect(sk, 0)) {
sk->sk_disconnects++;
sk->sk_socket->state = SS_UNCONNECTED;
}
}
inet_clear_bit(DEFER_CONNECT, sk);

View file

@ -149,22 +149,24 @@ struct mptcp_options_received {
u32 subflow_seq;
u16 data_len;
__sum16 csum;
u16 suboptions;
struct_group(status,
u16 suboptions;
u16 use_map:1,
dsn64:1,
data_fin:1,
use_ack:1,
ack64:1,
mpc_map:1,
reset_reason:4,
reset_transient:1,
echo:1,
backup:1,
deny_join_id0:1,
__unused:2;
);
u8 join_id;
u32 token;
u32 nonce;
u16 use_map:1,
dsn64:1,
data_fin:1,
use_ack:1,
ack64:1,
mpc_map:1,
reset_reason:4,
reset_transient:1,
echo:1,
backup:1,
deny_join_id0:1,
__unused:2;
u8 join_id;
u64 thmac;
u8 hmac[MPTCPOPT_HMAC_LEN];
struct mptcp_addr_info addr;

View file
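
The struct_group() move lets mptcp_get_options() clear suboptions and all the following bitfields with the single aligned u32 store seen earlier, while the BUILD_BUG_ON()s pin the size and alignment assumptions at compile time. A self-contained sketch of the idiom, with a hypothetical struct:

	#include <linux/stddef.h>
	#include <linux/build_bug.h>
	#include <linux/align.h>

	struct example_opts {
		struct_group(status,	/* fields reset as one unit */
			u16 suboptions;
			u16 flags;
		);
		u32 token;		/* not part of the bulk reset */
	};

	static void example_opts_reset(struct example_opts *o)
	{
		BUILD_BUG_ON(sizeof_field(struct example_opts, status) !=
			     sizeof(u32));
		BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct example_opts, status),
					 sizeof(u32)));
		*(u32 *)&o->status = 0;
	}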

@ -1385,6 +1385,12 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
nd->state = ncsi_dev_state_probe_package;
break;
case ncsi_dev_state_probe_package:
if (ndp->package_probe_id >= 8) {
/* Last package probed, finishing */
ndp->flags |= NCSI_DEV_PROBED;
break;
}
ndp->pending_req_num = 1;
nca.type = NCSI_PKT_CMD_SP;
@ -1501,13 +1507,8 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
if (ret)
goto error;
/* Probe next package */
/* Probe next package after receiving response */
ndp->package_probe_id++;
if (ndp->package_probe_id >= 8) {
/* Probe finished */
ndp->flags |= NCSI_DEV_PROBED;
break;
}
nd->state = ncsi_dev_state_probe_package;
ndp->active_package = NULL;
break;

View file

@ -1089,14 +1089,12 @@ static int ncsi_rsp_handler_netlink(struct ncsi_request *nr)
static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
{
struct ncsi_dev_priv *ndp = nr->ndp;
struct sockaddr *saddr = &ndp->pending_mac;
struct net_device *ndev = ndp->ndev.dev;
struct ncsi_rsp_gmcma_pkt *rsp;
struct sockaddr saddr;
int ret = -1;
int i;
rsp = (struct ncsi_rsp_gmcma_pkt *)skb_network_header(nr->rsp);
saddr.sa_family = ndev->type;
ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
netdev_info(ndev, "NCSI: Received %d provisioned MAC addresses\n",
@ -1108,20 +1106,20 @@ static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
rsp->addresses[i][4], rsp->addresses[i][5]);
}
saddr->sa_family = ndev->type;
for (i = 0; i < rsp->address_count; i++) {
memcpy(saddr.sa_data, &rsp->addresses[i], ETH_ALEN);
ret = ndev->netdev_ops->ndo_set_mac_address(ndev, &saddr);
if (ret < 0) {
if (!is_valid_ether_addr(rsp->addresses[i])) {
netdev_warn(ndev, "NCSI: Unable to assign %pM to device\n",
saddr.sa_data);
rsp->addresses[i]);
continue;
}
netdev_warn(ndev, "NCSI: Set MAC address to %pM\n", saddr.sa_data);
memcpy(saddr->sa_data, rsp->addresses[i], ETH_ALEN);
netdev_warn(ndev, "NCSI: Will set MAC address to %pM\n", saddr->sa_data);
break;
}
ndp->gma_flag = ret == 0;
return ret;
ndp->gma_flag = 1;
return 0;
}
static struct ncsi_rsp_handler {

View file

@ -5078,7 +5078,7 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
static int nft_set_desc_concat(struct nft_set_desc *desc,
const struct nlattr *nla)
{
u32 num_regs = 0, key_num_regs = 0;
u32 len = 0, num_regs;
struct nlattr *attr;
int rem, err, i;
@ -5092,12 +5092,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
}
for (i = 0; i < desc->field_count; i++)
num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
len += round_up(desc->field_len[i], sizeof(u32));
key_num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
if (key_num_regs != num_regs)
if (len != desc->klen)
return -EINVAL;
num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
if (num_regs > NFT_REG32_COUNT)
return -E2BIG;

View file

@ -542,6 +542,8 @@ static u8 nci_hci_create_pipe(struct nci_dev *ndev, u8 dest_host,
pr_debug("pipe created=%d\n", pipe);
if (pipe >= NCI_HCI_MAX_PIPES)
pipe = NCI_HCI_INVALID_PIPE;
return pipe;
}

View file

@ -122,6 +122,10 @@ static void rose_heartbeat_expiry(struct timer_list *t)
struct rose_sock *rose = rose_sk(sk);
bh_lock_sock(sk);
if (sock_owned_by_user(sk)) {
sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ/20);
goto out;
}
switch (rose->state) {
case ROSE_STATE_0:
/* Magic here: If we listen() and a new link dies before it
@ -152,6 +156,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
}
rose_start_heartbeat(sk);
out:
bh_unlock_sock(sk);
sock_put(sk);
}
@ -162,6 +167,10 @@ static void rose_timer_expiry(struct timer_list *t)
struct sock *sk = &rose->sock;
bh_lock_sock(sk);
if (sock_owned_by_user(sk)) {
sk_reset_timer(sk, &rose->timer, jiffies + HZ/20);
goto out;
}
switch (rose->state) {
case ROSE_STATE_1: /* T1 */
case ROSE_STATE_4: /* T2 */
@ -182,6 +191,7 @@ static void rose_timer_expiry(struct timer_list *t)
}
break;
}
out:
bh_unlock_sock(sk);
sock_put(sk);
}
@ -192,6 +202,10 @@ static void rose_idletimer_expiry(struct timer_list *t)
struct sock *sk = &rose->sock;
bh_lock_sock(sk);
if (sock_owned_by_user(sk)) {
sk_reset_timer(sk, &rose->idletimer, jiffies + HZ/20);
goto out;
}
rose_clear_queues(sk);
rose_write_internal(sk, ROSE_CLEAR_REQUEST);
@ -207,6 +221,7 @@ static void rose_idletimer_expiry(struct timer_list *t)
sk->sk_state_change(sk);
sock_set_flag(sk, SOCK_DEAD);
}
out:
bh_unlock_sock(sk);
sock_put(sk);
}

View file
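
All three ROSE timer handlers gain the same guard: a timer runs in BH context under bh_lock_sock(), but if a process currently owns the socket via lock_sock(), the handler must not touch protocol state; it re-arms itself 1/20 of a second later and bails out instead. The shape of the pattern, as a hypothetical handler:

	static void example_timer_expiry(struct timer_list *t)
	{
		struct sock *sk = from_timer(sk, t, sk_timer);

		bh_lock_sock(sk);
		if (sock_owned_by_user(sk)) {
			/* a process holds the socket; retry shortly
			 * rather than racing with it
			 */
			sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ / 20);
			goto out;
		}

		/* ... safe to modify protocol state here ... */
	out:
		bh_unlock_sock(sk);
		sock_put(sk);
	}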

@ -246,7 +246,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
bool use;
int slot;
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
while (!list_empty(collector)) {
peer = list_entry(collector->next,
@ -257,7 +257,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
continue;
use = __rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive);
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
if (use) {
keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME;
@ -277,17 +277,17 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
*/
slot += cursor;
slot &= mask;
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
list_add_tail(&peer->keepalive_link,
&rxnet->peer_keepalive[slot & mask]);
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive);
}
rxrpc_put_peer(peer, rxrpc_peer_put_keepalive);
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
}
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
}
/*
@ -317,7 +317,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work)
* second; the bucket at cursor + 1 goes at now + 1s and so
* on...
*/
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
list_splice_init(&rxnet->peer_keepalive_new, &collector);
stop = cursor + ARRAY_SIZE(rxnet->peer_keepalive);
@ -329,7 +329,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work)
}
base = now;
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
rxnet->peer_keepalive_base = base;
rxnet->peer_keepalive_cursor = cursor;

View file

@ -325,10 +325,10 @@ void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer)
hash_key = rxrpc_peer_hash_key(local, &peer->srx);
rxrpc_init_peer(local, peer, hash_key);
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
}
/*
@ -360,7 +360,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
return NULL;
}
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
/* Need to check that we aren't racing with someone else */
peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
@ -373,7 +373,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
&rxnet->peer_keepalive_new);
}
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
if (peer)
rxrpc_free_peer(candidate);
@ -423,10 +423,10 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
ASSERT(hlist_empty(&peer->error_targets));
spin_lock(&rxnet->peer_hash_lock);
spin_lock_bh(&rxnet->peer_hash_lock);
hash_del_rcu(&peer->hash_link);
list_del_init(&peer->keepalive_link);
spin_unlock(&rxnet->peer_hash_lock);
spin_unlock_bh(&rxnet->peer_hash_lock);
rxrpc_free_peer(peer);
}

View file
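
The wholesale switch to spin_lock_bh()/spin_unlock_bh() across both rxrpc files suggests peer_hash_lock can now be contended from softirq context. The rule being enforced is the usual one: a spinlock ever taken with BHs disabled must use the _bh variants in process context too, otherwise a softirq can interrupt the lock holder on the same CPU and spin forever on the lock it already holds:

	/* process context must block BHs for the whole critical
	 * section if any BH-context user of the lock exists
	 */
	spin_lock_bh(&rxnet->peer_hash_lock);
	list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
	spin_unlock_bh(&rxnet->peer_hash_lock);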

@ -91,6 +91,8 @@ ets_class_from_arg(struct Qdisc *sch, unsigned long arg)
{
struct ets_sched *q = qdisc_priv(sch);
if (arg == 0 || arg > q->nbands)
return NULL;
return &q->classes[arg - 1];
}

View file

@ -337,7 +337,10 @@ EXPORT_SYMBOL_GPL(vsock_find_connected_socket);
void vsock_remove_sock(struct vsock_sock *vsk)
{
vsock_remove_bound(vsk);
/* Transport reassignment must not remove the binding. */
if (sock_flag(sk_vsock(vsk), SOCK_DEAD))
vsock_remove_bound(vsk);
vsock_remove_connected(vsk);
}
EXPORT_SYMBOL_GPL(vsock_remove_sock);
@ -821,12 +824,13 @@ static void __vsock_release(struct sock *sk, int level)
*/
lock_sock_nested(sk, level);
sock_orphan(sk);
if (vsk->transport)
vsk->transport->release(vsk);
else if (sock_type_connectible(sk->sk_type))
vsock_remove_sock(vsk);
sock_orphan(sk);
sk->sk_shutdown = SHUTDOWN_MASK;
skb_queue_purge(&sk->sk_receive_queue);
@ -1519,6 +1523,11 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
if (err < 0)
goto out;
/* sk_err might have been set as a result of an earlier
* (failed) connect attempt.
*/
sk->sk_err = 0;
/* Mark sock as connecting and set the error code to in
* progress in case this is a non-blocking connect.
*/

View file

@ -506,7 +506,7 @@ xmit:
skb_dst_set(skb, dst);
skb->dev = tdev;
err = dst_output(xi->net, skb->sk, skb);
err = dst_output(xi->net, skb_to_full_sk(skb), skb);
if (net_xmit_eval(err) == 0) {
dev_sw_netstats_tx_add(dev, 1, length);
} else {

View file

@ -802,7 +802,7 @@ static int xfrm4_tunnel_check_size(struct sk_buff *skb)
!skb_gso_validate_network_len(skb, ip_skb_dst_mtu(skb->sk, skb)))) {
skb->protocol = htons(ETH_P_IP);
if (skb->sk)
if (skb->sk && sk_fullsock(skb->sk))
xfrm_local_error(skb, mtu);
else
icmp_send(skb, ICMP_DEST_UNREACH,
@ -838,6 +838,7 @@ static int xfrm6_tunnel_check_size(struct sk_buff *skb)
{
int mtu, ret = 0;
struct dst_entry *dst = skb_dst(skb);
struct sock *sk = skb_to_full_sk(skb);
if (skb->ignore_df)
goto out;
@ -852,9 +853,9 @@ static int xfrm6_tunnel_check_size(struct sk_buff *skb)
skb->dev = dst->dev;
skb->protocol = htons(ETH_P_IPV6);
if (xfrm6_local_dontfrag(skb->sk))
if (xfrm6_local_dontfrag(sk))
ipv6_stub->xfrm6_local_rxpmtu(skb, mtu);
else if (skb->sk)
else if (sk)
xfrm_local_error(skb, mtu);
else
icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);

View file

@ -2964,7 +2964,7 @@ static void xfrm_policy_queue_process(struct timer_list *t)
skb_dst_drop(skb);
skb_dst_set(skb, dst);
dst_output(net, skb->sk, skb);
dst_output(net, skb_to_full_sk(skb), skb);
}
out:

View file

@ -714,10 +714,12 @@ static int xfrm_replay_overflow_offload_esn(struct xfrm_state *x, struct sk_buff
oseq += skb_shinfo(skb)->gso_segs;
}
if (unlikely(xo->seq.low < replay_esn->oseq)) {
XFRM_SKB_CB(skb)->seq.output.hi = ++oseq_hi;
xo->seq.hi = oseq_hi;
replay_esn->oseq_hi = oseq_hi;
if (unlikely(oseq < replay_esn->oseq)) {
replay_esn->oseq_hi = ++oseq_hi;
if (xo->seq.low < replay_esn->oseq) {
XFRM_SKB_CB(skb)->seq.output.hi = oseq_hi;
xo->seq.hi = oseq_hi;
}
if (replay_esn->oseq_hi == 0) {
replay_esn->oseq--;
replay_esn->oseq_hi--;

View file
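
The rewritten branch appears to separate two conditions the old code conflated: whether the advanced output sequence (oseq) wrapped, which decides if replay_esn->oseq_hi must be carried, and whether this particular skb's low half (xo->seq.low) sits past the wrap point, which decides if the skb's own high half needs the new value. The carry rule itself, as a hypothetical helper over the two u32 halves of the 64-bit ESN counter:

	/* advance a 64-bit ESN kept as two u32 halves by 'segs'
	 * packets; unsigned wraparound of the low half (new value
	 * compares below the old one) carries into the high half
	 */
	static void example_esn_advance(u32 *lo, u32 *hi, u32 segs)
	{
		u32 old = *lo;

		*lo = old + segs;
		if (*lo < old)
			(*hi)++;
	}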

@ -34,6 +34,8 @@
#define xfrm_state_deref_prot(table, net) \
rcu_dereference_protected((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
#define xfrm_state_deref_check(table, net) \
rcu_dereference_check((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
static void xfrm_state_gc_task(struct work_struct *work);
@ -62,6 +64,8 @@ static inline unsigned int xfrm_dst_hash(struct net *net,
u32 reqid,
unsigned short family)
{
lockdep_assert_held(&net->xfrm.xfrm_state_lock);
return __xfrm_dst_hash(daddr, saddr, reqid, family, net->xfrm.state_hmask);
}
@ -70,6 +74,8 @@ static inline unsigned int xfrm_src_hash(struct net *net,
const xfrm_address_t *saddr,
unsigned short family)
{
lockdep_assert_held(&net->xfrm.xfrm_state_lock);
return __xfrm_src_hash(daddr, saddr, family, net->xfrm.state_hmask);
}
@ -77,11 +83,15 @@ static inline unsigned int
xfrm_spi_hash(struct net *net, const xfrm_address_t *daddr,
__be32 spi, u8 proto, unsigned short family)
{
lockdep_assert_held(&net->xfrm.xfrm_state_lock);
return __xfrm_spi_hash(daddr, spi, proto, family, net->xfrm.state_hmask);
}
static unsigned int xfrm_seq_hash(struct net *net, u32 seq)
{
lockdep_assert_held(&net->xfrm.xfrm_state_lock);
return __xfrm_seq_hash(seq, net->xfrm.state_hmask);
}
@ -1108,16 +1118,38 @@ xfrm_init_tempstate(struct xfrm_state *x, const struct flowi *fl,
x->props.family = tmpl->encap_family;
}
static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
struct xfrm_hash_state_ptrs {
const struct hlist_head *bydst;
const struct hlist_head *bysrc;
const struct hlist_head *byspi;
unsigned int hmask;
};
static void xfrm_hash_ptrs_get(const struct net *net, struct xfrm_hash_state_ptrs *ptrs)
{
unsigned int sequence;
do {
sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
ptrs->bydst = xfrm_state_deref_check(net->xfrm.state_bydst, net);
ptrs->bysrc = xfrm_state_deref_check(net->xfrm.state_bysrc, net);
ptrs->byspi = xfrm_state_deref_check(net->xfrm.state_byspi, net);
ptrs->hmask = net->xfrm.state_hmask;
} while (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence));
}
static struct xfrm_state *__xfrm_state_lookup_all(const struct xfrm_hash_state_ptrs *state_ptrs,
u32 mark,
const xfrm_address_t *daddr,
__be32 spi, u8 proto,
unsigned short family,
struct xfrm_dev_offload *xdo)
{
unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
struct xfrm_state *x;
hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
#ifdef CONFIG_XFRM_OFFLOAD
if (xdo->type == XFRM_DEV_OFFLOAD_PACKET) {
if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
@ -1151,15 +1183,16 @@ static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
return NULL;
}
static struct xfrm_state *__xfrm_state_lookup(struct net *net, u32 mark,
static struct xfrm_state *__xfrm_state_lookup(const struct xfrm_hash_state_ptrs *state_ptrs,
u32 mark,
const xfrm_address_t *daddr,
__be32 spi, u8 proto,
unsigned short family)
{
unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
struct xfrm_state *x;
hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
if (x->props.family != family ||
x->id.spi != spi ||
x->id.proto != proto ||
@ -1181,11 +1214,11 @@ struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
__be32 spi, u8 proto,
unsigned short family)
{
struct xfrm_hash_state_ptrs state_ptrs;
struct hlist_head *state_cache_input;
struct xfrm_state *x = NULL;
int cpu = get_cpu();
state_cache_input = per_cpu_ptr(net->xfrm.state_cache_input, cpu);
state_cache_input = raw_cpu_ptr(net->xfrm.state_cache_input);
rcu_read_lock();
hlist_for_each_entry_rcu(x, state_cache_input, state_cache_input) {
@ -1202,7 +1235,9 @@ struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
goto out;
}
x = __xfrm_state_lookup(net, mark, daddr, spi, proto, family);
xfrm_hash_ptrs_get(net, &state_ptrs);
x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
if (x && x->km.state == XFRM_STATE_VALID) {
spin_lock_bh(&net->xfrm.xfrm_state_lock);
@ -1217,20 +1252,20 @@ struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
out:
rcu_read_unlock();
put_cpu();
return x;
}
EXPORT_SYMBOL(xfrm_input_state_lookup);
static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
static struct xfrm_state *__xfrm_state_lookup_byaddr(const struct xfrm_hash_state_ptrs *state_ptrs,
u32 mark,
const xfrm_address_t *daddr,
const xfrm_address_t *saddr,
u8 proto, unsigned short family)
{
unsigned int h = xfrm_src_hash(net, daddr, saddr, family);
unsigned int h = __xfrm_src_hash(daddr, saddr, family, state_ptrs->hmask);
struct xfrm_state *x;
hlist_for_each_entry_rcu(x, net->xfrm.state_bysrc + h, bysrc) {
hlist_for_each_entry_rcu(x, state_ptrs->bysrc + h, bysrc) {
if (x->props.family != family ||
x->id.proto != proto ||
!xfrm_addr_equal(&x->id.daddr, daddr, family) ||
@ -1250,14 +1285,17 @@ static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
static inline struct xfrm_state *
__xfrm_state_locate(struct xfrm_state *x, int use_spi, int family)
{
struct xfrm_hash_state_ptrs state_ptrs;
struct net *net = xs_net(x);
u32 mark = x->mark.v & x->mark.m;
xfrm_hash_ptrs_get(net, &state_ptrs);
if (use_spi)
return __xfrm_state_lookup(net, mark, &x->id.daddr,
return __xfrm_state_lookup(&state_ptrs, mark, &x->id.daddr,
x->id.spi, x->id.proto, family);
else
return __xfrm_state_lookup_byaddr(net, mark,
return __xfrm_state_lookup_byaddr(&state_ptrs, mark,
&x->id.daddr,
&x->props.saddr,
x->id.proto, family);
@ -1331,6 +1369,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
unsigned short family, u32 if_id)
{
static xfrm_address_t saddr_wildcard = { };
struct xfrm_hash_state_ptrs state_ptrs;
struct net *net = xp_net(pol);
unsigned int h, h_wildcard;
struct xfrm_state *x, *x0, *to_put;
@ -1395,8 +1434,10 @@ cached:
else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */
WARN_ON(1);
h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {
xfrm_hash_ptrs_get(net, &state_ptrs);
h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask);
hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) {
#ifdef CONFIG_XFRM_OFFLOAD
if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
@ -1429,8 +1470,9 @@ cached:
if (best || acquire_in_progress)
goto found;
h_wildcard = xfrm_dst_hash(net, daddr, &saddr_wildcard, tmpl->reqid, encap_family);
hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h_wildcard, bydst) {
h_wildcard = __xfrm_dst_hash(daddr, &saddr_wildcard, tmpl->reqid,
encap_family, state_ptrs.hmask);
hlist_for_each_entry_rcu(x, state_ptrs.bydst + h_wildcard, bydst) {
#ifdef CONFIG_XFRM_OFFLOAD
if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
@ -1468,7 +1510,7 @@ found:
if (!x && !error && !acquire_in_progress) {
if (tmpl->id.spi &&
(x0 = __xfrm_state_lookup_all(net, mark, daddr,
(x0 = __xfrm_state_lookup_all(&state_ptrs, mark, daddr,
tmpl->id.spi, tmpl->id.proto,
encap_family,
&pol->xdo)) != NULL) {
@ -2253,10 +2295,13 @@ struct xfrm_state *
xfrm_state_lookup(struct net *net, u32 mark, const xfrm_address_t *daddr, __be32 spi,
u8 proto, unsigned short family)
{
struct xfrm_hash_state_ptrs state_ptrs;
struct xfrm_state *x;
rcu_read_lock();
x = __xfrm_state_lookup(net, mark, daddr, spi, proto, family);
xfrm_hash_ptrs_get(net, &state_ptrs);
x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
rcu_read_unlock();
return x;
}
@ -2267,10 +2312,14 @@ xfrm_state_lookup_byaddr(struct net *net, u32 mark,
const xfrm_address_t *daddr, const xfrm_address_t *saddr,
u8 proto, unsigned short family)
{
struct xfrm_hash_state_ptrs state_ptrs;
struct xfrm_state *x;
spin_lock_bh(&net->xfrm.xfrm_state_lock);
x = __xfrm_state_lookup_byaddr(net, mark, daddr, saddr, proto, family);
xfrm_hash_ptrs_get(net, &state_ptrs);
x = __xfrm_state_lookup_byaddr(&state_ptrs, mark, daddr, saddr, proto, family);
spin_unlock_bh(&net->xfrm.xfrm_state_lock);
return x;
}

View file
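
xfrm_hash_ptrs_get() is the heart of this refactor: instead of hashing against whatever net->xfrm.state_hmask happens to be at lookup time, callers snapshot the three table pointers and the mask together under the hash-generation seqcount, so a concurrent resize can never pair a new table with an old mask (or vice versa) and index out of bounds. A generic sketch of the idiom with hypothetical names; the caller is assumed to hold rcu_read_lock():

	static seqcount_t tbl_resize_seq;
	static struct hlist_head __rcu *tbl_head;
	static unsigned int tbl_mask;

	struct tbl_snap {
		struct hlist_head *tbl;
		unsigned int mask;
	};

	static void tbl_snapshot(struct tbl_snap *s)
	{
		unsigned int seq;

		do {
			seq = read_seqcount_begin(&tbl_resize_seq);
			s->tbl = rcu_dereference(tbl_head);
			s->mask = tbl_mask;
		} while (read_seqcount_retry(&tbl_resize_seq, seq));
	}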

@ -95,7 +95,7 @@ ynl_err_walk(struct ynl_sock *ys, void *start, void *end, unsigned int off,
ynl_attr_for_each_payload(start, data_len, attr) {
astart_off = (char *)attr - (char *)start;
aend_off = astart_off + ynl_attr_data_len(attr);
aend_off = (char *)ynl_attr_data_end(attr) - (char *)start;
if (aend_off <= off)
continue;

View file

@ -142,7 +142,7 @@ function pre_ethtool {
}
function check_table {
local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
local -n expected=$2
local last=$3
@ -212,7 +212,7 @@ function check_tables {
}
function print_table {
local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
read -a have < $path
tree $NSIM_DEV_DFS/
@ -641,7 +641,7 @@ for port in 0 1; do
NSIM_NETDEV=`get_netdev_name old_netdevs`
ip link set dev $NSIM_NETDEV up
echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
msg="1 - create VxLANs v6"
exp0=( 0 0 0 0 )
@ -663,7 +663,7 @@ for port in 0 1; do
new_geneve gnv0 20000
msg="2 - destroy GENEVE"
echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
exp1=( `mke 20000 2` 0 0 0 )
del_dev gnv0
@ -764,7 +764,7 @@ for port in 0 1; do
msg="create VxLANs v4"
new_vxlan vxlan0 10000 $NSIM_NETDEV
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
check_tables
msg="NIC device goes down"
@ -775,7 +775,7 @@ for port in 0 1; do
fi
check_tables
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
check_tables
msg="NIC device goes up again"
@ -789,7 +789,7 @@ for port in 0 1; do
del_dev vxlan0
check_tables
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
check_tables
msg="destroy NIC"
@ -896,7 +896,7 @@ msg="vacate VxLAN in overflow table"
exp0=( `mke 10000 1` `mke 10004 1` 0 `mke 10003 1` )
del_dev vxlan2
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
check_tables
msg="tunnels destroyed 2"

View file

@ -215,12 +215,14 @@ def bpftool_map_list_wait(expected=0, n_retry=20, ns=""):
raise Exception("Time out waiting for map counts to stabilize want %d, have %d" % (expected, nmaps))
def bpftool_prog_load(sample, file_name, maps=[], prog_type="xdp", dev=None,
fail=True, include_stderr=False):
fail=True, include_stderr=False, dev_bind=None):
args = "prog load %s %s" % (os.path.join(bpf_test_dir, sample), file_name)
if prog_type is not None:
args += " type " + prog_type
if dev is not None:
args += " dev " + dev
elif dev_bind is not None:
args += " xdpmeta_dev " + dev_bind
if len(maps):
args += " map " + " map ".join(maps)
@ -980,6 +982,16 @@ try:
rm("/sys/fs/bpf/offload")
sim.wait_for_flush()
bpftool_prog_load("sample_ret0.bpf.o", "/sys/fs/bpf/devbound",
dev_bind=sim['ifname'])
devbound = bpf_pinned("/sys/fs/bpf/devbound")
start_test("Test dev-bound program in generic mode...")
ret, _, err = sim.set_xdp(devbound, "generic", fail=False, include_stderr=True)
fail(ret == 0, "devbound program in generic mode allowed")
check_extack(err, "Can't attach device-bound programs in generic mode.", args)
rm("/sys/fs/bpf/devbound")
sim.wait_for_flush()
start_test("Test XDP load failure...")
sim.dfs["dev/bpf_bind_verifier_accept"] = 0
ret, _, err = bpftool_prog_load("sample_ret0.bpf.o", "/sys/fs/bpf/offload",

View file

@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS = -Wall -Wl,--no-as-needed -O2 -g
CFLAGS += -Wall -Wl,--no-as-needed -O2 -g
CFLAGS += -I../../../../../usr/include/ $(KHDR_INCLUDES)
# Additional include paths needed by kselftest.h
CFLAGS += -I../../

View file

@ -2,7 +2,7 @@
top_srcdir = ../../../../..
CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
simult_flows.sh mptcp_sockopt.sh userspace_pm.sh

View file

@ -2,7 +2,7 @@
top_srcdir = ../../../../..
CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
TEST_PROGS := openvswitch.sh

View file

@ -39,11 +39,13 @@ if [[ -n "${KSFT_MACHINE_SLOW}" ]]; then
# xfail tests that are known flaky with dbg config, not fixable.
# still run them for coverage (and expect 100% pass without dbg).
declare -ar xfail_list=(
"tcp_eor_no-coalesce-retrans.pkt"
"tcp_fast_recovery_prr-ss.*.pkt"
"tcp_slow_start_slow-start-after-win-update.pkt"
"tcp_timestamping.*.pkt"
"tcp_user_timeout_user-timeout-probe.pkt"
"tcp_zerocopy_epoll_.*.pkt"
"tcp_tcp_info_tcp-info-*-limited.pkt"
"tcp_tcp_info_tcp-info-.*-limited.pkt"
)
readonly xfail_regex="^($(printf '%s|' "${xfail_list[@]}"))$"
[[ "$script" =~ ${xfail_regex} ]] && failfunc=ktap_test_xfail

Some files were not shown because too many files have changed in this diff.