
Merge tag 'net-6.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Interestingly the recent kmemleak improvements allowed our CI to catch
  a couple of percpu leaks addressed here.

  We (mostly Jakub, to be accurate) are working to increase review
  coverage over the net code-base by tweaking the MAINTAINERS entries.

  Current release - regressions:

   - core: harmonize tstats and dstats

   - ipv6: fix dst refleaks in rpl, seg6 and ioam6 lwtunnels

   - eth: tun: revert fix group permission check

   - eth: stmmac: revert "specify hardware capability value when FIFO
     size isn't specified"

  Previous releases - regressions:

   - udp: gso: do not drop small packets when PMTU reduces

   - rxrpc: fix race in call state changing vs recvmsg()

   - eth: ice: fix Rx data path for heavy 9k MTU traffic

   - eth: vmxnet3: fix tx queue race condition with XDP

  Previous releases - always broken:

   - sched: pfifo_tail_enqueue: drop new packet when sch->limit == 0

   - ethtool: ntuple: fix rss + ring_cookie check

   - rxrpc: fix the rxrpc_connection attend queue handling

  Misc:

   - recognize Kuniyuki Iwashima as a maintainer"

* tag 'net-6.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (34 commits)
  Revert "net: stmmac: Specify hardware capability value when FIFO size isn't specified"
  MAINTAINERS: add a sample ethtool section entry
  MAINTAINERS: add entry for ethtool
  rxrpc: Fix race in call state changing vs recvmsg()
  rxrpc: Fix call state set to not include the SERVER_SECURING state
  net: sched: Fix truncation of offloaded action statistics
  tun: revert fix group permission check
  selftests/tc-testing: Add a test case for qdisc_tree_reduce_backlog()
  netem: Update sch->q.qlen before qdisc_tree_reduce_backlog()
  selftests/tc-testing: Add a test case for pfifo_head_drop qdisc when limit==0
  pfifo_tail_enqueue: Drop new packet when sch->limit == 0
  selftests: mptcp: connect: -f: no reconnect
  net: rose: lock the socket in rose_bind()
  net: atlantic: fix warning during hot unplug
  rxrpc: Fix the rxrpc_connection attend queue handling
  net: harmonize tstats and dstats
  selftests: drv-net: rss_ctx: don't fail reconfigure test if queue offset not supported
  selftests: drv-net: rss_ctx: add missing cleanup in queue reconfigure
  ethtool: ntuple: fix rss + ring_cookie check
  ethtool: rss: fix hiding unsupported fields in dumps
  ...
Commit 3cf0a98fea by Linus Torvalds, 2025-02-06 09:14:54 -08:00
36 changed files with 446 additions and 178 deletions

@@ -16462,6 +16462,22 @@ F: include/net/dsa.h
 F: net/dsa/
 F: tools/testing/selftests/drivers/net/dsa/
 
+NETWORKING [ETHTOOL]
+M: Andrew Lunn <andrew@lunn.ch>
+M: Jakub Kicinski <kuba@kernel.org>
+F: Documentation/netlink/specs/ethtool.yaml
+F: Documentation/networking/ethtool-netlink.rst
+F: include/linux/ethtool*
+F: include/uapi/linux/ethtool*
+F: net/ethtool/
+F: tools/testing/selftests/drivers/net/*/ethtool*
+
+NETWORKING [ETHTOOL CABLE TEST]
+M: Andrew Lunn <andrew@lunn.ch>
+F: net/ethtool/cabletest.c
+F: tools/testing/selftests/drivers/net/*/ethtool*
+K: cable_test
+
 NETWORKING [GENERAL]
 M: "David S. Miller" <davem@davemloft.net>
 M: Eric Dumazet <edumazet@google.com>
@@ -16621,6 +16637,7 @@ F: tools/testing/selftests/net/mptcp/
 NETWORKING [TCP]
 M: Eric Dumazet <edumazet@google.com>
 M: Neal Cardwell <ncardwell@google.com>
+R: Kuniyuki Iwashima <kuniyu@amazon.com>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/networking/net_cachelines/tcp_sock.rst
@@ -16648,6 +16665,31 @@ F: include/net/tls.h
 F: include/uapi/linux/tls.h
 F: net/tls/*
 
+NETWORKING [SOCKETS]
+M: Eric Dumazet <edumazet@google.com>
+M: Kuniyuki Iwashima <kuniyu@amazon.com>
+M: Paolo Abeni <pabeni@redhat.com>
+M: Willem de Bruijn <willemb@google.com>
+S: Maintained
+F: include/linux/sock_diag.h
+F: include/linux/socket.h
+F: include/linux/sockptr.h
+F: include/net/sock.h
+F: include/net/sock_reuseport.h
+F: include/uapi/linux/socket.h
+F: net/core/*sock*
+F: net/core/scm.c
+F: net/socket.c
+
+NETWORKING [UNIX SOCKETS]
+M: Kuniyuki Iwashima <kuniyu@amazon.com>
+S: Maintained
+F: include/net/af_unix.h
+F: include/net/netns/unix.h
+F: include/uapi/linux/unix_diag.h
+F: net/unix/
+F: tools/testing/selftests/net/af_unix/
+
 NETXEN (1/10) GbE SUPPORT
 M: Manish Chopra <manishc@marvell.com>
 M: Rahul Verma <rahulv@marvell.com>
@@ -17713,6 +17755,7 @@ L: netdev@vger.kernel.org
 L: dev@openvswitch.org
 S: Maintained
 W: http://openvswitch.org
+F: Documentation/networking/openvswitch.rst
 F: include/uapi/linux/openvswitch.h
 F: net/openvswitch/
 F: tools/testing/selftests/net/openvswitch/

@@ -1441,7 +1441,9 @@ void aq_nic_deinit(struct aq_nic_s *self, bool link_down)
 	aq_ptp_ring_free(self);
 	aq_ptp_free(self);
 
-	if (likely(self->aq_fw_ops->deinit) && link_down) {
+	/* May be invoked during hot unplug. */
+	if (pci_device_is_present(self->pdev) &&
+	    likely(self->aq_fw_ops->deinit) && link_down) {
 		mutex_lock(&self->fwreq_mutex);
 		self->aq_fw_ops->deinit(self->aq_hw);
 		mutex_unlock(&self->fwreq_mutex);

@@ -41,9 +41,12 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 {
 	struct bcmgenet_priv *priv = netdev_priv(dev);
 	struct device *kdev = &priv->pdev->dev;
+	u32 phy_wolopts = 0;
 
-	if (dev->phydev)
+	if (dev->phydev) {
 		phy_ethtool_get_wol(dev->phydev, wol);
+		phy_wolopts = wol->wolopts;
+	}
 
 	/* MAC is not wake-up capable, return what the PHY does */
 	if (!device_can_wakeup(kdev))
@@ -51,9 +54,14 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 	/* Overlay MAC capabilities with that of the PHY queried before */
 	wol->supported |= WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
-	wol->wolopts = priv->wolopts;
-	memset(wol->sopass, 0, sizeof(wol->sopass));
+	wol->wolopts |= priv->wolopts;
 
+	/* Return the PHY configured magic password */
+	if (phy_wolopts & WAKE_MAGICSECURE)
+		return;
+
+	/* Otherwise the MAC one */
+	memset(wol->sopass, 0, sizeof(wol->sopass));
 	if (wol->wolopts & WAKE_MAGICSECURE)
 		memcpy(wol->sopass, priv->sopass, sizeof(priv->sopass));
 }
@@ -70,7 +78,7 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
 	/* Try Wake-on-LAN from the PHY first */
 	if (dev->phydev) {
 		ret = phy_ethtool_set_wol(dev->phydev, wol);
-		if (ret != -EOPNOTSUPP)
+		if (ret != -EOPNOTSUPP && wol->wolopts)
 			return ret;
 	}

@@ -55,6 +55,7 @@
 #include <linux/hwmon.h>
 #include <linux/hwmon-sysfs.h>
 #include <linux/crc32poly.h>
+#include <linux/dmi.h>
 
 #include <net/checksum.h>
 #include <net/gso.h>
@@ -18212,6 +18213,50 @@ unlock:
 static SIMPLE_DEV_PM_OPS(tg3_pm_ops, tg3_suspend, tg3_resume);
 
+/* Systems where ACPI _PTS (Prepare To Sleep) S5 will result in a fatal
+ * PCIe AER event on the tg3 device if the tg3 device is not, or cannot
+ * be, powered down.
+ */
+static const struct dmi_system_id tg3_restart_aer_quirk_table[] = {
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R440"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R540"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R640"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R650"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R740"),
+		},
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R750"),
+		},
+	},
+	{}
+};
+
 static void tg3_shutdown(struct pci_dev *pdev)
 {
 	struct net_device *dev = pci_get_drvdata(pdev);
@@ -18228,6 +18273,19 @@ static void tg3_shutdown(struct pci_dev *pdev)
 	if (system_state == SYSTEM_POWER_OFF)
 		tg3_power_down(tp);
+	else if (system_state == SYSTEM_RESTART &&
+		 dmi_first_match(tg3_restart_aer_quirk_table) &&
+		 pdev->current_state != PCI_D3cold &&
+		 pdev->current_state != PCI_UNKNOWN) {
+		/* Disable PCIe AER on the tg3 to avoid a fatal
+		 * error during this system restart.
+		 */
+		pcie_capability_clear_word(pdev, PCI_EXP_DEVCTL,
+					   PCI_EXP_DEVCTL_CERE |
+					   PCI_EXP_DEVCTL_NFERE |
+					   PCI_EXP_DEVCTL_FERE |
+					   PCI_EXP_DEVCTL_URRE);
+	}
 
 	rtnl_unlock();

@@ -981,6 +981,9 @@ static int ice_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv,
 	/* preallocate memory for ice_sched_node */
 	node = devm_kzalloc(ice_hw_to_dev(pi->hw), sizeof(*node), GFP_KERNEL);
+	if (!node)
+		return -ENOMEM;
+
 	*priv = node;
 
 	return 0;

@@ -527,15 +527,14 @@ err:
  * @xdp: xdp_buff used as input to the XDP program
  * @xdp_prog: XDP program to run
  * @xdp_ring: ring to be used for XDP_TX action
- * @rx_buf: Rx buffer to store the XDP action
  * @eop_desc: Last descriptor in packet to read metadata from
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */
-static void
+static u32
 ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	    struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
-	    struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *eop_desc)
+	    union ice_32b_rx_flex_desc *eop_desc)
 {
 	unsigned int ret = ICE_XDP_PASS;
 	u32 act;
@@ -574,7 +573,7 @@ out_failure:
 		ret = ICE_XDP_CONSUMED;
 	}
 exit:
-	ice_set_rx_bufs_act(xdp, rx_ring, ret);
+	return ret;
 }
 
 /**
@@ -860,10 +859,8 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		xdp_buff_set_frags_flag(xdp);
 	}
 
-	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
-		ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
+	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS))
 		return -ENOMEM;
-	}
 
 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
 				   rx_buf->page_offset, size);
@@ -924,7 +921,6 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 	struct ice_rx_buf *rx_buf;
 
 	rx_buf = &rx_ring->rx_buf[ntc];
-	rx_buf->pgcnt = page_count(rx_buf->page);
 	prefetchw(rx_buf->page);
 
 	if (!size)
@@ -940,6 +936,31 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 	return rx_buf;
 }
 
+/**
+ * ice_get_pgcnts - grab page_count() for gathered fragments
+ * @rx_ring: Rx descriptor ring to store the page counts on
+ *
+ * This function is intended to be called right before running XDP
+ * program so that the page recycling mechanism will be able to take
+ * a correct decision regarding underlying pages; this is done in such
+ * way as XDP program can change the refcount of page
+ */
+static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
+{
+	u32 nr_frags = rx_ring->nr_frags + 1;
+	u32 idx = rx_ring->first_desc;
+	struct ice_rx_buf *rx_buf;
+	u32 cnt = rx_ring->count;
+
+	for (int i = 0; i < nr_frags; i++) {
+		rx_buf = &rx_ring->rx_buf[idx];
+		rx_buf->pgcnt = page_count(rx_buf->page);
+
+		if (++idx == cnt)
+			idx = 0;
+	}
+}
+
 /**
  * ice_build_skb - Build skb around an existing buffer
  * @rx_ring: Rx descriptor ring to transact packets on
@@ -1051,12 +1072,12 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
 				rx_buf->page_offset + headlen, size,
 				xdp->frame_sz);
 	} else {
-		/* buffer is unused, change the act that should be taken later
-		 * on; data was copied onto skb's linear part so there's no
+		/* buffer is unused, restore biased page count in Rx buffer;
+		 * data was copied onto skb's linear part so there's no
 		 * need for adjusting page offset and we can reuse this buffer
 		 * as-is
 		 */
-		rx_buf->act = ICE_SKB_CONSUMED;
+		rx_buf->pagecnt_bias++;
 	}
 
 	if (unlikely(xdp_buff_has_frags(xdp))) {
@@ -1103,6 +1124,65 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
 	rx_buf->page = NULL;
 }
 
+/**
+ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
+ * @rx_ring: Rx ring with all the auxiliary data
+ * @xdp: XDP buffer carrying linear + frags part
+ * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
+ * @ntc: a current next_to_clean value to be stored at rx_ring
+ * @verdict: return code from XDP program execution
+ *
+ * Walk through gathered fragments and satisfy internal page
+ * recycle mechanism; we take here an action related to verdict
+ * returned by XDP program;
+ */
+static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+			    u32 *xdp_xmit, u32 ntc, u32 verdict)
+{
+	u32 nr_frags = rx_ring->nr_frags + 1;
+	u32 idx = rx_ring->first_desc;
+	u32 cnt = rx_ring->count;
+	u32 post_xdp_frags = 1;
+	struct ice_rx_buf *buf;
+	int i;
+
+	if (unlikely(xdp_buff_has_frags(xdp)))
+		post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
+
+	for (i = 0; i < post_xdp_frags; i++) {
+		buf = &rx_ring->rx_buf[idx];
+
+		if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+			*xdp_xmit |= verdict;
+		} else if (verdict & ICE_XDP_CONSUMED) {
+			buf->pagecnt_bias++;
+		} else if (verdict == ICE_XDP_PASS) {
+			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+		}
+
+		ice_put_rx_buf(rx_ring, buf);
+
+		if (++idx == cnt)
+			idx = 0;
+	}
+	/* handle buffers that represented frags released by XDP prog;
+	 * for these we keep pagecnt_bias as-is; refcount from struct page
+	 * has been decremented within XDP prog and we do not have to increase
+	 * the biased refcnt
+	 */
+	for (; i < nr_frags; i++) {
+		buf = &rx_ring->rx_buf[idx];
+		ice_put_rx_buf(rx_ring, buf);
+		if (++idx == cnt)
+			idx = 0;
+	}
+
+	xdp->data = NULL;
+	rx_ring->first_desc = ntc;
+	rx_ring->nr_frags = 0;
+}
+
 /**
  * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
  * @rx_ring: Rx descriptor ring to transact packets on
@@ -1120,15 +1200,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
 	unsigned int offset = rx_ring->rx_offset;
 	struct xdp_buff *xdp = &rx_ring->xdp;
-	u32 cached_ntc = rx_ring->first_desc;
 	struct ice_tx_ring *xdp_ring = NULL;
 	struct bpf_prog *xdp_prog = NULL;
 	u32 ntc = rx_ring->next_to_clean;
+	u32 cached_ntu, xdp_verdict;
 	u32 cnt = rx_ring->count;
 	u32 xdp_xmit = 0;
-	u32 cached_ntu;
 	bool failure;
-	u32 first;
 
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
 	if (xdp_prog) {
@@ -1190,6 +1268,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
 			xdp_buff_clear_frags_flag(xdp);
 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
+			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
 			break;
 		}
 		if (++ntc == cnt)
@@ -1199,15 +1278,15 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
-		ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);
-		if (rx_buf->act == ICE_XDP_PASS)
+		ice_get_pgcnts(rx_ring);
+		xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
+		if (xdp_verdict == ICE_XDP_PASS)
 			goto construct_skb;
 		total_rx_bytes += xdp_get_buff_len(xdp);
 		total_rx_pkts++;
 
-		xdp->data = NULL;
-		rx_ring->first_desc = ntc;
-		rx_ring->nr_frags = 0;
+		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+
 		continue;
 construct_skb:
 		if (likely(ice_ring_uses_build_skb(rx_ring)))
@@ -1217,18 +1296,12 @@ construct_skb:
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			rx_ring->ring_stats->rx_stats.alloc_page_failed++;
-			rx_buf->act = ICE_XDP_CONSUMED;
-			if (unlikely(xdp_buff_has_frags(xdp)))
-				ice_set_rx_bufs_act(xdp, rx_ring,
-						    ICE_XDP_CONSUMED);
-			xdp->data = NULL;
-			rx_ring->first_desc = ntc;
-			rx_ring->nr_frags = 0;
-			break;
+			xdp_verdict = ICE_XDP_CONSUMED;
 		}
-		xdp->data = NULL;
-		rx_ring->first_desc = ntc;
-		rx_ring->nr_frags = 0;
+		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+
+		if (!skb)
+			break;
 
 		stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
 		if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
@@ -1257,23 +1330,6 @@ construct_skb:
 		total_rx_pkts++;
 	}
 
-	first = rx_ring->first_desc;
-	while (cached_ntc != first) {
-		struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc];
-
-		if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) {
-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-			xdp_xmit |= buf->act;
-		} else if (buf->act & ICE_XDP_CONSUMED) {
-			buf->pagecnt_bias++;
-		} else if (buf->act == ICE_XDP_PASS) {
-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-		}
-
-		ice_put_rx_buf(rx_ring, buf);
-		if (++cached_ntc >= cnt)
-			cached_ntc = 0;
-	}
 	rx_ring->next_to_clean = ntc;
 	/* return up to cleaned_count buffers to hardware */
 	failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring));

@@ -201,7 +201,6 @@ struct ice_rx_buf {
 	struct page *page;
 	unsigned int page_offset;
 	unsigned int pgcnt;
-	unsigned int act;
 	unsigned int pagecnt_bias;
 };

@@ -5,49 +5,6 @@
 #define _ICE_TXRX_LIB_H_
 #include "ice.h"
 
-/**
- * ice_set_rx_bufs_act - propagate Rx buffer action to frags
- * @xdp: XDP buffer representing frame (linear and frags part)
- * @rx_ring: Rx ring struct
- * act: action to store onto Rx buffers related to XDP buffer parts
- *
- * Set action that should be taken before putting Rx buffer from first frag
- * to the last.
- */
-static inline void
-ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
-		    const unsigned int act)
-{
-	u32 sinfo_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
-	u32 nr_frags = rx_ring->nr_frags + 1;
-	u32 idx = rx_ring->first_desc;
-	u32 cnt = rx_ring->count;
-	struct ice_rx_buf *buf;
-
-	for (int i = 0; i < nr_frags; i++) {
-		buf = &rx_ring->rx_buf[idx];
-		buf->act = act;
-
-		if (++idx == cnt)
-			idx = 0;
-	}
-
-	/* adjust pagecnt_bias on frags freed by XDP prog */
-	if (sinfo_frags < rx_ring->nr_frags && act == ICE_XDP_CONSUMED) {
-		u32 delta = rx_ring->nr_frags - sinfo_frags;
-
-		while (delta) {
-			if (idx == 0)
-				idx = cnt - 1;
-			else
-				idx--;
-			buf = &rx_ring->rx_buf[idx];
-			buf->pagecnt_bias--;
-			delta--;
-		}
-	}
-}
-
 /**
  * ice_test_staterr - tests bits in Rx descriptor status and error fields
  * @status_err_n: Rx descriptor status_error0 or status_error1 bits

@@ -2424,6 +2424,11 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv)
 	u32 chan = 0;
 	u8 qmode = 0;
 
+	if (rxfifosz == 0)
+		rxfifosz = priv->dma_cap.rx_fifo_size;
+	if (txfifosz == 0)
+		txfifosz = priv->dma_cap.tx_fifo_size;
+
 	/* Split up the shared Tx/Rx FIFO memory on DW QoS Eth and DW XGMAC */
 	if (priv->plat->has_gmac4 || priv->plat->has_xgmac) {
 		rxfifosz /= rx_channels_count;
@@ -2892,6 +2897,11 @@ static void stmmac_set_dma_operation_mode(struct stmmac_priv *priv, u32 txmode,
 	int rxfifosz = priv->plat->rx_fifo_size;
 	int txfifosz = priv->plat->tx_fifo_size;
 
+	if (rxfifosz == 0)
+		rxfifosz = priv->dma_cap.rx_fifo_size;
+	if (txfifosz == 0)
+		txfifosz = priv->dma_cap.tx_fifo_size;
+
 	/* Adjust for real per queue fifo size */
 	rxfifosz /= rx_channels_count;
 	txfifosz /= tx_channels_count;
@@ -5868,6 +5878,9 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
 	const int mtu = new_mtu;
 	int ret;
 
+	if (txfifosz == 0)
+		txfifosz = priv->dma_cap.tx_fifo_size;
+
 	txfifosz /= priv->plat->tx_queues_to_use;
 
 	if (stmmac_xdp_is_enabled(priv) && new_mtu > ETH_DATA_LEN) {
@@ -7219,29 +7232,15 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
 		priv->plat->tx_queues_to_use = priv->dma_cap.number_tx_queues;
 	}
 
-	if (!priv->plat->rx_fifo_size) {
-		if (priv->dma_cap.rx_fifo_size) {
-			priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
-		} else {
-			dev_err(priv->device, "Can't specify Rx FIFO size\n");
-			return -ENODEV;
-		}
-	} else if (priv->dma_cap.rx_fifo_size &&
-		   priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
+	if (priv->dma_cap.rx_fifo_size &&
+	    priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
 		dev_warn(priv->device,
 			 "Rx FIFO size (%u) exceeds dma capability\n",
 			 priv->plat->rx_fifo_size);
 		priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
 	}
-	if (!priv->plat->tx_fifo_size) {
-		if (priv->dma_cap.tx_fifo_size) {
-			priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size;
-		} else {
-			dev_err(priv->device, "Can't specify Tx FIFO size\n");
-			return -ENODEV;
-		}
-	} else if (priv->dma_cap.tx_fifo_size &&
-		   priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
+	if (priv->dma_cap.tx_fifo_size &&
+	    priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
 		dev_warn(priv->device,
 			 "Tx FIFO size (%u) exceeds dma capability\n",
 			 priv->plat->tx_fifo_size);

@@ -574,18 +574,14 @@ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
 	return ret;
 }
 
-static inline bool tun_capable(struct tun_struct *tun)
+static inline bool tun_not_capable(struct tun_struct *tun)
 {
 	const struct cred *cred = current_cred();
 	struct net *net = dev_net(tun->dev);
 
-	if (ns_capable(net->user_ns, CAP_NET_ADMIN))
-		return 1;
-	if (uid_valid(tun->owner) && uid_eq(cred->euid, tun->owner))
-		return 1;
-	if (gid_valid(tun->group) && in_egroup_p(tun->group))
-		return 1;
-	return 0;
+	return ((uid_valid(tun->owner) && !uid_eq(cred->euid, tun->owner)) ||
+		(gid_valid(tun->group) && !in_egroup_p(tun->group))) &&
+		!ns_capable(net->user_ns, CAP_NET_ADMIN);
 }
 
 static void tun_set_real_num_queues(struct tun_struct *tun)
@@ -2782,7 +2778,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 		    !!(tun->flags & IFF_MULTI_QUEUE))
 			return -EINVAL;
 
-		if (!tun_capable(tun))
+		if (tun_not_capable(tun))
 			return -EPERM;
 		err = security_tun_dev_open(tun->security);
 		if (err < 0)

@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
 	if (likely(cpu < tq_number))
 		tq = &adapter->tx_queue[cpu];
 	else
-		tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
+		tq = &adapter->tx_queue[cpu % tq_number];
 
 	return tq;
 }
@@ -124,6 +124,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 	u32 buf_size;
 	u32 dw2;
 
+	spin_lock_irq(&tq->tx_lock);
 	dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
 	dw2 |= xdpf->len;
 	ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
@@ -134,6 +135,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 	if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
 		tq->stats.tx_ring_full++;
+		spin_unlock_irq(&tq->tx_lock);
 		return -ENOSPC;
 	}
@@ -142,8 +144,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 		tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
 					       xdpf->data, buf_size,
 					       DMA_TO_DEVICE);
-		if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
+		if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
+			spin_unlock_irq(&tq->tx_lock);
 			return -EFAULT;
+		}
 		tbi->map_type |= VMXNET3_MAP_SINGLE;
 	} else { /* XDP buffer from page pool */
 		page = virt_to_page(xdpf->data);
@@ -182,6 +186,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
 	dma_wmb();
 	gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
 						  VMXNET3_TXD_GEN);
+	spin_unlock_irq(&tq->tx_lock);
 
 	/* No need to handle the case when tx_num_deferred doesn't reach
 	 * threshold. Backend driver at hypervisor side will poll and reset
@@ -225,6 +230,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
 {
 	struct vmxnet3_adapter *adapter = netdev_priv(dev);
 	struct vmxnet3_tx_queue *tq;
+	struct netdev_queue *nq;
 	int i;
 
 	if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))
@@ -236,6 +242,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
 	if (tq->stopped)
 		return -ENETDOWN;
 
+	nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
+
+	__netif_tx_lock(nq, smp_processor_id());
 	for (i = 0; i < n; i++) {
 		if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
 			tq->stats.xdp_xmit_err++;
@@ -243,6 +252,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
 		}
 	}
 	tq->stats.xdp_xmit += i;
+	__netif_tx_unlock(nq);
 	return i;
 }
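Side note on the first hunk above: a rough standalone sketch (the kernel helper re-implemented locally, values illustrative, not part of this series) of how reciprocal_scale() and a plain modulo map an out-of-range CPU number differently, which is why switching the XDP queue lookup to cpu % tq_number changes which tx queue is picked once cpu >= tq_number.

#include <stdint.h>
#include <stdio.h>

/* Local copy of the kernel's reciprocal_scale(): maps a u32 into [0, ep_ro). */
static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
{
	return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
}

int main(void)
{
	uint32_t tq_number = 8;	/* illustrative number of tx queues */

	for (uint32_t cpu = 8; cpu < 12; cpu++)
		printf("cpu %u -> reciprocal_scale %u, modulo %u\n",
		       cpu, reciprocal_scale(cpu, tq_number), cpu % tq_number);
	/* reciprocal_scale() of a small cpu number is always 0 here,
	 * while cpu % tq_number spreads the overflowing CPUs across queues. */
	return 0;
}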

@@ -2904,9 +2904,9 @@ struct pcpu_sw_netstats {
 struct pcpu_dstats {
 	u64_stats_t rx_packets;
 	u64_stats_t rx_bytes;
-	u64_stats_t rx_drops;
 	u64_stats_t tx_packets;
 	u64_stats_t tx_bytes;
+	u64_stats_t rx_drops;
 	u64_stats_t tx_drops;
 	struct u64_stats_sync syncp;
 } __aligned(8 * sizeof(u64));

@@ -851,7 +851,7 @@ static inline int qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 }
 
 static inline void _bstats_update(struct gnet_stats_basic_sync *bstats,
-				  __u64 bytes, __u32 packets)
+				  __u64 bytes, __u64 packets)
 {
 	u64_stats_update_begin(&bstats->syncp);
 	u64_stats_add(&bstats->bytes, bytes);
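For context on why widening the packets parameter matters ("net: sched: Fix truncation of offloaded action statistics"), here is a small standalone userspace sketch, not kernel code, with purely illustrative helper names, showing how a 64-bit hardware packet counter is silently truncated when passed through a 32-bit parameter.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the old (__u32) and new (__u64) prototypes. */
static uint64_t add_u32(uint64_t total, uint32_t packets) { return total + packets; }
static uint64_t add_u64(uint64_t total, uint64_t packets) { return total + packets; }

int main(void)
{
	/* An offloaded action can easily accumulate more than 2^32 packets. */
	uint64_t hw_packets = 5000000000ULL;

	printf("u32 parameter: %llu\n",
	       (unsigned long long)add_u32(0, hw_packets)); /* truncated to 705032704 */
	printf("u64 parameter: %llu\n",
	       (unsigned long long)add_u64(0, hw_packets)); /* full 5000000000 */
	return 0;
}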

@@ -219,6 +219,7 @@
 	EM(rxrpc_conn_get_conn_input, "GET inp-conn") \
 	EM(rxrpc_conn_get_idle, "GET idle ") \
 	EM(rxrpc_conn_get_poke_abort, "GET pk-abort") \
+	EM(rxrpc_conn_get_poke_secured, "GET secured ") \
 	EM(rxrpc_conn_get_poke_timer, "GET poke ") \
 	EM(rxrpc_conn_get_service_conn, "GET svc-conn") \
 	EM(rxrpc_conn_new_client, "NEW client ") \

@@ -11286,6 +11286,20 @@ struct rtnl_link_stats64 *dev_get_stats(struct net_device *dev,
 	const struct net_device_ops *ops = dev->netdev_ops;
 	const struct net_device_core_stats __percpu *p;
 
+	/*
+	 * IPv{4,6} and udp tunnels share common stat helpers and use
+	 * different stat type (NETDEV_PCPU_STAT_TSTATS vs
+	 * NETDEV_PCPU_STAT_DSTATS). Ensure the accounting is consistent.
+	 */
+	BUILD_BUG_ON(offsetof(struct pcpu_sw_netstats, rx_bytes) !=
+		     offsetof(struct pcpu_dstats, rx_bytes));
+	BUILD_BUG_ON(offsetof(struct pcpu_sw_netstats, rx_packets) !=
+		     offsetof(struct pcpu_dstats, rx_packets));
+	BUILD_BUG_ON(offsetof(struct pcpu_sw_netstats, tx_bytes) !=
+		     offsetof(struct pcpu_dstats, tx_bytes));
+	BUILD_BUG_ON(offsetof(struct pcpu_sw_netstats, tx_packets) !=
+		     offsetof(struct pcpu_dstats, tx_packets));
+
 	if (ops->ndo_get_stats64) {
 		memset(storage, 0, sizeof(*storage));
 		ops->ndo_get_stats64(dev, storage);
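As a standalone illustration of the offsetof-based layout check used above (struct names here are made up, not the kernel's pcpu_sw_netstats/pcpu_dstats), the same idea in plain C11: the build fails if the shared leading fields of two structures ever drift apart.

#include <assert.h>   /* static_assert (C11) */
#include <stddef.h>   /* offsetof */
#include <stdint.h>

/* Hypothetical stand-ins for two per-cpu stats layouts that must line up. */
struct stats_a { uint64_t rx_packets, rx_bytes, tx_packets, tx_bytes; };
struct stats_b { uint64_t rx_packets, rx_bytes, tx_packets, tx_bytes, rx_drops, tx_drops; };

/* Compile-time guarantee, analogous to the BUILD_BUG_ON() lines above. */
static_assert(offsetof(struct stats_a, rx_bytes) == offsetof(struct stats_b, rx_bytes),
	      "rx_bytes offsets must match");
static_assert(offsetof(struct stats_a, tx_packets) == offsetof(struct stats_b, tx_packets),
	      "tx_packets offsets must match");

int main(void) { return 0; }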

@@ -993,7 +993,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
 		return rc;
 
 	/* Nonzero ring with RSS only makes sense if NIC adds them together */
-	if (cmd == ETHTOOL_SRXCLSRLINS && info.flow_type & FLOW_RSS &&
+	if (cmd == ETHTOOL_SRXCLSRLINS && info.fs.flow_type & FLOW_RSS &&
 	    !ops->cap_rss_rxnfc_adds &&
 	    ethtool_get_flow_spec_ring(info.fs.ring_cookie))
 		return -EINVAL;

@@ -107,6 +107,8 @@ rss_prepare_ctx(const struct rss_req_info *request, struct net_device *dev,
 	u32 total_size, indir_bytes;
 	u8 *rss_config;
 
+	data->no_key_fields = !dev->ethtool_ops->rxfh_per_ctx_key;
+
 	ctx = xa_load(&dev->ethtool->rss_ctx, request->rss_context);
 	if (!ctx)
 		return -ENOENT;
@@ -153,7 +155,6 @@ rss_prepare_data(const struct ethnl_req_info *req_base,
 	if (!ops->cap_rss_ctx_supported && !ops->create_rxfh_context)
 		return -EOPNOTSUPP;
 
-	data->no_key_fields = !ops->rxfh_per_ctx_key;
 	return rss_prepare_ctx(request, dev, data, info);
 }

@@ -1141,9 +1141,9 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
 		const int hlen = skb_network_header_len(skb) +
 				 sizeof(struct udphdr);
 
-		if (hlen + cork->gso_size > cork->fragsize) {
+		if (hlen + min(datalen, cork->gso_size) > cork->fragsize) {
 			kfree_skb(skb);
-			return -EINVAL;
+			return -EMSGSIZE;
 		}
 		if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
 			kfree_skb(skb);

@@ -336,7 +336,7 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
 
 static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
-	struct dst_entry *dst = skb_dst(skb), *cache_dst;
+	struct dst_entry *dst = skb_dst(skb), *cache_dst = NULL;
 	struct in6_addr orig_daddr;
 	struct ioam6_lwt *ilwt;
 	int err = -EINVAL;
@@ -407,13 +407,15 @@ do_encap:
 		cache_dst = ip6_route_output(net, NULL, &fl6);
 		if (cache_dst->error) {
 			err = cache_dst->error;
-			dst_release(cache_dst);
 			goto drop;
 		}
 
-		local_bh_disable();
-		dst_cache_set_ip6(&ilwt->cache, cache_dst, &fl6.saddr);
-		local_bh_enable();
+		/* cache only if we don't create a dst reference loop */
+		if (dst->lwtstate != cache_dst->lwtstate) {
+			local_bh_disable();
+			dst_cache_set_ip6(&ilwt->cache, cache_dst, &fl6.saddr);
+			local_bh_enable();
+		}
 
 		err = skb_cow_head(skb, LL_RESERVED_SPACE(cache_dst->dev));
 		if (unlikely(err))
@@ -426,8 +428,10 @@ do_encap:
 		return dst_output(net, sk, skb);
 	}
 out:
+	dst_release(cache_dst);
 	return dst->lwtstate->orig_output(net, sk, skb);
 drop:
+	dst_release(cache_dst);
 	kfree_skb(skb);
 	return err;
 }

@@ -232,13 +232,15 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 		dst = ip6_route_output(net, NULL, &fl6);
 		if (dst->error) {
 			err = dst->error;
-			dst_release(dst);
 			goto drop;
 		}
 
-		local_bh_disable();
-		dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
-		local_bh_enable();
+		/* cache only if we don't create a dst reference loop */
+		if (orig_dst->lwtstate != dst->lwtstate) {
+			local_bh_disable();
+			dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
+			local_bh_enable();
+		}
 
 	err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
 	if (unlikely(err))
@@ -251,6 +253,7 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 	return dst_output(net, sk, skb);
 
 drop:
+	dst_release(dst);
 	kfree_skb(skb);
 	return err;
 }
@@ -269,8 +272,10 @@ static int rpl_input(struct sk_buff *skb)
 	local_bh_enable();
 
 	err = rpl_do_srh(skb, rlwt, dst);
-	if (unlikely(err))
+	if (unlikely(err)) {
+		dst_release(dst);
 		goto drop;
+	}
 
 	if (!dst) {
 		ip6_route_input(skb);

@@ -482,8 +482,10 @@ static int seg6_input_core(struct net *net, struct sock *sk,
 	local_bh_enable();
 
 	err = seg6_do_srh(skb, dst);
-	if (unlikely(err))
+	if (unlikely(err)) {
+		dst_release(dst);
 		goto drop;
+	}
 
 	if (!dst) {
 		ip6_route_input(skb);
@@ -571,13 +573,15 @@ static int seg6_output_core(struct net *net, struct sock *sk,
 		dst = ip6_route_output(net, NULL, &fl6);
 		if (dst->error) {
 			err = dst->error;
-			dst_release(dst);
 			goto drop;
 		}
 
-		local_bh_disable();
-		dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
-		local_bh_enable();
+		/* cache only if we don't create a dst reference loop */
+		if (orig_dst->lwtstate != dst->lwtstate) {
+			local_bh_disable();
+			dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
+			local_bh_enable();
+		}
 
 	err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
 	if (unlikely(err))
@@ -593,6 +597,7 @@ static int seg6_output_core(struct net *net, struct sock *sk,
 	return dst_output(net, sk, skb);
 drop:
+	dst_release(dst);
 	kfree_skb(skb);
 	return err;
 }

@@ -1389,9 +1389,9 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
 		const int hlen = skb_network_header_len(skb) +
 				 sizeof(struct udphdr);
 
-		if (hlen + cork->gso_size > cork->fragsize) {
+		if (hlen + min(datalen, cork->gso_size) > cork->fragsize) {
 			kfree_skb(skb);
-			return -EINVAL;
+			return -EMSGSIZE;
 		}
 		if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
 			kfree_skb(skb);

@@ -701,11 +701,9 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 	struct net_device *dev;
 	ax25_address *source;
 	ax25_uid_assoc *user;
+	int err = -EINVAL;
 	int n;
 
-	if (!sock_flag(sk, SOCK_ZAPPED))
-		return -EINVAL;
-
 	if (addr_len != sizeof(struct sockaddr_rose) && addr_len != sizeof(struct full_sockaddr_rose))
 		return -EINVAL;
@@ -718,8 +716,15 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 	if ((unsigned int) addr->srose_ndigis > ROSE_MAX_DIGIS)
 		return -EINVAL;
 
-	if ((dev = rose_dev_get(&addr->srose_addr)) == NULL)
-		return -EADDRNOTAVAIL;
+	lock_sock(sk);
+
+	if (!sock_flag(sk, SOCK_ZAPPED))
+		goto out_release;
+
+	err = -EADDRNOTAVAIL;
+	dev = rose_dev_get(&addr->srose_addr);
+	if (!dev)
+		goto out_release;
 
 	source = &addr->srose_call;
@@ -730,7 +735,8 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 	} else {
 		if (ax25_uid_policy && !capable(CAP_NET_BIND_SERVICE)) {
 			dev_put(dev);
-			return -EACCES;
+			err = -EACCES;
+			goto out_release;
 		}
 		rose->source_call = *source;
 	}
@@ -753,8 +759,10 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 	rose_insert_socket(sk);
 
 	sock_reset_flag(sk, SOCK_ZAPPED);
+	err = 0;
 
-	return 0;
+out_release:
+	release_sock(sk);
+	return err;
 }
 
 static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags)

@@ -582,6 +582,7 @@ enum rxrpc_call_flag {
 	RXRPC_CALL_EXCLUSIVE, /* The call uses a once-only connection */
 	RXRPC_CALL_RX_IS_IDLE, /* recvmsg() is idle - send an ACK */
 	RXRPC_CALL_RECVMSG_READ_ALL, /* recvmsg() read all of the received data */
+	RXRPC_CALL_CONN_CHALLENGING, /* The connection is being challenged */
 };
 
 /*
@@ -602,7 +603,6 @@ enum rxrpc_call_state {
 	RXRPC_CALL_CLIENT_AWAIT_REPLY, /* - client awaiting reply */
 	RXRPC_CALL_CLIENT_RECV_REPLY, /* - client receiving reply phase */
 	RXRPC_CALL_SERVER_PREALLOC, /* - service preallocation */
-	RXRPC_CALL_SERVER_SECURING, /* - server securing request connection */
 	RXRPC_CALL_SERVER_RECV_REQUEST, /* - server receiving request */
 	RXRPC_CALL_SERVER_ACK_REQUEST, /* - server pending ACK of request */
 	RXRPC_CALL_SERVER_SEND_REPLY, /* - server sending reply */

@@ -22,7 +22,6 @@ const char *const rxrpc_call_states[NR__RXRPC_CALL_STATES] = {
 	[RXRPC_CALL_CLIENT_AWAIT_REPLY] = "ClAwtRpl",
 	[RXRPC_CALL_CLIENT_RECV_REPLY] = "ClRcvRpl",
 	[RXRPC_CALL_SERVER_PREALLOC] = "SvPrealc",
-	[RXRPC_CALL_SERVER_SECURING] = "SvSecure",
 	[RXRPC_CALL_SERVER_RECV_REQUEST] = "SvRcvReq",
 	[RXRPC_CALL_SERVER_ACK_REQUEST] = "SvAckReq",
 	[RXRPC_CALL_SERVER_SEND_REPLY] = "SvSndRpl",
@@ -453,17 +452,16 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
 	call->cong_tstamp = skb->tstamp;
 
 	__set_bit(RXRPC_CALL_EXPOSED, &call->flags);
-	rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING);
+	rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
 
 	spin_lock(&conn->state_lock);
 
 	switch (conn->state) {
 	case RXRPC_CONN_SERVICE_UNSECURED:
 	case RXRPC_CONN_SERVICE_CHALLENGING:
-		rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING);
+		__set_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags);
 		break;
 	case RXRPC_CONN_SERVICE:
-		rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
 		break;
 	case RXRPC_CONN_ABORTED:

@@ -228,10 +228,8 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn)
  */
 static void rxrpc_call_is_secure(struct rxrpc_call *call)
 {
-	if (call && __rxrpc_call_state(call) == RXRPC_CALL_SERVER_SECURING) {
-		rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
+	if (call && __test_and_clear_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags))
 		rxrpc_notify_socket(call);
-	}
 }
 
 /*
@@ -272,6 +270,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
 		 * we've already received the packet, put it on the
 		 * front of the queue.
 		 */
+		sp->conn = rxrpc_get_connection(conn, rxrpc_conn_get_poke_secured);
 		skb->mark = RXRPC_SKB_MARK_SERVICE_CONN_SECURED;
 		rxrpc_get_skb(skb, rxrpc_skb_get_conn_secured);
 		skb_queue_head(&conn->local->rx_queue, skb);
@@ -437,14 +436,16 @@ void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb)
 	if (test_and_clear_bit(RXRPC_CONN_EV_ABORT_CALLS, &conn->events))
 		rxrpc_abort_calls(conn);
 
-	switch (skb->mark) {
-	case RXRPC_SKB_MARK_SERVICE_CONN_SECURED:
-		if (conn->state != RXRPC_CONN_SERVICE)
-			break;
+	if (skb) {
+		switch (skb->mark) {
+		case RXRPC_SKB_MARK_SERVICE_CONN_SECURED:
+			if (conn->state != RXRPC_CONN_SERVICE)
+				break;
 
-		for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
-			rxrpc_call_is_secure(conn->channels[loop].call);
-		break;
+			for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
+				rxrpc_call_is_secure(conn->channels[loop].call);
+			break;
+		}
 	}
 
 	/* Process delayed ACKs whose time has come. */

@@ -67,6 +67,7 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet,
 		INIT_WORK(&conn->destructor, rxrpc_clean_up_connection);
 		INIT_LIST_HEAD(&conn->proc_link);
 		INIT_LIST_HEAD(&conn->link);
+		INIT_LIST_HEAD(&conn->attend_link);
 		mutex_init(&conn->security_lock);
 		mutex_init(&conn->tx_data_alloc_lock);
 		skb_queue_head_init(&conn->rx_queue);

@@ -448,11 +448,19 @@ static void rxrpc_input_queue_data(struct rxrpc_call *call, struct sk_buff *skb,
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
 	bool last = sp->hdr.flags & RXRPC_LAST_PACKET;
 
-	skb_queue_tail(&call->recvmsg_queue, skb);
+	spin_lock_irq(&call->recvmsg_queue.lock);
+	__skb_queue_tail(&call->recvmsg_queue, skb);
 	rxrpc_input_update_ack_window(call, window, wtop);
 	trace_rxrpc_receive(call, last ? why + 1 : why, sp->hdr.serial, sp->hdr.seq);
 	if (last)
+		/* Change the state inside the lock so that recvmsg syncs
+		 * correctly with it and using sendmsg() to send a reply
+		 * doesn't race.
+		 */
 		rxrpc_end_rx_phase(call, sp->hdr.serial);
+
+	spin_unlock_irq(&call->recvmsg_queue.lock);
 }
 
 /*
@@ -657,7 +665,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb
 		rxrpc_propose_delay_ACK(call, sp->hdr.serial,
 					rxrpc_propose_ack_input_data);
 	}
-	if (notify) {
+	if (notify && !test_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags)) {
 		trace_rxrpc_notify_socket(call->debug_id, sp->hdr.serial);
 		rxrpc_notify_socket(call);
 	}

@@ -707,7 +707,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
 	} else {
 		switch (rxrpc_call_state(call)) {
 		case RXRPC_CALL_CLIENT_AWAIT_CONN:
-		case RXRPC_CALL_SERVER_SECURING:
+		case RXRPC_CALL_SERVER_RECV_REQUEST:
 			if (p.command == RXRPC_CMD_SEND_ABORT)
 				break;
 			fallthrough;

@@ -40,6 +40,9 @@ static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 {
 	unsigned int prev_backlog;
 
+	if (unlikely(READ_ONCE(sch->limit) == 0))
+		return qdisc_drop(skb, sch, to_free);
+
 	if (likely(sch->q.qlen < READ_ONCE(sch->limit)))
 		return qdisc_enqueue_tail(skb, sch);

@@ -749,9 +749,9 @@ deliver:
 			if (err != NET_XMIT_SUCCESS) {
 				if (net_xmit_drop_count(err))
 					qdisc_qstats_drop(sch);
-				qdisc_tree_reduce_backlog(sch, 1, pkt_len);
 				sch->qstats.backlog -= pkt_len;
 				sch->q.qlen--;
+				qdisc_tree_reduce_backlog(sch, 1, pkt_len);
 			}
 			goto tfifo_dequeue;
 		}

@@ -252,6 +252,7 @@ def test_rss_queue_reconfigure(cfg, main_ctx=True):
     try:
         # this targets queue 4, which doesn't exist
         ntuple2 = ethtool_create(cfg, "-N", flow)
+        defer(ethtool, f"-N {cfg.ifname} delete {ntuple2}")
     except CmdExitFailure:
         pass
     else:
@@ -259,7 +260,13 @@ def test_rss_queue_reconfigure(cfg, main_ctx=True):
     # change the table to target queues 0 and 2
     ethtool(f"-X {cfg.ifname} {ctx_ref} weight 1 0 1 0")
     # ntuple rule therefore targets queues 1 and 3
-    ntuple2 = ethtool_create(cfg, "-N", flow)
+    try:
+        ntuple2 = ethtool_create(cfg, "-N", flow)
+    except CmdExitFailure:
+        ksft_pr("Driver does not support rss + queue offset")
+        return
+    defer(ethtool, f"-N {cfg.ifname} delete {ntuple2}")
+
     # should replace existing filter
     ksft_eq(ntuple, ntuple2)
     _send_traffic_check(cfg, port, ctx_ref, { 'target': (1, 3),

@@ -1302,7 +1302,7 @@ again:
 		return ret;
 
 	if (cfg_truncate > 0) {
-		xdisconnect(fd);
+		shutdown(fd, SHUT_WR);
 	} else if (--cfg_repeat > 0) {
 		xdisconnect(fd);

@@ -102,6 +102,19 @@ struct testcase testcases_v4[] = {
 		.gso_len = CONST_MSS_V4,
 		.r_num_mss = 1,
 	},
+	{
+		/* datalen <= MSS < gso_len: will fall back to no GSO */
+		.tlen = CONST_MSS_V4,
+		.gso_len = CONST_MSS_V4 + 1,
+		.r_num_mss = 0,
+		.r_len_last = CONST_MSS_V4,
+	},
+	{
+		/* MSS < datalen < gso_len: fail */
+		.tlen = CONST_MSS_V4 + 1,
+		.gso_len = CONST_MSS_V4 + 2,
+		.tfail = true,
+	},
 	{
 		/* send a single MSS + 1B */
 		.tlen = CONST_MSS_V4 + 1,
@@ -205,6 +218,19 @@ struct testcase testcases_v6[] = {
 		.gso_len = CONST_MSS_V6,
 		.r_num_mss = 1,
 	},
+	{
+		/* datalen <= MSS < gso_len: will fall back to no GSO */
+		.tlen = CONST_MSS_V6,
+		.gso_len = CONST_MSS_V6 + 1,
+		.r_num_mss = 0,
+		.r_len_last = CONST_MSS_V6,
+	},
+	{
+		/* MSS < datalen < gso_len: fail */
+		.tlen = CONST_MSS_V6 + 1,
+		.gso_len = CONST_MSS_V6 + 2,
+		.tfail = true
+	},
 	{
 		/* send a single MSS + 1B */
 		.tlen = CONST_MSS_V6 + 1,
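The first new case in each table covers the scenario the UDP GSO fix targets: a payload no larger than the requested segment size now falls back to a plain datagram instead of being dropped. A rough standalone sender sketch of that scenario (illustrative addresses and sizes; UDP_SEGMENT value as defined in linux/udp.h) could look like this:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103	/* from linux/udp.h */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int gso_size = 1400;		/* requested GSO segment size */
	char payload[256] = { 0 };	/* smaller than gso_size */
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(9000),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};

	if (fd < 0)
		return 1;
	/* Ask for GSO segmentation of outgoing datagrams. */
	if (setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size)) < 0)
		perror("setsockopt(UDP_SEGMENT)");
	/* Per the testcase comments above, datalen <= gso_size is expected
	 * to be sent as a single datagram (no GSO) rather than rejected. */
	if (sendto(fd, payload, sizeof(payload), 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0)
		perror("sendto");
	close(fd);
	return 0;
}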

@@ -94,5 +94,37 @@
             "$TC qdisc del dev $DUMMY ingress",
             "$IP addr del 10.10.10.10/24 dev $DUMMY"
         ]
-    }
+    },
+    {
+        "id": "a4b9",
+        "name": "Test class qlen notification",
+        "category": [
+            "qdisc"
+        ],
+        "plugins": {
+            "requires": "nsPlugin"
+        },
+        "setup": [
+            "$IP link set dev $DUMMY up || true",
+            "$IP addr add 10.10.10.10/24 dev $DUMMY || true",
+            "$TC qdisc add dev $DUMMY root handle 1: drr",
+            "$TC filter add dev $DUMMY parent 1: basic classid 1:1",
+            "$TC class add dev $DUMMY parent 1: classid 1:1 drr",
+            "$TC qdisc add dev $DUMMY parent 1:1 handle 2: netem",
+            "$TC qdisc add dev $DUMMY parent 2: handle 3: drr",
+            "$TC filter add dev $DUMMY parent 3: basic action drop",
+            "$TC class add dev $DUMMY parent 3: classid 3:1 drr",
+            "$TC class del dev $DUMMY classid 1:1",
+            "$TC class add dev $DUMMY parent 1: classid 1:1 drr"
+        ],
+        "cmdUnderTest": "ping -c1 -W0.01 -I $DUMMY 10.10.10.1",
+        "expExitCode": "1",
+        "verifyCmd": "$TC qdisc ls dev $DUMMY",
+        "matchPattern": "drr 1: root",
+        "matchCount": "1",
+        "teardown": [
+            "$TC qdisc del dev $DUMMY root handle 1: drr",
+            "$IP addr del 10.10.10.10/24 dev $DUMMY"
+        ]
+    }
 ]

@@ -313,6 +313,29 @@
         "matchPattern": "qdisc bfifo 1: root",
         "matchCount": "0",
         "teardown": [
+        ]
+    },
+    {
+        "id": "d774",
+        "name": "Check pfifo_head_drop qdisc enqueue behaviour when limit == 0",
+        "category": [
+            "qdisc",
+            "pfifo_head_drop"
+        ],
+        "plugins": {
+            "requires": "nsPlugin"
+        },
+        "setup": [
+            "$IP addr add 10.10.10.10/24 dev $DUMMY || true",
+            "$TC qdisc add dev $DUMMY root handle 1: pfifo_head_drop limit 0",
+            "$IP link set dev $DUMMY up || true"
+        ],
+        "cmdUnderTest": "ping -c2 -W0.01 -I $DUMMY 10.10.10.1",
+        "expExitCode": "1",
+        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
+        "matchPattern": "dropped 2",
+        "matchCount": "1",
+        "teardown": [
         ]
     }
 ]