netfilter pull request 25-02-13

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEjF9xRqF1emXiQiqU1w0aZmrPKyEFAmetwrcACgkQ1w0aZmrP
 KyHrsA/+JYLZfG5Z1IMVs1MO0OyrhP/psLdAgwBGdyMpH1s95/d+fs1jej+A7zTh
 9JtQu8i2sUzPq19eHtjPvafMb53/GTUly2qIJannmga22JxrT2Xvw3xUFsd0wiTa
 e7g+mcRM3GIanXDN6U98FcC8w/aThsVy61QjpSGab4LYjKu4cTYpgO2iZqVjOSUT
 cyfYrn3bgFkPphLA8YrJ9govwU1H6AOJtzCigU8Q8jkAQ0u8VOsWRa7ac/UhAIUa
 viG3H7cv0iIzZ2NspokFU4LBMSKPHE9FAWHbw5cCukXSdCBoww14CbljFd3lOrrQ
 z3BG+hREDLrscxMmCuBxvXLz1nN/UUPMlfTwvuDg68BySixiFPn7pjqVQUi68ij0
 AS3y+tSAIDpibK4YcXUguvn49NcdvK0oEkrI3pAEwL6y8bHpoJfwNR73T/KeH8Vm
 XQr2m1ruPhyCIWkV8yKPyga+7tWjT+txgZQAP1hwZWo/P3rao6cYDKfkWZKpdJ01
 RKk4YepI7kDVqqqgRCpFfkcMvqthaRdBaTrQU3KnQBu/bkY3CxoQAtDJtBHdrcaw
 7XykaojoDAFpydTJg7eVKTm/x6k0syWKJA/TsyX5p5OmQ4EOGt6lmZbkljxDPl+q
 NiiXLnIdugnM4uL6lR6YGSsvzOui+LUT6KUPsbw1Eg7kKPK0StI=
 =ZZ4N
 -----END PGP SIGNATURE-----

Merge tag 'nf-25-02-13' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf

Pablo Neira Ayuso says:

====================
Netfilter fixes for net

The following batch contains one revert:

1) Revert the flowtable entry teardown cycle when an skbuff exceeds the
   MTU, to handle scenarios where the DF flag is unset. This reverts a
   patch that came in through the previous merge window (available in
   the 6.14-rc releases).

* tag 'nf-25-02-13' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf:
  Revert "netfilter: flowtable: teardown flow if cached mtu is stale"
====================

Link: https://patch.msgid.link/20250213100502.3983-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 458bf63d17
Committer: Jakub Kicinski <kuba@kernel.org>
Date: 2025-02-13 09:38:50 -08:00

@@ -381,10 +381,8 @@ static int nf_flow_offload_forward(struct nf_flowtable_ctx *ctx,
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 	mtu = flow->tuplehash[dir].tuple.mtu + ctx->offset;
-	if (unlikely(nf_flow_exceeds_mtu(skb, mtu))) {
-		flow_offload_teardown(flow);
+	if (unlikely(nf_flow_exceeds_mtu(skb, mtu)))
 		return 0;
-	}
 	iph = (struct iphdr *)(skb_network_header(skb) + ctx->offset);
 	thoff = (iph->ihl * 4) + ctx->offset;
@@ -662,10 +660,8 @@ static int nf_flow_offload_ipv6_forward(struct nf_flowtable_ctx *ctx,
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 	mtu = flow->tuplehash[dir].tuple.mtu + ctx->offset;
-	if (unlikely(nf_flow_exceeds_mtu(skb, mtu))) {
-		flow_offload_teardown(flow);
+	if (unlikely(nf_flow_exceeds_mtu(skb, mtu)))
 		return 0;
-	}
 	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + ctx->offset);
 	thoff = sizeof(*ip6h) + ctx->offset;
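
For illustration only, below is a minimal userspace sketch (not kernel code) of
the control flow the two hunks above restore, presumably in
net/netfilter/nf_flow_table_ip.c: when a packet exceeds the cached MTU, the
fast path simply returns 0 and falls back to the classic forwarding path, but
the flowtable entry is no longer torn down, which matters when the DF flag is
unset and the packet can still be fragmented on the slow path. The demo_* names
are hypothetical stand-ins for the kernel's flow_offload, nf_flow_exceeds_mtu()
and nf_flow_offload_forward().

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's struct flow_offload. */
struct demo_flow {
	unsigned int mtu;   /* cached path MTU for this direction */
	bool torn_down;     /* would the flowtable entry be destroyed? */
};

/* Crude stand-in for nf_flow_exceeds_mtu(); the real helper also
 * accepts GSO packets whose segments still fit within the MTU. */
static bool demo_exceeds_mtu(unsigned int skb_len, unsigned int mtu)
{
	return skb_len > mtu;
}

/* Post-revert behaviour: an oversized packet only falls back to the
 * classic forwarding path (return 0); the flowtable entry is kept, so
 * traffic that gets fragmented because DF is unset, or later packets
 * that fit, can still use the fast path. */
static int demo_forward(struct demo_flow *flow, unsigned int skb_len)
{
	if (demo_exceeds_mtu(skb_len, flow->mtu))
		return 0;   /* let the regular IP/netfilter stack handle it */

	return 1;           /* fast path would forward the packet */
}

/* Pre-revert behaviour (the code being reverted): the same condition
 * additionally tore the flowtable entry down. */
static int demo_forward_old(struct demo_flow *flow, unsigned int skb_len)
{
	if (demo_exceeds_mtu(skb_len, flow->mtu)) {
		flow->torn_down = true;   /* stand-in for flow_offload_teardown() */
		return 0;
	}

	return 1;
}

int main(void)
{
	struct demo_flow flow = { .mtu = 1500, .torn_down = false };

	/* A 1600-byte packet exceeds the 1500-byte cached MTU. */
	printf("post-revert: fast path = %d, entry torn down = %d\n",
	       demo_forward(&flow, 1600), flow.torn_down);
	printf("pre-revert:  fast path = %d, entry torn down = %d\n",
	       demo_forward_old(&flow, 1600), flow.torn_down);
	return 0;
}

Under this sketch, an oversized skb leaves the entry in place, so subsequent
packets that fit the cached MTU keep using the fast path instead of forcing a
new offload cycle for the flow.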