Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.13-rc7).

Conflicts:

net/sched/sch_cake.c
  a42d71e322 ("net_sched: sch_cake: Add drop reasons")
  737d4d91d3 ("sched: sch_cake: add bounds checks to host bulk flow fairness counts")

Adjacent changes:

drivers/net/ethernet/meta/fbnic/fbnic.h
  3a856ab347 ("eth: fbnic: add IRQ reuse support")
  95978931d5 ("eth: fbnic: Revert "eth: fbnic: Add hardware monitoring support via HWMON interface"")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2025-01-09 16:11:47 -08:00
commit 14ea4cd1b1
182 changed files with 1470 additions and 963 deletions


@ -435,7 +435,7 @@ Martin Kepplinger <martink@posteo.de> <martin.kepplinger@ginzinger.com>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> <martyna.szapar-mudlaw@intel.com>
Mathieu Othacehe <m.othacehe@gmail.com> <othacehe@gnu.org>
Mathieu Othacehe <othacehe@gnu.org> <m.othacehe@gmail.com>
Mat Martineau <martineau@kernel.org> <mathew.j.martineau@linux.intel.com>
Mat Martineau <martineau@kernel.org> <mathewm@codeaurora.org>
Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com>

CREDITS

@ -20,6 +20,10 @@ N: Thomas Abraham
E: thomas.ab@samsung.com
D: Samsung pin controller driver
N: Jose Abreu
E: jose.abreu@synopsys.com
D: Synopsys DesignWare XPCS MDIO/PCS driver.
N: Dragos Acostachioaie
E: dragos@iname.com
W: http://www.arbornet.org/~dragos
@ -1428,6 +1432,10 @@ S: 8124 Constitution Apt. 7
S: Sterling Heights, Michigan 48313
S: USA
N: Andy Gospodarek
E: andy@greyhouse.net
D: Maintenance and contributions to the network interface bonding driver.
N: Wolfgang Grandegger
E: wg@grandegger.com
D: Controller Area Network (device drivers)
@ -1812,6 +1820,10 @@ D: Author/maintainer of most DRM drivers (especially ATI, MGA)
D: Core DRM templates, general DRM and 3D-related hacking
S: No fixed address
N: Woojung Huh
E: woojung.huh@microchip.com
D: Microchip LAN78XX USB Ethernet driver
N: Kenn Humborg
E: kenn@wombat.ie
D: Mods to loop device to support sparse backing files


@ -436,7 +436,7 @@ AnonHugePmdMapped).
The number of file transparent huge pages mapped to userspace is available
by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
To identify what applications are mapping file transparent huge pages, it
is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields
is necessary to read ``/proc/PID/smaps`` and count the FilePmdMapped fields
for each mapping.
Note that reading the smaps file is expensive and reading it
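
For readers who want to script the count described above, a minimal user-space sketch is shown below (not part of this commit; the helper name and per-mapping accounting are assumptions):

#include <stdio.h>

/* Count mappings with a nonzero FilePmdMapped field in /proc/PID/smaps. */
static int count_file_pmd_mapped(const char *pid)
{
        char path[64], line[256];
        unsigned long kb;
        int count = 0;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%s/smaps", pid);
        f = fopen(path, "r");
        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "FilePmdMapped: %lu kB", &kb) == 1 && kb > 0)
                        count++;
        fclose(f);
        return count;
}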


@ -81,7 +81,7 @@ properties:
List of phandles, each pointing to the power supply for the
corresponding pairset named in 'pairset-names'. This property
aligns with IEEE 802.3-2022, Section 33.2.3 and 145.2.4.
PSE Pinout Alternatives (as per IEEE 802.3-2022 Table 145–3)
PSE Pinout Alternatives (as per IEEE 802.3-2022 Table 145-3)
|-----------|---------------|---------------|---------------|---------------|
| Conductor | Alternative A | Alternative A | Alternative B | Alternative B |
| | (MDI-X) | (MDI) | (X) | (S) |


@ -949,7 +949,6 @@ AMAZON ETHERNET DRIVERS
M: Shay Agroskin <shayagr@amazon.com>
M: Arthur Kiyanovski <akiyano@amazon.com>
R: David Arinzon <darinzon@amazon.com>
R: Noam Dagan <ndagan@amazon.com>
R: Saeed Bishara <saeedb@amazon.com>
L: netdev@vger.kernel.org
S: Supported
@ -2690,7 +2689,6 @@ N: at91
N: atmel
ARM/Microchip Sparx5 SoC support
M: Lars Povlsen <lars.povlsen@microchip.com>
M: Steen Hegelund <Steen.Hegelund@microchip.com>
M: Daniel Machon <daniel.machon@microchip.com>
M: UNGLinuxDriver@microchip.com
@ -4065,7 +4063,6 @@ F: net/bluetooth/
BONDING DRIVER
M: Jay Vosburgh <jv@jvosburgh.net>
M: Andy Gospodarek <andy@greyhouse.net>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/networking/bonding.rst
@ -14574,7 +14571,6 @@ F: drivers/dma/mediatek/
MEDIATEK ETHERNET DRIVER
M: Felix Fietkau <nbd@nbd.name>
M: Sean Wang <sean.wang@mediatek.com>
M: Mark Lee <Mark-MC.Lee@mediatek.com>
M: Lorenzo Bianconi <lorenzo@kernel.org>
L: netdev@vger.kernel.org
S: Maintained
@ -14764,7 +14760,7 @@ F: drivers/memory/mtk-smi.c
F: include/soc/mediatek/smi.h
MEDIATEK SWITCH DRIVER
M: Arınç ÜNAL <arinc.unal@arinc9.com>
M: Chester A. Unal <chester.a.unal@arinc9.com>
M: Daniel Golle <daniel@makrotopia.org>
M: DENG Qingfang <dqfext@gmail.com>
M: Sean Wang <sean.wang@mediatek.com>
@ -18469,7 +18465,7 @@ F: Documentation/devicetree/bindings/pinctrl/mediatek,mt8183-pinctrl.yaml
F: drivers/pinctrl/mediatek/
PIN CONTROLLER - MEDIATEK MIPS
M: Arınç ÜNAL <arinc.unal@arinc9.com>
M: Chester A. Unal <chester.a.unal@arinc9.com>
M: Sergio Paracuellos <sergio.paracuellos@gmail.com>
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
L: linux-mips@vger.kernel.org
@ -19513,7 +19509,7 @@ S: Maintained
F: arch/mips/ralink
RALINK MT7621 MIPS ARCHITECTURE
M: Arınç ÜNAL <arinc.unal@arinc9.com>
M: Chester A. Unal <chester.a.unal@arinc9.com>
M: Sergio Paracuellos <sergio.paracuellos@gmail.com>
L: linux-mips@vger.kernel.org
S: Maintained
@ -20916,6 +20912,8 @@ F: kernel/sched/
SCHEDULER - SCHED_EXT
R: Tejun Heo <tj@kernel.org>
R: David Vernet <void@manifault.com>
R: Andrea Righi <arighi@nvidia.com>
R: Changwoo Min <changwoo@igalia.com>
L: linux-kernel@vger.kernel.org
S: Maintained
W: https://github.com/sched-ext/scx
@ -22510,11 +22508,8 @@ F: Documentation/devicetree/bindings/phy/st,stm32mp25-combophy.yaml
F: drivers/phy/st/phy-stm32-combophy.c
STMMAC ETHERNET DRIVER
M: Alexandre Torgue <alexandre.torgue@foss.st.com>
M: Jose Abreu <joabreu@synopsys.com>
L: netdev@vger.kernel.org
S: Supported
W: http://www.stlinux.com
S: Orphan
F: Documentation/networking/device_drivers/ethernet/stmicro/
F: drivers/net/ethernet/stmicro/stmmac/
@ -22746,9 +22741,8 @@ S: Supported
F: drivers/net/ethernet/synopsys/
SYNOPSYS DESIGNWARE ETHERNET XPCS DRIVER
M: Jose Abreu <Jose.Abreu@synopsys.com>
L: netdev@vger.kernel.org
S: Supported
S: Orphan
F: drivers/net/pcs/pcs-xpcs.c
F: drivers/net/pcs/pcs-xpcs.h
F: include/linux/pcs/pcs-xpcs.h
@ -23656,7 +23650,6 @@ F: tools/testing/selftests/timers/
TIPC NETWORK LAYER
M: Jon Maloy <jmaloy@redhat.com>
M: Ying Xue <ying.xue@windriver.com>
L: netdev@vger.kernel.org (core kernel code)
L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)
S: Maintained
@ -24262,7 +24255,8 @@ F: Documentation/devicetree/bindings/usb/nxp,isp1760.yaml
F: drivers/usb/isp1760/*
USB LAN78XX ETHERNET DRIVER
M: Woojung Huh <woojung.huh@microchip.com>
M: Thangaraj Samynathan <Thangaraj.S@microchip.com>
M: Rengarajan Sundararajan <Rengarajan.S@microchip.com>
M: UNGLinuxDriver@microchip.com
L: netdev@vger.kernel.org
S: Maintained


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 13
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc6
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -1472,10 +1472,15 @@ EXPORT_SYMBOL_GPL(btmtk_usb_setup);
int btmtk_usb_shutdown(struct hci_dev *hdev)
{
struct btmtk_data *data = hci_get_priv(hdev);
struct btmtk_hci_wmt_params wmt_params;
u8 param = 0;
int err;
err = usb_autopm_get_interface(data->intf);
if (err < 0)
return err;
/* Disable the device */
wmt_params.op = BTMTK_WMT_FUNC_CTRL;
wmt_params.flag = 0;
@ -1486,9 +1491,11 @@ int btmtk_usb_shutdown(struct hci_dev *hdev)
err = btmtk_usb_hci_wmt_sync(hdev, &wmt_params);
if (err < 0) {
bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
usb_autopm_put_interface(data->intf);
return err;
}
usb_autopm_put_interface(data->intf);
return 0;
}
EXPORT_SYMBOL_GPL(btmtk_usb_shutdown);
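
The fix above is an instance of the standard USB runtime-PM bracket: take a PM reference (which resumes an autosuspended interface) before issuing I/O, and drop it on every exit path. A minimal sketch, with send_command() standing in for the real WMT transfer:

static int do_usb_cmd(struct usb_interface *intf)
{
        int err;

        err = usb_autopm_get_interface(intf);   /* resumes if suspended */
        if (err < 0)
                return err;

        err = send_command(intf);               /* hypothetical I/O helper */

        usb_autopm_put_interface(intf);         /* re-enable autosuspend */
        return err;
}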


@ -1381,6 +1381,7 @@ static void btnxpuart_tx_work(struct work_struct *work)
while ((skb = nxp_dequeue(nxpdev))) {
len = serdev_device_write_buf(serdev, skb->data, skb->len);
serdev_device_wait_until_sent(serdev, 0);
hdev->stat.byte_tx += len;
skb_pull(skb, len);


@ -1106,7 +1106,7 @@ int open_for_data(struct cdrom_device_info *cdi)
}
}
cd_dbg(CD_OPEN, "all seems well, opening the devicen");
cd_dbg(CD_OPEN, "all seems well, opening the device\n");
/* all seems well, we can open the device */
ret = cdo->open(cdi, 0); /* open for data */


@ -278,7 +278,8 @@ static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
#else /* !CONFIG_RESET_CONTROLLER */
static int clk_imx8mp_audiomix_reset_controller_register(struct clk_imx8mp_audiomix_priv *priv)
static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
struct clk_imx8mp_audiomix_priv *priv)
{
return 0;
}


@ -779,6 +779,13 @@ static struct ccu_div dpu1_clk = {
},
};
static CLK_FIXED_FACTOR_HW(emmc_sdio_ref_clk, "emmc-sdio-ref",
&video_pll_clk.common.hw, 4, 1, 0);
static const struct clk_parent_data emmc_sdio_ref_clk_pd[] = {
{ .hw = &emmc_sdio_ref_clk.hw },
};
static CCU_GATE(CLK_BROM, brom_clk, "brom", ahb2_cpusys_hclk_pd, 0x100, BIT(4), 0);
static CCU_GATE(CLK_BMU, bmu_clk, "bmu", axi4_cpusys2_aclk_pd, 0x100, BIT(5), 0);
static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_aclk_pd,
@ -798,7 +805,7 @@ static CCU_GATE(CLK_PERISYS_APB4_HCLK, perisys_apb4_hclk, "perisys-apb4-hclk", p
0x150, BIT(12), 0);
static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0);
static CCU_GATE(CLK_CPU2VP, cpu2vp_clk, "cpu2vp", axi_aclk_pd, 0x1e0, BIT(13), 0);
static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", video_pll_clk_pd, 0x204, BIT(30), 0);
static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", emmc_sdio_ref_clk_pd, 0x204, BIT(30), 0);
static CCU_GATE(CLK_GMAC1, gmac1_clk, "gmac1", gmac_pll_clk_pd, 0x204, BIT(26), 0);
static CCU_GATE(CLK_PADCTRL1, padctrl1_clk, "padctrl1", perisys_apb_pclk_pd, 0x204, BIT(24), 0);
static CCU_GATE(CLK_DSMART, dsmart_clk, "dsmart", perisys_apb_pclk_pd, 0x204, BIT(23), 0);
@ -1059,6 +1066,10 @@ static int th1520_clk_probe(struct platform_device *pdev)
return ret;
priv->hws[CLK_PLL_GMAC_100M] = &gmac_pll_clk_100m.hw;
ret = devm_clk_hw_register(dev, &emmc_sdio_ref_clk.hw);
if (ret)
return ret;
ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, priv);
if (ret)
return ret;


@ -442,7 +442,7 @@ static int ebs_iterate_devices(struct dm_target *ti,
static struct target_type ebs_target = {
.name = "ebs",
.version = {1, 0, 1},
.features = DM_TARGET_PASSES_INTEGRITY,
.features = 0,
.module = THIS_MODULE,
.ctr = ebs_ctr,
.dtr = ebs_dtr,


@ -2332,10 +2332,9 @@ static struct thin_c *get_first_thin(struct pool *pool)
struct thin_c *tc = NULL;
rcu_read_lock();
if (!list_empty(&pool->active_thins)) {
tc = list_entry_rcu(pool->active_thins.next, struct thin_c, list);
tc = list_first_or_null_rcu(&pool->active_thins, struct thin_c, list);
if (tc)
thin_get(tc);
}
rcu_read_unlock();
return tc;
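
This closes a classic RCU-list race: list_empty() followed by list_entry_rcu() is two separate reads, so the list can become empty in between. list_first_or_null_rcu() folds the emptiness check and the dereference into a single read. A sketch of the pattern (struct foo and use() are placeholders):

        rcu_read_lock();
        /* Returns NULL if the list is empty at the moment of the read,
         * unlike a separate list_empty() check followed by list_entry_rcu().
         */
        item = list_first_or_null_rcu(&head, struct foo, list);
        if (item)
                use(item);
        rcu_read_unlock();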


@ -39,36 +39,24 @@ static inline u64 fec_interleave(struct dm_verity *v, u64 offset)
return offset + mod * (v->fec->rounds << v->data_dev_block_bits);
}
/*
* Decode an RS block using Reed-Solomon.
*/
static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio,
u8 *data, u8 *fec, int neras)
{
int i;
uint16_t par[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN];
for (i = 0; i < v->fec->roots; i++)
par[i] = fec[i];
return decode_rs8(fio->rs, data, par, v->fec->rsn, NULL, neras,
fio->erasures, 0, NULL);
}
/*
* Read error-correcting codes for the requested RS block. Returns a pointer
* to the data block. Caller is responsible for releasing buf.
*/
static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
unsigned int *offset, struct dm_buffer **buf,
unsigned short ioprio)
unsigned int *offset, unsigned int par_buf_offset,
struct dm_buffer **buf, unsigned short ioprio)
{
u64 position, block, rem;
u8 *res;
/* Part of the parity bytes has already been read; skip to the next block */
if (par_buf_offset)
index++;
position = (index + rsb) * v->fec->roots;
block = div64_u64_rem(position, v->fec->io_size, &rem);
*offset = (unsigned int)rem;
*offset = par_buf_offset ? 0 : (unsigned int)rem;
res = dm_bufio_read_with_ioprio(v->fec->bufio, block, buf, ioprio);
if (IS_ERR(res)) {
@ -128,11 +116,13 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
{
int r, corrected = 0, res;
struct dm_buffer *buf;
unsigned int n, i, offset;
unsigned int n, i, j, offset, par_buf_offset = 0;
uint16_t par_buf[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN];
u8 *par, *block;
struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
par = fec_read_parity(v, rsb, block_offset, &offset,
par_buf_offset, &buf, bio_prio(bio));
if (IS_ERR(par))
return PTR_ERR(par);
@ -142,7 +132,11 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
*/
fec_for_each_buffer_rs_block(fio, n, i) {
block = fec_buffer_rs_block(v, fio, n, i);
res = fec_decode_rs8(v, fio, block, &par[offset], neras);
for (j = 0; j < v->fec->roots - par_buf_offset; j++)
par_buf[par_buf_offset + j] = par[offset + j];
/* Decode an RS block using Reed-Solomon */
res = decode_rs8(fio->rs, block, par_buf, v->fec->rsn,
NULL, neras, fio->erasures, 0, NULL);
if (res < 0) {
r = res;
goto error;
@ -155,12 +149,22 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
if (block_offset >= 1 << v->data_dev_block_bits)
goto done;
/* read the next block when we run out of parity bytes */
offset += v->fec->roots;
/* Read the next block when we run out of parity bytes */
offset += (v->fec->roots - par_buf_offset);
/* Check if parity bytes are split between blocks */
if (offset < v->fec->io_size && (offset + v->fec->roots) > v->fec->io_size) {
par_buf_offset = v->fec->io_size - offset;
for (j = 0; j < par_buf_offset; j++)
par_buf[j] = par[offset + j];
offset += par_buf_offset;
} else
par_buf_offset = 0;
if (offset >= v->fec->io_size) {
dm_bufio_release(buf);
par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
par = fec_read_parity(v, rsb, block_offset, &offset,
par_buf_offset, &buf, bio_prio(bio));
if (IS_ERR(par))
return PTR_ERR(par);
}
@ -724,10 +728,7 @@ int verity_fec_ctr(struct dm_verity *v)
return -E2BIG;
}
if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1))
f->io_size = 1 << v->data_dev_block_bits;
else
f->io_size = v->fec->roots << SECTOR_SHIFT;
f->io_size = 1 << v->data_dev_block_bits;
f->bufio = dm_bufio_client_create(f->dev->bdev,
f->io_size,


@ -917,23 +917,27 @@ static int load_ablock(struct dm_array_cursor *c)
if (c->block)
unlock_ablock(c->info, c->block);
c->block = NULL;
c->ab = NULL;
c->index = 0;
r = dm_btree_cursor_get_value(&c->cursor, &key, &value_le);
if (r) {
DMERR("dm_btree_cursor_get_value failed");
dm_btree_cursor_end(&c->cursor);
goto out;
} else {
r = get_ablock(c->info, le64_to_cpu(value_le), &c->block, &c->ab);
if (r) {
DMERR("get_ablock failed");
dm_btree_cursor_end(&c->cursor);
goto out;
}
}
return 0;
out:
dm_btree_cursor_end(&c->cursor);
c->block = NULL;
c->ab = NULL;
return r;
}
@ -956,10 +960,10 @@ EXPORT_SYMBOL_GPL(dm_array_cursor_begin);
void dm_array_cursor_end(struct dm_array_cursor *c)
{
if (c->block) {
if (c->block)
unlock_ablock(c->info, c->block);
dm_btree_cursor_end(&c->cursor);
}
dm_btree_cursor_end(&c->cursor);
}
EXPORT_SYMBOL_GPL(dm_array_cursor_end);
@ -999,6 +1003,7 @@ int dm_array_cursor_skip(struct dm_array_cursor *c, uint32_t count)
}
count -= remaining;
c->index += (remaining - 1);
r = dm_array_cursor_next(c);
} while (!r);


@ -118,7 +118,7 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
if (err && err != -EIO)
return err;
listlen = fw_list.num_fw_slots;
listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
for (i = 0; i < listlen; i++) {
if (i < ARRAY_SIZE(fw_slotnames))
strscpy(buf, fw_slotnames[i], sizeof(buf));
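
The one-line fix above illustrates a common hardening rule: clamp any device- or firmware-reported count to the size of the local array before using it as a loop bound. The shape of the pattern, sketched with placeholder names:

        /* fw_count comes from the device and may exceed the array */
        n = min_t(unsigned long, fw_count, ARRAY_SIZE(names));
        for (i = 0; i < n; i++)
                use_name(names[i]);     /* hypothetical consumer */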


@ -2897,6 +2897,13 @@ static int bnxt_hwrm_handler(struct bnxt *bp, struct tx_cmp *txcmp)
return 0;
}
static bool bnxt_vnic_is_active(struct bnxt *bp)
{
struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
return vnic->fw_vnic_id != INVALID_HW_RING_ID && vnic->mru > 0;
}
static irqreturn_t bnxt_msix(int irq, void *dev_instance)
{
struct bnxt_napi *bnapi = dev_instance;
@ -3164,7 +3171,7 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
break;
}
}
if (bp->flags & BNXT_FLAG_DIM) {
if ((bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
struct dim_sample dim_sample = {};
dim_update_sample(cpr->event_ctr,
@ -3295,7 +3302,7 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget)
poll_done:
cpr_rx = &cpr->cp_ring_arr[0];
if (cpr_rx->cp_ring_type == BNXT_NQ_HDL_TYPE_RX &&
(bp->flags & BNXT_FLAG_DIM)) {
(bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
struct dim_sample dim_sample = {};
dim_update_sample(cpr->event_ctr,
@ -7266,6 +7273,26 @@ err_out:
return rc;
}
static void bnxt_cancel_dim(struct bnxt *bp)
{
int i;
/* DIM work is initialized in bnxt_enable_napi(). Proceed only
* if NAPI is enabled.
*/
if (!bp->bnapi || test_bit(BNXT_STATE_NAPI_DISABLED, &bp->state))
return;
/* Make sure NAPI sees that the VNIC is disabled */
synchronize_net();
for (i = 0; i < bp->rx_nr_rings; i++) {
struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
struct bnxt_napi *bnapi = rxr->bnapi;
cancel_work_sync(&bnapi->cp_ring.dim.work);
}
}
static int hwrm_ring_free_send_msg(struct bnxt *bp,
struct bnxt_ring_struct *ring,
u32 ring_type, int cmpl_ring_id)
@ -7366,6 +7393,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
}
}
bnxt_cancel_dim(bp);
for (i = 0; i < bp->rx_nr_rings; i++) {
bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path);
bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path);
@ -11330,8 +11358,6 @@ static void bnxt_disable_napi(struct bnxt *bp)
if (bnapi->in_reset)
cpr->sw_stats->rx.rx_resets++;
napi_disable(&bnapi->napi);
if (bnapi->rx_ring)
cancel_work_sync(&cpr->dim.work);
}
}
@ -15613,8 +15639,10 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
bnxt_hwrm_vnic_update(bp, vnic,
VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
}
/* Make sure NAPI sees that the VNIC is disabled */
synchronize_net();
rxr = &bp->rx_ring[idx];
cancel_work_sync(&rxr->bnapi->cp_ring.dim.work);
bnxt_hwrm_rx_ring_free(bp, rxr, false);
bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
rxr->rx_next_cons = 0;
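
The new bnxt_cancel_dim() above follows a general teardown rule: first let NAPI pollers observe the disabled state, then cancel any deferred work they may have queued, so the work cannot be re-armed after the cancel. Condensed, using this driver's field names:

        /* Wait for in-flight NAPI polls to see the disabled VNIC ... */
        synchronize_net();
        /* ... only then is cancelling the per-ring DIM work race-free. */
        for (i = 0; i < bp->rx_nr_rings; i++)
                cancel_work_sync(&bp->rx_ring[i].bnapi->cp_ring.dim.work);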


@ -208,7 +208,7 @@ int bnxt_send_msg(struct bnxt_en_dev *edev,
rc = hwrm_req_replace(bp, req, fw_msg->msg, fw_msg->msg_len);
if (rc)
return rc;
goto drop_req;
hwrm_req_timeout(bp, req, fw_msg->timeout);
resp = hwrm_req_hold(bp, req);
@ -220,6 +220,7 @@ int bnxt_send_msg(struct bnxt_en_dev *edev,
memcpy(fw_msg->resp, resp, resp_len);
}
drop_req:
hwrm_req_drop(bp, req);
return rc;
}


@ -1799,7 +1799,10 @@ void cxgb4_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid,
struct adapter *adap = container_of(t, struct adapter, tids);
struct sk_buff *skb;
WARN_ON(tid_out_of_range(&adap->tids, tid));
if (tid_out_of_range(&adap->tids, tid)) {
dev_err(adap->pdev_dev, "tid %d out of range\n", tid);
return;
}
if (t->tid_tab[tid - adap->tids.tid_base]) {
t->tid_tab[tid - adap->tids.tid_base] = NULL;


@ -2241,14 +2241,18 @@ static void gve_service_task(struct work_struct *work)
static void gve_set_netdev_xdp_features(struct gve_priv *priv)
{
xdp_features_t xdp_features;
if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
priv->dev->xdp_features = NETDEV_XDP_ACT_BASIC;
priv->dev->xdp_features |= NETDEV_XDP_ACT_REDIRECT;
priv->dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
priv->dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
xdp_features = NETDEV_XDP_ACT_BASIC;
xdp_features |= NETDEV_XDP_ACT_REDIRECT;
xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
} else {
priv->dev->xdp_features = 0;
xdp_features = 0;
}
xdp_set_features_flag(priv->dev, xdp_features);
}
static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)


@ -916,9 +916,6 @@ struct hnae3_handle {
u8 netdev_flags;
struct dentry *hnae3_dbgfs;
/* protects concurrent contention between debugfs commands */
struct mutex dbgfs_lock;
char **dbgfs_buf;
/* Network interface message level enabled bits */
u32 msg_enable;


@ -1260,69 +1260,55 @@ static int hns3_dbg_read_cmd(struct hns3_dbg_data *dbg_data,
static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
size_t count, loff_t *ppos)
{
struct hns3_dbg_data *dbg_data = filp->private_data;
char *buf = filp->private_data;
return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
}
static int hns3_dbg_open(struct inode *inode, struct file *filp)
{
struct hns3_dbg_data *dbg_data = inode->i_private;
struct hnae3_handle *handle = dbg_data->handle;
struct hns3_nic_priv *priv = handle->priv;
ssize_t size = 0;
char **save_buf;
char *read_buf;
u32 index;
char *buf;
int ret;
if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
test_bit(HNS3_NIC_STATE_RESETTING, &priv->state))
return -EBUSY;
ret = hns3_dbg_get_cmd_index(dbg_data, &index);
if (ret)
return ret;
mutex_lock(&handle->dbgfs_lock);
save_buf = &handle->dbgfs_buf[index];
buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
if (!buf)
return -ENOMEM;
if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
ret = -EBUSY;
goto out;
ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
buf, hns3_dbg_cmd[index].buf_len);
if (ret) {
kvfree(buf);
return ret;
}
if (*save_buf) {
read_buf = *save_buf;
} else {
read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
if (!read_buf) {
ret = -ENOMEM;
goto out;
}
filp->private_data = buf;
return 0;
}
/* save the buffer addr until the last read operation */
*save_buf = read_buf;
/* get data ready for the first time to read */
ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
read_buf, hns3_dbg_cmd[index].buf_len);
if (ret)
goto out;
}
size = simple_read_from_buffer(buffer, count, ppos, read_buf,
strlen(read_buf));
if (size > 0) {
mutex_unlock(&handle->dbgfs_lock);
return size;
}
out:
/* free the buffer for the last read operation */
if (*save_buf) {
kvfree(*save_buf);
*save_buf = NULL;
}
mutex_unlock(&handle->dbgfs_lock);
return ret;
static int hns3_dbg_release(struct inode *inode, struct file *filp)
{
kvfree(filp->private_data);
filp->private_data = NULL;
return 0;
}
static const struct file_operations hns3_dbg_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.open = hns3_dbg_open,
.read = hns3_dbg_read,
.release = hns3_dbg_release,
};
static int hns3_dbg_bd_file_init(struct hnae3_handle *handle, u32 cmd)
@ -1379,13 +1365,6 @@ int hns3_dbg_init(struct hnae3_handle *handle)
int ret;
u32 i;
handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
ARRAY_SIZE(hns3_dbg_cmd),
sizeof(*handle->dbgfs_buf),
GFP_KERNEL);
if (!handle->dbgfs_buf)
return -ENOMEM;
hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
debugfs_create_dir(name, hns3_dbgfs_root);
handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
@ -1395,8 +1374,6 @@ int hns3_dbg_init(struct hnae3_handle *handle)
debugfs_create_dir(hns3_dbg_dentry[i].name,
handle->hnae3_dbgfs);
mutex_init(&handle->dbgfs_lock);
for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
@ -1425,24 +1402,13 @@ int hns3_dbg_init(struct hnae3_handle *handle)
out:
debugfs_remove_recursive(handle->hnae3_dbgfs);
handle->hnae3_dbgfs = NULL;
mutex_destroy(&handle->dbgfs_lock);
return ret;
}
void hns3_dbg_uninit(struct hnae3_handle *handle)
{
u32 i;
debugfs_remove_recursive(handle->hnae3_dbgfs);
handle->hnae3_dbgfs = NULL;
for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
if (handle->dbgfs_buf[i]) {
kvfree(handle->dbgfs_buf[i]);
handle->dbgfs_buf[i] = NULL;
}
mutex_destroy(&handle->dbgfs_lock);
}
void hns3_dbg_register_debugfs(const char *debugfs_dir_name)
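
The rework above drops the cached per-command buffer (and its dbgfs_lock) in favor of the stock debugfs lifecycle: build the dump in .open, park it in filp->private_data, serve it with simple_read_from_buffer(), and free it in .release. A minimal sketch of that pattern, with fill_dump() and BUF_LEN as assumptions:

static int demo_open(struct inode *inode, struct file *filp)
{
        char *buf = kvzalloc(BUF_LEN, GFP_KERNEL);

        if (!buf)
                return -ENOMEM;
        fill_dump(inode->i_private, buf, BUF_LEN);      /* hypothetical */
        filp->private_data = buf;
        return 0;
}

static ssize_t demo_read(struct file *filp, char __user *ubuf,
                         size_t count, loff_t *ppos)
{
        char *buf = filp->private_data;

        return simple_read_from_buffer(ubuf, count, ppos, buf, strlen(buf));
}

static int demo_release(struct inode *inode, struct file *filp)
{
        kvfree(filp->private_data);
        return 0;
}

static const struct file_operations demo_fops = {
        .owner   = THIS_MODULE,
        .open    = demo_open,
        .read    = demo_read,
        .release = demo_release,
};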


@ -2452,7 +2452,6 @@ static int hns3_nic_set_features(struct net_device *netdev,
return ret;
}
netdev->features = features;
return 0;
}


@ -6,6 +6,7 @@
#include <linux/etherdevice.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
@ -3574,6 +3575,17 @@ static int hclge_set_vf_link_state(struct hnae3_handle *handle, int vf,
return ret;
}
static void hclge_set_reset_pending(struct hclge_dev *hdev,
enum hnae3_reset_type reset_type)
{
/* When an incorrect reset type is executed, the get_reset_level
* function generates the HNAE3_NONE_RESET flag. As a result, this
* type does not need to be marked pending.
*/
if (reset_type != HNAE3_NONE_RESET)
set_bit(reset_type, &hdev->reset_pending);
}
static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
{
u32 cmdq_src_reg, msix_src_reg, hw_err_src_reg;
@ -3594,7 +3606,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
*/
if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) {
dev_info(&hdev->pdev->dev, "IMP reset interrupt\n");
set_bit(HNAE3_IMP_RESET, &hdev->reset_pending);
hclge_set_reset_pending(hdev, HNAE3_IMP_RESET);
set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
*clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B);
hdev->rst_stats.imp_rst_cnt++;
@ -3604,7 +3616,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) {
dev_info(&hdev->pdev->dev, "global reset interrupt\n");
set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending);
hclge_set_reset_pending(hdev, HNAE3_GLOBAL_RESET);
*clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B);
hdev->rst_stats.global_rst_cnt++;
return HCLGE_VECTOR0_EVENT_RST;
@ -3759,7 +3771,7 @@ static int hclge_misc_irq_init(struct hclge_dev *hdev)
snprintf(hdev->misc_vector.name, HNAE3_INT_NAME_LEN, "%s-misc-%s",
HCLGE_NAME, pci_name(hdev->pdev));
ret = request_irq(hdev->misc_vector.vector_irq, hclge_misc_irq_handle,
0, hdev->misc_vector.name, hdev);
IRQF_NO_AUTOEN, hdev->misc_vector.name, hdev);
if (ret) {
hclge_free_vector(hdev, 0);
dev_err(&hdev->pdev->dev, "request misc irq(%d) fail\n",
@ -4052,7 +4064,7 @@ static void hclge_do_reset(struct hclge_dev *hdev)
case HNAE3_FUNC_RESET:
dev_info(&pdev->dev, "PF reset requested\n");
/* schedule again to check later */
set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending);
hclge_set_reset_pending(hdev, HNAE3_FUNC_RESET);
hclge_reset_task_schedule(hdev);
break;
default:
@ -4086,6 +4098,8 @@ static enum hnae3_reset_type hclge_get_reset_level(struct hnae3_ae_dev *ae_dev,
clear_bit(HNAE3_FLR_RESET, addr);
}
clear_bit(HNAE3_NONE_RESET, addr);
if (hdev->reset_type != HNAE3_NONE_RESET &&
rst_level < hdev->reset_type)
return HNAE3_NONE_RESET;
@ -4227,7 +4241,7 @@ static bool hclge_reset_err_handle(struct hclge_dev *hdev)
return false;
} else if (hdev->rst_stats.reset_fail_cnt < MAX_RESET_FAIL_CNT) {
hdev->rst_stats.reset_fail_cnt++;
set_bit(hdev->reset_type, &hdev->reset_pending);
hclge_set_reset_pending(hdev, hdev->reset_type);
dev_info(&hdev->pdev->dev,
"re-schedule reset task(%u)\n",
hdev->rst_stats.reset_fail_cnt);
@ -4470,8 +4484,20 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
enum hnae3_reset_type rst_type)
{
#define HCLGE_SUPPORT_RESET_TYPE \
(BIT(HNAE3_FLR_RESET) | BIT(HNAE3_FUNC_RESET) | \
BIT(HNAE3_GLOBAL_RESET) | BIT(HNAE3_IMP_RESET))
struct hclge_dev *hdev = ae_dev->priv;
if (!(BIT(rst_type) & HCLGE_SUPPORT_RESET_TYPE)) {
/* To prevent reset triggered by hclge_reset_event */
set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
dev_warn(&hdev->pdev->dev, "unsupported reset type %d\n",
rst_type);
return;
}
set_bit(rst_type, &hdev->default_reset_request);
}
@ -11881,9 +11907,6 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
hclge_init_rxd_adv_layout(hdev);
/* Enable MISC vector(vector0) */
hclge_enable_vector(&hdev->misc_vector, true);
ret = hclge_init_wol(hdev);
if (ret)
dev_warn(&pdev->dev,
@ -11896,6 +11919,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
hclge_state_init(hdev);
hdev->last_reset_time = jiffies;
/* Enable MISC vector(vector0) */
enable_irq(hdev->misc_vector.vector_irq);
hclge_enable_vector(&hdev->misc_vector, true);
dev_info(&hdev->pdev->dev, "%s driver initialization finished.\n",
HCLGE_DRIVER_NAME);
@ -12301,7 +12328,7 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
/* Disable MISC vector(vector0) */
hclge_enable_vector(&hdev->misc_vector, false);
synchronize_irq(hdev->misc_vector.vector_irq);
disable_irq(hdev->misc_vector.vector_irq);
/* Disable all hw interrupts */
hclge_config_mac_tnl_int(hdev, false);
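
The request_irq()/enable_irq() split above relies on IRQF_NO_AUTOEN, which registers the handler but leaves the line masked until driver state is ready. The general shape, sketched:

        /* Register early; the IRQ stays disabled until enable_irq(). */
        ret = request_irq(irq, handler, IRQF_NO_AUTOEN, name, dev);
        if (ret)
                return ret;
        /* ... finish setting up everything the handler touches ... */
        enable_irq(irq);

        /* Teardown mirrors it: mask first, then tear down that state. */
        disable_irq(irq);
        free_irq(irq, dev);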


@ -58,6 +58,9 @@ bool hclge_ptp_set_tx_info(struct hnae3_handle *handle, struct sk_buff *skb)
struct hclge_dev *hdev = vport->back;
struct hclge_ptp *ptp = hdev->ptp;
if (!ptp)
return false;
if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) ||
test_and_set_bit(HCLGE_STATE_PTP_TX_HANDLING, &hdev->state)) {
ptp->tx_skipped++;


@ -510,9 +510,9 @@ out:
static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
struct hnae3_knic_private_info *kinfo)
{
#define HCLGE_RING_REG_OFFSET 0x200
#define HCLGE_RING_INT_REG_OFFSET 0x4
struct hnae3_queue *tqp;
int i, j, reg_num;
int data_num_sum;
u32 *reg = data;
@ -533,10 +533,11 @@ static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
reg_num = ARRAY_SIZE(ring_reg_addr_list);
for (j = 0; j < kinfo->num_tqps; j++) {
reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg);
tqp = kinfo->tqp[j];
for (i = 0; i < reg_num; i++)
*reg++ = hclge_read_dev(&hdev->hw,
ring_reg_addr_list[i] +
HCLGE_RING_REG_OFFSET * j);
*reg++ = readl_relaxed(tqp->io_base -
HCLGE_TQP_REG_OFFSET +
ring_reg_addr_list[i]);
}
data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps;


@ -1393,6 +1393,17 @@ static int hclgevf_notify_roce_client(struct hclgevf_dev *hdev,
return ret;
}
static void hclgevf_set_reset_pending(struct hclgevf_dev *hdev,
enum hnae3_reset_type reset_type)
{
/* When an incorrect reset type is executed, the get_reset_level
* function generates the HNAE3_NONE_RESET flag. As a result, this
* type does not need to be marked pending.
*/
if (reset_type != HNAE3_NONE_RESET)
set_bit(reset_type, &hdev->reset_pending);
}
static int hclgevf_reset_wait(struct hclgevf_dev *hdev)
{
#define HCLGEVF_RESET_WAIT_US 20000
@ -1542,7 +1553,7 @@ static void hclgevf_reset_err_handle(struct hclgevf_dev *hdev)
hdev->rst_stats.rst_fail_cnt);
if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT)
set_bit(hdev->reset_type, &hdev->reset_pending);
hclgevf_set_reset_pending(hdev, hdev->reset_type);
if (hclgevf_is_reset_pending(hdev)) {
set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
@ -1662,6 +1673,8 @@ static enum hnae3_reset_type hclgevf_get_reset_level(unsigned long *addr)
clear_bit(HNAE3_FLR_RESET, addr);
}
clear_bit(HNAE3_NONE_RESET, addr);
return rst_level;
}
@ -1671,14 +1684,15 @@ static void hclgevf_reset_event(struct pci_dev *pdev,
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
struct hclgevf_dev *hdev = ae_dev->priv;
dev_info(&hdev->pdev->dev, "received reset request from VF enet\n");
if (hdev->default_reset_request)
hdev->reset_level =
hclgevf_get_reset_level(&hdev->default_reset_request);
else
hdev->reset_level = HNAE3_VF_FUNC_RESET;
dev_info(&hdev->pdev->dev, "received reset request from VF enet, reset level is %d\n",
hdev->reset_level);
/* reset of this VF requested */
set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state);
hclgevf_reset_task_schedule(hdev);
@ -1689,8 +1703,20 @@ static void hclgevf_reset_event(struct pci_dev *pdev,
static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
enum hnae3_reset_type rst_type)
{
#define HCLGEVF_SUPPORT_RESET_TYPE \
(BIT(HNAE3_VF_RESET) | BIT(HNAE3_VF_FUNC_RESET) | \
BIT(HNAE3_VF_PF_FUNC_RESET) | BIT(HNAE3_VF_FULL_RESET) | \
BIT(HNAE3_FLR_RESET) | BIT(HNAE3_VF_EXP_RESET))
struct hclgevf_dev *hdev = ae_dev->priv;
if (!(BIT(rst_type) & HCLGEVF_SUPPORT_RESET_TYPE)) {
/* To prevent reset triggered by hclge_reset_event */
set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
dev_info(&hdev->pdev->dev, "unsupported reset type %d\n",
rst_type);
return;
}
set_bit(rst_type, &hdev->default_reset_request);
}
@ -1847,14 +1873,14 @@ static void hclgevf_reset_service_task(struct hclgevf_dev *hdev)
*/
if (hdev->reset_attempts > HCLGEVF_MAX_RESET_ATTEMPTS_CNT) {
/* prepare for full reset of stack + pcie interface */
set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending);
hclgevf_set_reset_pending(hdev, HNAE3_VF_FULL_RESET);
/* "defer" schedule the reset task again */
set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
} else {
hdev->reset_attempts++;
set_bit(hdev->reset_level, &hdev->reset_pending);
hclgevf_set_reset_pending(hdev, hdev->reset_level);
set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
}
hclgevf_reset_task_schedule(hdev);
@ -1977,7 +2003,7 @@ static enum hclgevf_evt_cause hclgevf_check_evt_cause(struct hclgevf_dev *hdev,
rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING);
dev_info(&hdev->pdev->dev,
"receive reset interrupt 0x%x!\n", rst_ing_reg);
set_bit(HNAE3_VF_RESET, &hdev->reset_pending);
hclgevf_set_reset_pending(hdev, HNAE3_VF_RESET);
set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
*clearval = ~(1U << HCLGEVF_VECTOR0_RST_INT_B);
@ -2287,6 +2313,8 @@ static void hclgevf_state_init(struct hclgevf_dev *hdev)
clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state);
INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task);
/* timer needs to be initialized before misc irq */
timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
mutex_init(&hdev->mbx_resp.mbx_mutex);
sema_init(&hdev->reset_sem, 1);
@ -2986,7 +3014,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
HCLGEVF_DRIVER_NAME);
hclgevf_task_schedule(hdev, round_jiffies_relative(HZ));
timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
return 0;


@ -123,10 +123,10 @@ int hclgevf_get_regs_len(struct hnae3_handle *handle)
void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
void *data)
{
#define HCLGEVF_RING_REG_OFFSET 0x200
#define HCLGEVF_RING_INT_REG_OFFSET 0x4
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
struct hnae3_queue *tqp;
int i, j, reg_um;
u32 *reg = data;
@ -147,10 +147,11 @@ void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
reg_um = ARRAY_SIZE(ring_reg_addr_list);
for (j = 0; j < hdev->num_tqps; j++) {
reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg);
tqp = &hdev->htqp[j].q;
for (i = 0; i < reg_um; i++)
*reg++ = hclgevf_read_dev(&hdev->hw,
ring_reg_addr_list[i] +
HCLGEVF_RING_REG_OFFSET * j);
*reg++ = readl_relaxed(tqp->io_base -
HCLGEVF_TQP_REG_OFFSET +
ring_reg_addr_list[i]);
}
reg_um = ARRAY_SIZE(tqp_intr_reg_addr_list);


@ -2271,6 +2271,8 @@ struct ice_aqc_get_pkg_info_resp {
struct ice_aqc_get_pkg_info pkg_info[];
};
#define ICE_AQC_GET_CGU_MAX_PHASE_ADJ GENMASK(30, 0)
/* Get CGU abilities command response data structure (indirect 0x0C61) */
struct ice_aqc_get_cgu_abilities {
u8 num_inputs;


@ -2064,6 +2064,18 @@ static int ice_dpll_init_worker(struct ice_pf *pf)
return 0;
}
/**
* ice_dpll_phase_range_set - initialize phase adjust range helper
* @range: pointer to phase adjust range struct to be initialized
* @phase_adj: a value to be used as min(-)/max(+) boundary
*/
static void ice_dpll_phase_range_set(struct dpll_pin_phase_adjust_range *range,
u32 phase_adj)
{
range->min = -phase_adj;
range->max = phase_adj;
}
/**
* ice_dpll_init_info_pins_generic - initializes generic pins info
* @pf: board private structure
@ -2105,8 +2117,8 @@ static int ice_dpll_init_info_pins_generic(struct ice_pf *pf, bool input)
for (i = 0; i < pin_num; i++) {
pins[i].idx = i;
pins[i].prop.board_label = labels[i];
pins[i].prop.phase_range.min = phase_adj_max;
pins[i].prop.phase_range.max = -phase_adj_max;
ice_dpll_phase_range_set(&pins[i].prop.phase_range,
phase_adj_max);
pins[i].prop.capabilities = cap;
pins[i].pf = pf;
ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
@ -2152,6 +2164,7 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
struct ice_hw *hw = &pf->hw;
struct ice_dpll_pin *pins;
unsigned long caps;
u32 phase_adj_max;
u8 freq_supp_num;
bool input;
@ -2159,11 +2172,13 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
case ICE_DPLL_PIN_TYPE_INPUT:
pins = pf->dplls.inputs;
num_pins = pf->dplls.num_inputs;
phase_adj_max = pf->dplls.input_phase_adj_max;
input = true;
break;
case ICE_DPLL_PIN_TYPE_OUTPUT:
pins = pf->dplls.outputs;
num_pins = pf->dplls.num_outputs;
phase_adj_max = pf->dplls.output_phase_adj_max;
input = false;
break;
default:
@ -2188,19 +2203,13 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
return ret;
caps |= (DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE |
DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE);
pins[i].prop.phase_range.min =
pf->dplls.input_phase_adj_max;
pins[i].prop.phase_range.max =
-pf->dplls.input_phase_adj_max;
} else {
pins[i].prop.phase_range.min =
pf->dplls.output_phase_adj_max;
pins[i].prop.phase_range.max =
-pf->dplls.output_phase_adj_max;
ret = ice_cgu_get_output_pin_state_caps(hw, i, &caps);
if (ret)
return ret;
}
ice_dpll_phase_range_set(&pins[i].prop.phase_range,
phase_adj_max);
pins[i].prop.capabilities = caps;
ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
if (ret)
@ -2308,8 +2317,10 @@ static int ice_dpll_init_info(struct ice_pf *pf, bool cgu)
dp->dpll_idx = abilities.pps_dpll_idx;
d->num_inputs = abilities.num_inputs;
d->num_outputs = abilities.num_outputs;
d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj);
d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj);
d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj) &
ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj) &
ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
alloc_size = sizeof(*d->inputs) * d->num_inputs;
d->inputs = kzalloc(alloc_size, GFP_KERNEL);


@ -761,9 +761,9 @@ const struct ice_vernier_info_e82x e822_vernier[NUM_ICE_PTP_LNK_SPD] = {
/* rx_desk_rsgb_par */
644531250, /* 644.53125 MHz Reed Solomon gearbox */
/* tx_desk_rsgb_pcs */
644531250, /* 644.53125 MHz Reed Solomon gearbox */
390625000, /* 390.625 MHz Reed Solomon gearbox */
/* rx_desk_rsgb_pcs */
644531250, /* 644.53125 MHz Reed Solomon gearbox */
390625000, /* 390.625 MHz Reed Solomon gearbox */
/* tx_fixed_delay */
1620,
/* pmd_adj_divisor */


@ -68,6 +68,10 @@ static s32 igc_init_nvm_params_base(struct igc_hw *hw)
u32 eecd = rd32(IGC_EECD);
u16 size;
/* failed to read reg and got all F's */
if (!(~eecd))
return -ENXIO;
size = FIELD_GET(IGC_EECD_SIZE_EX_MASK, eecd);
/* Added to a constant, "size" becomes the left-shift value
@ -221,6 +225,8 @@ static s32 igc_get_invariants_base(struct igc_hw *hw)
/* NVM initialization */
ret_val = igc_init_nvm_params_base(hw);
if (ret_val)
goto out;
switch (hw->mac.type) {
case igc_i225:
ret_val = igc_init_nvm_params_i225(hw);
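
The !(~eecd) test added above is the compact idiom for a dead MMIO read: a surprise-removed PCI device returns all ones, and the bitwise NOT of 0xFFFFFFFF is zero. Written out long-hand, the check is equivalent to:

        u32 val = rd32(IGC_EECD);

        if (val == 0xFFFFFFFF)  /* same as !(~val): device likely gone */
                return -ENXIO;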


@ -1013,6 +1013,7 @@ static void cmd_work_handler(struct work_struct *work)
complete(&ent->done);
}
up(&cmd->vars.sem);
complete(&ent->slotted);
return;
}
} else {


@ -13,7 +13,6 @@ fbnic-y := fbnic_csr.o \
fbnic_ethtool.o \
fbnic_fw.o \
fbnic_hw_stats.o \
fbnic_hwmon.o \
fbnic_irq.o \
fbnic_mac.o \
fbnic_netdev.o \


@ -24,7 +24,6 @@ struct fbnic_dev {
struct device *dev;
struct net_device *netdev;
struct dentry *dbg_fbd;
struct device *hwmon;
u32 __iomem *uc_addr0;
u32 __iomem *uc_addr4;
@ -42,7 +41,6 @@ struct fbnic_dev {
struct fbnic_fw_mbx mbx[FBNIC_IPC_MBX_INDICES];
struct fbnic_fw_cap fw_cap;
struct fbnic_fw_completion *cmpl_data;
/* Lock protecting Tx Mailbox queue to prevent possible races */
spinlock_t fw_tx_lock;
@ -151,9 +149,6 @@ void fbnic_devlink_unregister(struct fbnic_dev *fbd);
int fbnic_fw_enable_mbx(struct fbnic_dev *fbd);
void fbnic_fw_disable_mbx(struct fbnic_dev *fbd);
void fbnic_hwmon_register(struct fbnic_dev *fbd);
void fbnic_hwmon_unregister(struct fbnic_dev *fbd);
int fbnic_pcs_irq_enable(struct fbnic_dev *fbd);
void fbnic_pcs_irq_disable(struct fbnic_dev *fbd);


@ -44,13 +44,6 @@ struct fbnic_fw_cap {
u8 link_fec;
};
struct fbnic_fw_completion {
struct {
s32 millivolts;
s32 millidegrees;
} tsene;
};
void fbnic_mbx_init(struct fbnic_dev *fbd);
void fbnic_mbx_clean(struct fbnic_dev *fbd);
void fbnic_mbx_poll(struct fbnic_dev *fbd);


@ -1,81 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) Meta Platforms, Inc. and affiliates. */
#include <linux/hwmon.h>
#include "fbnic.h"
#include "fbnic_mac.h"
static int fbnic_hwmon_sensor_id(enum hwmon_sensor_types type)
{
if (type == hwmon_temp)
return FBNIC_SENSOR_TEMP;
if (type == hwmon_in)
return FBNIC_SENSOR_VOLTAGE;
return -EOPNOTSUPP;
}
static umode_t fbnic_hwmon_is_visible(const void *drvdata,
enum hwmon_sensor_types type,
u32 attr, int channel)
{
if (type == hwmon_temp && attr == hwmon_temp_input)
return 0444;
if (type == hwmon_in && attr == hwmon_in_input)
return 0444;
return 0;
}
static int fbnic_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, long *val)
{
struct fbnic_dev *fbd = dev_get_drvdata(dev);
const struct fbnic_mac *mac = fbd->mac;
int id;
id = fbnic_hwmon_sensor_id(type);
return id < 0 ? id : mac->get_sensor(fbd, id, val);
}
static const struct hwmon_ops fbnic_hwmon_ops = {
.is_visible = fbnic_hwmon_is_visible,
.read = fbnic_hwmon_read,
};
static const struct hwmon_channel_info *fbnic_hwmon_info[] = {
HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
HWMON_CHANNEL_INFO(in, HWMON_I_INPUT),
NULL
};
static const struct hwmon_chip_info fbnic_chip_info = {
.ops = &fbnic_hwmon_ops,
.info = fbnic_hwmon_info,
};
void fbnic_hwmon_register(struct fbnic_dev *fbd)
{
if (!IS_REACHABLE(CONFIG_HWMON))
return;
fbd->hwmon = hwmon_device_register_with_info(fbd->dev, "fbnic",
fbd, &fbnic_chip_info,
NULL);
if (IS_ERR(fbd->hwmon)) {
dev_notice(fbd->dev,
"Failed to register hwmon device %pe\n",
fbd->hwmon);
fbd->hwmon = NULL;
}
}
void fbnic_hwmon_unregister(struct fbnic_dev *fbd)
{
if (!IS_REACHABLE(CONFIG_HWMON) || !fbd->hwmon)
return;
hwmon_device_unregister(fbd->hwmon);
fbd->hwmon = NULL;
}


@ -686,27 +686,6 @@ fbnic_mac_get_eth_mac_stats(struct fbnic_dev *fbd, bool reset,
MAC_STAT_TX_BROADCAST);
}
static int fbnic_mac_get_sensor_asic(struct fbnic_dev *fbd, int id, long *val)
{
struct fbnic_fw_completion fw_cmpl;
s32 *sensor;
switch (id) {
case FBNIC_SENSOR_TEMP:
sensor = &fw_cmpl.tsene.millidegrees;
break;
case FBNIC_SENSOR_VOLTAGE:
sensor = &fw_cmpl.tsene.millivolts;
break;
default:
return -EINVAL;
}
*val = *sensor;
return 0;
}
static const struct fbnic_mac fbnic_mac_asic = {
.init_regs = fbnic_mac_init_regs,
.pcs_enable = fbnic_pcs_enable_asic,
@ -716,7 +695,6 @@ static const struct fbnic_mac fbnic_mac_asic = {
.get_eth_mac_stats = fbnic_mac_get_eth_mac_stats,
.link_down = fbnic_mac_link_down_asic,
.link_up = fbnic_mac_link_up_asic,
.get_sensor = fbnic_mac_get_sensor_asic,
};
/**


@ -47,11 +47,6 @@ enum {
#define FBNIC_LINK_MODE_PAM4 (FBNIC_LINK_50R1)
#define FBNIC_LINK_MODE_MASK (FBNIC_LINK_AUTO - 1)
enum fbnic_sensor_id {
FBNIC_SENSOR_TEMP, /* Temp in millidegrees Centigrade */
FBNIC_SENSOR_VOLTAGE, /* Voltage in millivolts */
};
/* This structure defines the interface hooks for the MAC. The MAC hooks
* will be configured as a const struct provided with a set of function
* pointers.
@ -88,8 +83,6 @@ struct fbnic_mac {
void (*link_down)(struct fbnic_dev *fbd);
void (*link_up)(struct fbnic_dev *fbd, bool tx_pause, bool rx_pause);
int (*get_sensor)(struct fbnic_dev *fbd, int id, long *val);
};
int fbnic_mac_init(struct fbnic_dev *fbd);


@ -296,8 +296,6 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Capture snapshot of hardware stats so netdev can calculate delta */
fbnic_reset_hw_stats(fbd);
fbnic_hwmon_register(fbd);
if (!fbd->dsn) {
dev_warn(&pdev->dev, "Reading serial number failed\n");
goto init_failure_mode;
@ -360,7 +358,6 @@ static void fbnic_remove(struct pci_dev *pdev)
fbnic_netdev_free(fbd);
}
fbnic_hwmon_unregister(fbd);
fbnic_dbg_fbd_exit(fbd);
fbnic_devlink_unregister(fbd);
fbnic_fw_disable_mbx(fbd);


@ -1828,7 +1828,7 @@ static int rtase_alloc_msix(struct pci_dev *pdev, struct rtase_private *tp)
for (i = 0; i < tp->int_nums; i++) {
irq = pci_irq_vector(pdev, i);
if (!irq) {
if (irq < 0) {
pci_disable_msix(pdev);
return irq;
}


@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/iommu.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/module.h>
@ -19,6 +20,8 @@ struct tegra_mgbe {
struct reset_control *rst_mac;
struct reset_control *rst_pcs;
u32 iommu_sid;
void __iomem *hv;
void __iomem *regs;
void __iomem *xpcs;
@ -50,7 +53,6 @@ struct tegra_mgbe {
#define MGBE_WRAP_COMMON_INTR_ENABLE 0x8704
#define MAC_SBD_INTR BIT(2)
#define MGBE_WRAP_AXI_ASID0_CTRL 0x8400
#define MGBE_SID 0x6
static int __maybe_unused tegra_mgbe_suspend(struct device *dev)
{
@ -84,7 +86,7 @@ static int __maybe_unused tegra_mgbe_resume(struct device *dev)
writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE);
/* Program SID */
writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_STATUS);
if ((value & XPCS_WRAP_UPHY_STATUS_TX_P_UP) == 0) {
@ -241,6 +243,12 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
if (IS_ERR(mgbe->xpcs))
return PTR_ERR(mgbe->xpcs);
/* get controller's stream id from iommu property in device tree */
if (!tegra_dev_iommu_get_stream_id(mgbe->dev, &mgbe->iommu_sid)) {
dev_err(mgbe->dev, "failed to get iommu stream id\n");
return -EINVAL;
}
res.addr = mgbe->regs;
res.irq = irq;
@ -346,7 +354,7 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE);
/* Program SID */
writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
plat->flags |= STMMAC_FLAG_SERDES_UP_AFTER_PHY_LINKUP;


@ -334,27 +334,25 @@ int wx_host_interface_command(struct wx *wx, u32 *buffer,
status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000,
timeout * 1000, false, wx, WX_MNG_MBOX_CTL);
buf[0] = rd32(wx, WX_MNG_MBOX);
if ((buf[0] & 0xff0000) >> 16 == 0x80) {
wx_err(wx, "Unknown FW command: 0x%x\n", buffer[0] & 0xff);
status = -EINVAL;
goto rel_out;
}
/* Check command completion */
if (status) {
wx_dbg(wx, "Command has failed with no status valid.\n");
buf[0] = rd32(wx, WX_MNG_MBOX);
if ((buffer[0] & 0xff) != (~buf[0] >> 24)) {
status = -EINVAL;
goto rel_out;
}
if ((buf[0] & 0xff0000) >> 16 == 0x80) {
wx_dbg(wx, "It's unknown cmd.\n");
status = -EINVAL;
goto rel_out;
}
wx_err(wx, "Command has failed with no status valid.\n");
wx_dbg(wx, "write value:\n");
for (i = 0; i < dword_len; i++)
wx_dbg(wx, "%x ", buffer[i]);
wx_dbg(wx, "read value:\n");
for (i = 0; i < dword_len; i++)
wx_dbg(wx, "%x ", buf[i]);
wx_dbg(wx, "\ncheck: %x %x\n", buffer[0] & 0xff, ~buf[0] >> 24);
goto rel_out;
}
if (!return_data)


@ -3072,7 +3072,11 @@ static int ca8210_probe(struct spi_device *spi_device)
spi_set_drvdata(priv->spi, priv);
if (IS_ENABLED(CONFIG_IEEE802154_CA8210_DEBUGFS)) {
cascoda_api_upstream = ca8210_test_int_driver_write;
ca8210_test_interface_init(priv);
ret = ca8210_test_interface_init(priv);
if (ret) {
dev_crit(&spi_device->dev, "ca8210_test_interface_init failed\n");
goto error;
}
} else {
cascoda_api_upstream = NULL;
}


@ -125,6 +125,8 @@ static int mctp_i3c_read(struct mctp_i3c_device *mi)
xfer.data.in = skb_put(skb, mi->mrl);
/* Make sure netif_rx() is read in the same order as i3c. */
mutex_lock(&mi->lock);
rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1);
if (rc < 0)
goto err;
@ -166,8 +168,10 @@ static int mctp_i3c_read(struct mctp_i3c_device *mi)
stats->rx_dropped++;
}
mutex_unlock(&mi->lock);
return 0;
err:
mutex_unlock(&mi->lock);
kfree_skb(skb);
return rc;
}


@ -173,6 +173,11 @@ enum nvme_quirks {
* MSI (but not MSI-X) interrupts are broken and never fire.
*/
NVME_QUIRK_BROKEN_MSI = (1 << 21),
/*
* Align dma pool segment size to 512 bytes
*/
NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22),
};
/*


@ -2834,15 +2834,20 @@ static int nvme_disable_prepare_reset(struct nvme_dev *dev, bool shutdown)
static int nvme_setup_prp_pools(struct nvme_dev *dev)
{
size_t small_align = 256;
dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
NVME_CTRL_PAGE_SIZE,
NVME_CTRL_PAGE_SIZE, 0);
if (!dev->prp_page_pool)
return -ENOMEM;
if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
small_align = 512;
/* Optimisation for I/Os between 4k and 128k */
dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
256, 256, 0);
256, small_align, 0);
if (!dev->prp_small_pool) {
dma_pool_destroy(dev->prp_page_pool);
return -ENOMEM;
@ -3607,7 +3612,7 @@ static const struct pci_device_id nvme_id_table[] = {
{ PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
.driver_data = NVME_QUIRK_BOGUS_NID, },
{ PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
.driver_data = NVME_QUIRK_QDEPTH_ONE },
.driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, },
{ PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */
.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
NVME_QUIRK_BOGUS_NID, },
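
For context, the fourth argument of dma_pool_create() is the minimum alignment (a power of two) of blocks the pool hands out; the quirk above only raises it from 256 to 512 bytes for the affected controller. A hedged sketch of the call, with quirk_512 as a placeholder condition:

        /* name, device, block size, alignment, boundary (0 = none) */
        pool = dma_pool_create("prp list 256", dev, 256,
                               quirk_512 ? 512 : 256, 0);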


@ -2024,14 +2024,6 @@ static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
return __nvme_tcp_alloc_io_queues(ctrl);
}
static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
{
nvme_tcp_stop_io_queues(ctrl);
if (remove)
nvme_remove_io_tag_set(ctrl);
nvme_tcp_free_io_queues(ctrl);
}
static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
{
int ret, nr_queues;
@ -2176,9 +2168,11 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
nvme_sync_io_queues(ctrl);
nvme_tcp_stop_io_queues(ctrl);
nvme_cancel_tagset(ctrl);
if (remove)
if (remove) {
nvme_unquiesce_io_queues(ctrl);
nvme_tcp_destroy_io_queues(ctrl, remove);
nvme_remove_io_tag_set(ctrl);
}
nvme_tcp_free_io_queues(ctrl);
}
static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl,
@ -2267,7 +2261,9 @@ destroy_io:
nvme_sync_io_queues(ctrl);
nvme_tcp_stop_io_queues(ctrl);
nvme_cancel_tagset(ctrl);
nvme_tcp_destroy_io_queues(ctrl, new);
if (new)
nvme_remove_io_tag_set(ctrl);
nvme_tcp_free_io_queues(ctrl);
}
destroy_admin:
nvme_stop_keep_alive(ctrl);


@ -139,7 +139,7 @@ static u16 nvmet_get_smart_log_all(struct nvmet_req *req,
unsigned long idx;
ctrl = req->sq->ctrl;
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
/* we don't have the right data for file backed ns */
if (!ns->bdev)
continue;
@ -331,9 +331,10 @@ static u32 nvmet_format_ana_group(struct nvmet_req *req, u32 grpid,
u32 count = 0;
if (!(req->cmd->get_log_page.lsp & NVME_ANA_LOG_RGO)) {
xa_for_each(&ctrl->subsys->namespaces, idx, ns)
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
if (ns->anagrpid == grpid)
desc->nsids[count++] = cpu_to_le32(ns->nsid);
}
}
desc->grpid = cpu_to_le32(grpid);
@ -772,7 +773,7 @@ static void nvmet_execute_identify_endgrp_list(struct nvmet_req *req)
goto out;
}
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
if (ns->nsid <= min_endgid)
continue;
@ -815,7 +816,7 @@ static void nvmet_execute_identify_nslist(struct nvmet_req *req, bool match_css)
goto out;
}
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
if (ns->nsid <= min_nsid)
continue;
if (match_css && req->ns->csi != req->cmd->identify.csi)


@ -810,18 +810,6 @@ static struct configfs_attribute *nvmet_ns_attrs[] = {
NULL,
};
bool nvmet_subsys_nsid_exists(struct nvmet_subsys *subsys, u32 nsid)
{
struct config_item *ns_item;
char name[12];
snprintf(name, sizeof(name), "%u", nsid);
mutex_lock(&subsys->namespaces_group.cg_subsys->su_mutex);
ns_item = config_group_find_item(&subsys->namespaces_group, name);
mutex_unlock(&subsys->namespaces_group.cg_subsys->su_mutex);
return ns_item != NULL;
}
static void nvmet_ns_release(struct config_item *item)
{
struct nvmet_ns *ns = to_nvmet_ns(item);
@ -2254,12 +2242,17 @@ static ssize_t nvmet_root_discovery_nqn_store(struct config_item *item,
const char *page, size_t count)
{
struct list_head *entry;
char *old_nqn, *new_nqn;
size_t len;
len = strcspn(page, "\n");
if (!len || len > NVMF_NQN_FIELD_LEN - 1)
return -EINVAL;
new_nqn = kstrndup(page, len, GFP_KERNEL);
if (!new_nqn)
return -ENOMEM;
down_write(&nvmet_config_sem);
list_for_each(entry, &nvmet_subsystems_group.cg_children) {
struct config_item *item =
@ -2268,13 +2261,15 @@ static ssize_t nvmet_root_discovery_nqn_store(struct config_item *item,
if (!strncmp(config_item_name(item), page, len)) {
pr_err("duplicate NQN %s\n", config_item_name(item));
up_write(&nvmet_config_sem);
kfree(new_nqn);
return -EINVAL;
}
}
memset(nvmet_disc_subsys->subsysnqn, 0, NVMF_NQN_FIELD_LEN);
memcpy(nvmet_disc_subsys->subsysnqn, page, len);
old_nqn = nvmet_disc_subsys->subsysnqn;
nvmet_disc_subsys->subsysnqn = new_nqn;
up_write(&nvmet_config_sem);
kfree(old_nqn);
return len;
}
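
The store path above switches from rewriting the NQN buffer in place to allocate-swap-free, so no reader ever observes a half-copied string. The shape of the pattern, sketched with placeholder names (the real code holds nvmet_config_sem across the swap):

        new = kstrndup(src, len, GFP_KERNEL);
        if (!new)
                return -ENOMEM;

        down_write(&sem);
        old = obj->str;
        obj->str = new;         /* readers see old or new, never a mix */
        up_write(&sem);

        kfree(old);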


@ -127,7 +127,7 @@ static u32 nvmet_max_nsid(struct nvmet_subsys *subsys)
unsigned long idx;
u32 nsid = 0;
xa_for_each(&subsys->namespaces, idx, cur)
nvmet_for_each_enabled_ns(&subsys->namespaces, idx, cur)
nsid = cur->nsid;
return nsid;
@ -441,11 +441,14 @@ u16 nvmet_req_find_ns(struct nvmet_req *req)
struct nvmet_subsys *subsys = nvmet_req_subsys(req);
req->ns = xa_load(&subsys->namespaces, nsid);
if (unlikely(!req->ns)) {
if (unlikely(!req->ns || !req->ns->enabled)) {
req->error_loc = offsetof(struct nvme_common_command, nsid);
if (nvmet_subsys_nsid_exists(subsys, nsid))
return NVME_SC_INTERNAL_PATH_ERROR;
return NVME_SC_INVALID_NS | NVME_STATUS_DNR;
if (!req->ns) /* ns doesn't exist! */
return NVME_SC_INVALID_NS | NVME_STATUS_DNR;
/* ns exists but it's disabled */
req->ns = NULL;
return NVME_SC_INTERNAL_PATH_ERROR;
}
percpu_ref_get(&req->ns->ref);
@ -583,8 +586,6 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
goto out_unlock;
ret = -EMFILE;
if (subsys->nr_namespaces == NVMET_MAX_NAMESPACES)
goto out_unlock;
ret = nvmet_bdev_ns_enable(ns);
if (ret == -ENOTBLK)
@ -599,38 +600,19 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
nvmet_p2pmem_ns_add_p2p(ctrl, ns);
ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
0, GFP_KERNEL);
if (ret)
goto out_dev_put;
if (ns->nsid > subsys->max_nsid)
subsys->max_nsid = ns->nsid;
ret = xa_insert(&subsys->namespaces, ns->nsid, ns, GFP_KERNEL);
if (ret)
goto out_restore_subsys_maxnsid;
if (ns->pr.enable) {
ret = nvmet_pr_init_ns(ns);
if (ret)
goto out_remove_from_subsys;
goto out_dev_put;
}
subsys->nr_namespaces++;
nvmet_ns_changed(subsys, ns->nsid);
ns->enabled = true;
xa_set_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED);
ret = 0;
out_unlock:
mutex_unlock(&subsys->lock);
return ret;
out_remove_from_subsys:
xa_erase(&subsys->namespaces, ns->nsid);
out_restore_subsys_maxnsid:
subsys->max_nsid = nvmet_max_nsid(subsys);
percpu_ref_exit(&ns->ref);
out_dev_put:
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
@ -649,15 +631,37 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
goto out_unlock;
ns->enabled = false;
xa_erase(&ns->subsys->namespaces, ns->nsid);
if (ns->nsid == subsys->max_nsid)
subsys->max_nsid = nvmet_max_nsid(subsys);
xa_clear_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED);
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
mutex_unlock(&subsys->lock);
if (ns->pr.enable)
nvmet_pr_exit_ns(ns);
mutex_lock(&subsys->lock);
nvmet_ns_changed(subsys, ns->nsid);
nvmet_ns_dev_disable(ns);
out_unlock:
mutex_unlock(&subsys->lock);
}
void nvmet_ns_free(struct nvmet_ns *ns)
{
struct nvmet_subsys *subsys = ns->subsys;
nvmet_ns_disable(ns);
mutex_lock(&subsys->lock);
xa_erase(&subsys->namespaces, ns->nsid);
if (ns->nsid == subsys->max_nsid)
subsys->max_nsid = nvmet_max_nsid(subsys);
mutex_unlock(&subsys->lock);
/*
* Now that we removed the namespaces from the lookup list, we
* can kill the per_cpu ref and wait for any remaining references
@ -671,21 +675,9 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
wait_for_completion(&ns->disable_done);
percpu_ref_exit(&ns->ref);
if (ns->pr.enable)
nvmet_pr_exit_ns(ns);
mutex_lock(&subsys->lock);
subsys->nr_namespaces--;
nvmet_ns_changed(subsys, ns->nsid);
nvmet_ns_dev_disable(ns);
out_unlock:
mutex_unlock(&subsys->lock);
}
void nvmet_ns_free(struct nvmet_ns *ns)
{
nvmet_ns_disable(ns);
down_write(&nvmet_ana_sem);
nvmet_ana_group_enabled[ns->anagrpid]--;
@ -699,15 +691,33 @@ struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid)
{
struct nvmet_ns *ns;
mutex_lock(&subsys->lock);
if (subsys->nr_namespaces == NVMET_MAX_NAMESPACES)
goto out_unlock;
ns = kzalloc(sizeof(*ns), GFP_KERNEL);
if (!ns)
return NULL;
goto out_unlock;
init_completion(&ns->disable_done);
ns->nsid = nsid;
ns->subsys = subsys;
if (percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 0, GFP_KERNEL))
goto out_free;
if (ns->nsid > subsys->max_nsid)
subsys->max_nsid = nsid;
if (xa_insert(&subsys->namespaces, ns->nsid, ns, GFP_KERNEL))
goto out_exit;
subsys->nr_namespaces++;
mutex_unlock(&subsys->lock);
down_write(&nvmet_ana_sem);
ns->anagrpid = NVMET_DEFAULT_ANA_GRPID;
nvmet_ana_group_enabled[ns->anagrpid]++;
@ -718,6 +728,14 @@ struct nvmet_ns *nvmet_ns_alloc(struct nvmet_subsys *subsys, u32 nsid)
ns->csi = NVME_CSI_NVM;
return ns;
out_exit:
subsys->max_nsid = nvmet_max_nsid(subsys);
percpu_ref_exit(&ns->ref);
out_free:
kfree(ns);
out_unlock:
mutex_unlock(&subsys->lock);
return NULL;
}
static void nvmet_update_sq_head(struct nvmet_req *req)
@ -1394,7 +1412,7 @@ static void nvmet_setup_p2p_ns_map(struct nvmet_ctrl *ctrl,
ctrl->p2p_client = get_device(req->p2p_client);
xa_for_each(&ctrl->subsys->namespaces, idx, ns)
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns)
nvmet_p2pmem_ns_add_p2p(ctrl, ns);
}

View file

@ -36,7 +36,7 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
*/
id->nsfeat |= 1 << 4;
/* NPWG = Namespace Preferred Write Granularity. 0's based */
id->npwg = lpp0b;
id->npwg = to0based(bdev_io_min(bdev) / bdev_logical_block_size(bdev));
/* NPWA = Namespace Preferred Write Alignment. 0's based */
id->npwa = id->npwg;
/* NPDG = Namespace Preferred Deallocate Granularity. 0's based */

View file

@ -24,6 +24,7 @@
#define NVMET_DEFAULT_VS NVME_VS(2, 1, 0)
#define NVMET_NS_ENABLED XA_MARK_1
#define NVMET_ASYNC_EVENTS 4
#define NVMET_ERROR_LOG_SLOTS 128
#define NVMET_NO_ERROR_LOC ((u16)-1)
@ -33,6 +34,12 @@
#define NVMET_FR_MAX_SIZE 8
#define NVMET_PR_LOG_QUEUE_SIZE 64
#define nvmet_for_each_ns(xa, index, entry) \
xa_for_each(xa, index, entry)
#define nvmet_for_each_enabled_ns(xa, index, entry) \
xa_for_each_marked(xa, index, entry, NVMET_NS_ENABLED)
/*
* Supported optional AENs:
*/
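
nvmet_for_each_enabled_ns() pushes the enabled/disabled filter into an xarray mark, so iteration skips disabled namespaces without a flag check at every call site. A rough userspace analogue, assuming a plain array plus a bitmask in place of the xarray and XA_MARK_1:

#include <stdio.h>

#define MAX_NS 8

struct ns { int nsid; };

static struct ns *namespaces[MAX_NS];	/* stand-in for the xarray */
static unsigned int enabled_mark;	/* stand-in for NVMET_NS_ENABLED */

/* Analogue of nvmet_for_each_enabled_ns(): visit only marked slots. */
#define for_each_enabled_ns(idx, entry)				\
	for ((idx) = 0; (idx) < MAX_NS; (idx)++)		\
		if (((entry) = namespaces[(idx)]) != NULL &&	\
		    (enabled_mark & (1u << (idx))))

int main(void)
{
	struct ns a = { .nsid = 1 }, b = { .nsid = 2 };
	struct ns *cur;
	int i;

	namespaces[0] = &a;
	namespaces[1] = &b;
	enabled_mark |= 1u << 1;	/* "enable" only nsid 2 */

	for_each_enabled_ns(i, cur)
		printf("enabled nsid %d\n", cur->nsid);	/* prints only 2 */
	return 0;
}
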

View file

@ -60,7 +60,7 @@ u16 nvmet_set_feat_resv_notif_mask(struct nvmet_req *req, u32 mask)
goto success;
}
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
if (ns->pr.enable)
WRITE_ONCE(ns->pr.notify_mask, mask);
}
@ -1056,7 +1056,7 @@ int nvmet_ctrl_init_pr(struct nvmet_ctrl *ctrl)
* nvmet_pr_init_ns(), see more details in nvmet_ns_enable().
* So just check ns->pr.enable.
*/
xa_for_each(&subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&subsys->namespaces, idx, ns) {
if (ns->pr.enable) {
ret = nvmet_pr_alloc_and_insert_pc_ref(ns, ctrl->cntlid,
&ctrl->hostid);
@ -1067,7 +1067,7 @@ int nvmet_ctrl_init_pr(struct nvmet_ctrl *ctrl)
return 0;
free_per_ctrl_refs:
xa_for_each(&subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&subsys->namespaces, idx, ns) {
if (ns->pr.enable) {
pc_ref = xa_erase(&ns->pr_per_ctrl_refs, ctrl->cntlid);
if (pc_ref)
@ -1087,7 +1087,7 @@ void nvmet_ctrl_destroy_pr(struct nvmet_ctrl *ctrl)
kfifo_free(&ctrl->pr_log_mgr.log_queue);
mutex_destroy(&ctrl->pr_log_mgr.lock);
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
if (ns->pr.enable) {
pc_ref = xa_erase(&ns->pr_per_ctrl_refs, ctrl->cntlid);
if (pc_ref)

View file

@ -237,12 +237,6 @@ static inline void ufshcd_vops_config_scaling_param(struct ufs_hba *hba,
hba->vops->config_scaling_param(hba, p, data);
}
static inline void ufshcd_vops_reinit_notify(struct ufs_hba *hba)
{
if (hba->vops && hba->vops->reinit_notify)
hba->vops->reinit_notify(hba);
}
static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba)
{
if (hba->vops && hba->vops->mcq_config_resource)

View file

@ -8858,7 +8858,6 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params)
ufshcd_device_reset(hba);
ufs_put_device_desc(hba);
ufshcd_hba_stop(hba);
ufshcd_vops_reinit_notify(hba);
ret = ufshcd_hba_enable(hba);
if (ret) {
dev_err(hba->dev, "Host controller enable failed\n");
@ -10591,14 +10590,17 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
}
/*
* Set the default power management level for runtime and system PM.
* Set the default power management level for runtime and system PM if
* not set by the host controller drivers.
* Default power saving mode is to keep UFS link in Hibern8 state
* and UFS device in sleep state.
*/
hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
if (!hba->rpm_lvl)
hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
UFS_SLEEP_PWR_MODE,
UIC_LINK_HIBERN8_STATE);
hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
if (!hba->spm_lvl)
hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
UFS_SLEEP_PWR_MODE,
UIC_LINK_HIBERN8_STATE);

View file

@ -368,6 +368,11 @@ static int ufs_qcom_power_up_sequence(struct ufs_hba *hba)
if (ret)
return ret;
if (phy->power_count) {
phy_power_off(phy);
phy_exit(phy);
}
/* phy initialization - calibrate the phy */
ret = phy_init(phy);
if (ret) {
@ -866,6 +871,7 @@ static u32 ufs_qcom_get_ufs_hci_version(struct ufs_hba *hba)
*/
static void ufs_qcom_advertise_quirks(struct ufs_hba *hba)
{
const struct ufs_qcom_drvdata *drvdata = of_device_get_match_data(hba->dev);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
if (host->hw_ver.major == 0x2)
@ -874,9 +880,8 @@ static void ufs_qcom_advertise_quirks(struct ufs_hba *hba)
if (host->hw_ver.major > 0x3)
hba->quirks |= UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH;
if (of_device_is_compatible(hba->dev->of_node, "qcom,sm8550-ufshc") ||
of_device_is_compatible(hba->dev->of_node, "qcom,sm8650-ufshc"))
hba->quirks |= UFSHCD_QUIRK_BROKEN_LSDBS_CAP;
if (drvdata && drvdata->quirks)
hba->quirks |= drvdata->quirks;
}
static void ufs_qcom_set_phy_gear(struct ufs_qcom_host *host)
@ -1064,6 +1069,7 @@ static int ufs_qcom_init(struct ufs_hba *hba)
struct device *dev = hba->dev;
struct ufs_qcom_host *host;
struct ufs_clk_info *clki;
const struct ufs_qcom_drvdata *drvdata = of_device_get_match_data(hba->dev);
host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
if (!host)
@ -1143,6 +1149,9 @@ static int ufs_qcom_init(struct ufs_hba *hba)
dev_warn(dev, "%s: failed to configure the testbus %d\n",
__func__, err);
if (drvdata && drvdata->no_phy_retention)
hba->spm_lvl = UFS_PM_LVL_5;
return 0;
out_variant_clear:
@ -1579,13 +1588,6 @@ static void ufs_qcom_config_scaling_param(struct ufs_hba *hba,
}
#endif
static void ufs_qcom_reinit_notify(struct ufs_hba *hba)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
phy_power_off(host->generic_phy);
}
/* Resources */
static const struct ufshcd_res_info ufs_res_info[RES_MAX] = {
{.name = "ufs_mem",},
@ -1825,7 +1827,6 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.device_reset = ufs_qcom_device_reset,
.config_scaling_param = ufs_qcom_config_scaling_param,
.program_key = ufs_qcom_ice_program_key,
.reinit_notify = ufs_qcom_reinit_notify,
.mcq_config_resource = ufs_qcom_mcq_config_resource,
.get_hba_mac = ufs_qcom_get_hba_mac,
.op_runtime_config = ufs_qcom_op_runtime_config,
@ -1868,9 +1869,15 @@ static void ufs_qcom_remove(struct platform_device *pdev)
platform_device_msi_free_irqs_all(hba->dev);
}
static const struct ufs_qcom_drvdata ufs_qcom_sm8550_drvdata = {
.quirks = UFSHCD_QUIRK_BROKEN_LSDBS_CAP,
.no_phy_retention = true,
};
static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = {
{ .compatible = "qcom,ufshc" },
{ .compatible = "qcom,sm8550-ufshc" },
{ .compatible = "qcom,sm8550-ufshc", .data = &ufs_qcom_sm8550_drvdata },
{ .compatible = "qcom,sm8650-ufshc", .data = &ufs_qcom_sm8550_drvdata },
{},
};
MODULE_DEVICE_TABLE(of, ufs_qcom_of_match);

View file

@ -217,6 +217,11 @@ struct ufs_qcom_host {
bool esi_enabled;
};
struct ufs_qcom_drvdata {
enum ufshcd_quirks quirks;
bool no_phy_retention;
};
static inline u32
ufs_qcom_get_debug_reg_offset(struct ufs_qcom_host *host, u32 reg)
{

View file

@ -1661,14 +1661,15 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
vm_fault_t ret = VM_FAULT_SIGBUS;
if (order && (vmf->address & ((PAGE_SIZE << order) - 1) ||
pfn = vma_to_pfn(vma) + pgoff;
if (order && (pfn & ((1 << order) - 1) ||
vmf->address & ((PAGE_SIZE << order) - 1) ||
vmf->address + (PAGE_SIZE << order) > vma->vm_end)) {
ret = VM_FAULT_FALLBACK;
goto out;
}
pfn = vma_to_pfn(vma);
down_read(&vdev->memory_lock);
if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
@ -1676,18 +1677,18 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
switch (order) {
case 0:
ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
ret = vmf_insert_pfn(vma, vmf->address, pfn);
break;
#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
case PMD_ORDER:
ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn + pgoff,
PFN_DEV), false);
ret = vmf_insert_pfn_pmd(vmf,
__pfn_to_pfn_t(pfn, PFN_DEV), false);
break;
#endif
#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
case PUD_ORDER:
ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn + pgoff,
PFN_DEV), false);
ret = vmf_insert_pfn_pud(vmf,
__pfn_to_pfn_t(pfn, PFN_DEV), false);
break;
#endif
default:
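
The rewritten check also demands that the pfn be aligned to the mapping order, not just the faulting virtual address. A small standalone sketch of the alignment test (PAGE_SIZE, PMD_ORDER, and the sample values are illustrative):

#include <assert.h>
#include <stdbool.h>

#define PAGE_SIZE	4096UL
#define PMD_ORDER	9	/* 2 MiB with 4 KiB pages; illustrative */

/* Both the virtual address and the pfn must be aligned to the order,
 * otherwise the fault has to fall back to smaller mappings. */
static bool can_map_order(unsigned long addr, unsigned long pfn,
			  unsigned int order)
{
	if (pfn & ((1UL << order) - 1))
		return false;		/* pfn not order-aligned */
	if (addr & ((PAGE_SIZE << order) - 1))
		return false;		/* vaddr not order-aligned */
	return true;
}

int main(void)
{
	/* Aligned vaddr but misaligned pfn: the old check allowed this. */
	assert(!can_map_order(0x40000000UL, 0x1001UL, PMD_ORDER));
	assert(can_map_order(0x40000000UL, 0x1200UL, PMD_ORDER));
	return 0;
}
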

View file

@ -286,7 +286,7 @@ static int stm32_iwdg_irq_init(struct platform_device *pdev,
if (!wdt->data->has_early_wakeup)
return 0;
irq = platform_get_irq(pdev, 0);
irq = platform_get_irq_optional(pdev, 0);
if (irq <= 0)
return 0;

View file

@ -57,6 +57,8 @@ static void v9fs_issue_write(struct netfs_io_subrequest *subreq)
int err, len;
len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
if (len > 0)
__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
netfs_write_subrequest_terminated(subreq, len ?: err, false);
}
@ -80,8 +82,10 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
if (pos + total >= i_size_read(rreq->inode))
__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
if (!err)
if (!err) {
subreq->transferred += total;
__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
}
netfs_read_subreq_terminated(subreq, err, false);
}
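
The recurring change in these filesystem hunks is setting NETFS_SREQ_MADE_PROGRESS only when bytes actually moved, so the later retry decisions can tell a short-but-advancing transfer from a stuck one. A hedged model of that bookkeeping, with a simplified struct in place of netfs_io_subrequest:

#include <assert.h>
#include <stdbool.h>

/* Illustrative subrequest state; the real one lives in struct
 * netfs_io_subrequest and uses atomic bit flags. */
struct subreq {
	long transferred;
	long len;
	bool made_progress;
	int retry_count;
};

static void complete_io(struct subreq *s, long bytes)
{
	if (bytes > 0) {
		s->transferred += bytes;
		s->made_progress = true;
	}
}

/* Retry only while each attempt moves at least one byte forward. */
static bool should_retry(const struct subreq *s)
{
	return s->transferred < s->len && s->made_progress;
}

int main(void)
{
	struct subreq s = { .len = 4096 };

	complete_io(&s, 1024);
	assert(should_retry(&s));	/* short read, but progressing */

	s.made_progress = false;	/* cleared before the reissue */
	complete_io(&s, 0);
	assert(!should_retry(&s));	/* no new data: abandon retry */
	return 0;
}
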

View file

@ -122,7 +122,7 @@ static void afs_issue_write_worker(struct work_struct *work)
if (subreq->debug_index == 3)
return netfs_write_subrequest_terminated(subreq, -ENOANO, false);
if (!test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) {
if (!subreq->retry_count) {
set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
return netfs_write_subrequest_terminated(subreq, -EAGAIN, false);
}
@ -149,6 +149,9 @@ static void afs_issue_write_worker(struct work_struct *work)
afs_wait_for_operation(op);
ret = afs_put_operation(op);
switch (ret) {
case 0:
__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
break;
case -EACCES:
case -EPERM:
case -ENOKEY:

View file

@ -4878,25 +4878,29 @@ out_fail:
return ret;
}
struct btrfs_uring_encoded_data {
struct btrfs_ioctl_encoded_io_args args;
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov;
struct iov_iter iter;
};
static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
size_t copy_end_kernel = offsetofend(struct btrfs_ioctl_encoded_io_args, flags);
size_t copy_end;
struct btrfs_ioctl_encoded_io_args args = { 0 };
int ret;
u64 disk_bytenr, disk_io_size;
struct file *file;
struct btrfs_inode *inode;
struct btrfs_fs_info *fs_info;
struct extent_io_tree *io_tree;
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov = iovstack;
struct iov_iter iter;
loff_t pos;
struct kiocb kiocb;
struct extent_state *cached_state = NULL;
u64 start, lockend;
void __user *sqe_addr;
struct btrfs_uring_encoded_data *data = io_uring_cmd_get_async_data(cmd)->op_data;
if (!capable(CAP_SYS_ADMIN)) {
ret = -EPERM;
@ -4910,43 +4914,64 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
if (issue_flags & IO_URING_F_COMPAT) {
#if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
struct btrfs_ioctl_encoded_io_args_32 args32;
copy_end = offsetofend(struct btrfs_ioctl_encoded_io_args_32, flags);
if (copy_from_user(&args32, sqe_addr, copy_end)) {
ret = -EFAULT;
goto out_acct;
}
args.iov = compat_ptr(args32.iov);
args.iovcnt = args32.iovcnt;
args.offset = args32.offset;
args.flags = args32.flags;
#else
return -ENOTTY;
#endif
} else {
copy_end = copy_end_kernel;
if (copy_from_user(&args, sqe_addr, copy_end)) {
ret = -EFAULT;
}
if (!data) {
data = kzalloc(sizeof(*data), GFP_NOFS);
if (!data) {
ret = -ENOMEM;
goto out_acct;
}
io_uring_cmd_get_async_data(cmd)->op_data = data;
if (issue_flags & IO_URING_F_COMPAT) {
#if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
struct btrfs_ioctl_encoded_io_args_32 args32;
if (copy_from_user(&args32, sqe_addr, copy_end)) {
ret = -EFAULT;
goto out_acct;
}
data->args.iov = compat_ptr(args32.iov);
data->args.iovcnt = args32.iovcnt;
data->args.offset = args32.offset;
data->args.flags = args32.flags;
#endif
} else {
if (copy_from_user(&data->args, sqe_addr, copy_end)) {
ret = -EFAULT;
goto out_acct;
}
}
if (data->args.flags != 0) {
ret = -EINVAL;
goto out_acct;
}
data->iov = data->iovstack;
ret = import_iovec(ITER_DEST, data->args.iov, data->args.iovcnt,
ARRAY_SIZE(data->iovstack), &data->iov,
&data->iter);
if (ret < 0)
goto out_acct;
if (iov_iter_count(&data->iter) == 0) {
ret = 0;
goto out_free;
}
}
if (args.flags != 0)
return -EINVAL;
ret = import_iovec(ITER_DEST, args.iov, args.iovcnt, ARRAY_SIZE(iovstack),
&iov, &iter);
if (ret < 0)
goto out_acct;
if (iov_iter_count(&iter) == 0) {
ret = 0;
goto out_free;
}
pos = args.offset;
ret = rw_verify_area(READ, file, &pos, args.len);
pos = data->args.offset;
ret = rw_verify_area(READ, file, &pos, data->args.len);
if (ret < 0)
goto out_free;
@ -4959,15 +4984,16 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
start = ALIGN_DOWN(pos, fs_info->sectorsize);
lockend = start + BTRFS_MAX_UNCOMPRESSED - 1;
ret = btrfs_encoded_read(&kiocb, &iter, &args, &cached_state,
ret = btrfs_encoded_read(&kiocb, &data->iter, &data->args, &cached_state,
&disk_bytenr, &disk_io_size);
if (ret < 0 && ret != -EIOCBQUEUED)
goto out_free;
file_accessed(file);
if (copy_to_user(sqe_addr + copy_end, (const char *)&args + copy_end_kernel,
sizeof(args) - copy_end_kernel)) {
if (copy_to_user(sqe_addr + copy_end,
(const char *)&data->args + copy_end_kernel,
sizeof(data->args) - copy_end_kernel)) {
if (ret == -EIOCBQUEUED) {
unlock_extent(io_tree, start, lockend, &cached_state);
btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
@ -4977,40 +5003,22 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
}
if (ret == -EIOCBQUEUED) {
u64 count;
/*
* If we've optimized things by storing the iovecs on the stack,
* undo this.
*/
if (!iov) {
iov = kmalloc(sizeof(struct iovec) * args.iovcnt, GFP_NOFS);
if (!iov) {
unlock_extent(io_tree, start, lockend, &cached_state);
btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
ret = -ENOMEM;
goto out_acct;
}
memcpy(iov, iovstack, sizeof(struct iovec) * args.iovcnt);
}
count = min_t(u64, iov_iter_count(&iter), disk_io_size);
u64 count = min_t(u64, iov_iter_count(&data->iter), disk_io_size);
/* Match ioctl by not returning past EOF if uncompressed. */
if (!args.compression)
count = min_t(u64, count, args.len);
if (!data->args.compression)
count = min_t(u64, count, data->args.len);
ret = btrfs_uring_read_extent(&kiocb, &iter, start, lockend,
cached_state, disk_bytenr,
disk_io_size, count,
args.compression, iov, cmd);
ret = btrfs_uring_read_extent(&kiocb, &data->iter, start, lockend,
cached_state, disk_bytenr, disk_io_size,
count, data->args.compression,
data->iov, cmd);
goto out_acct;
}
out_free:
kfree(iov);
kfree(data->iov);
out_acct:
if (ret > 0)

View file

@ -1541,6 +1541,10 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg,
u64 extent_gen;
int ret;
if (unlikely(!extent_root)) {
btrfs_err(fs_info, "no valid extent root for scrub");
return -EUCLEAN;
}
memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
stripe->nr_sectors);
scrub_stripe_reset_bitmaps(stripe);

View file

@ -174,10 +174,10 @@ int zlib_compress_folios(struct list_head *ws, struct address_space *mapping,
copy_page(workspace->buf + i * PAGE_SIZE,
data_in);
start += PAGE_SIZE;
workspace->strm.avail_in =
(in_buf_folios << PAGE_SHIFT);
}
workspace->strm.next_in = workspace->buf;
workspace->strm.avail_in = min(bytes_left,
in_buf_folios << PAGE_SHIFT);
} else {
unsigned int pg_off;
unsigned int cur_len;

View file

@ -748,8 +748,9 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
(u64)lim->max_segments << PAGE_SHIFT),
fs_info->sectorsize);
fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED;
if (fs_info->max_zone_append_size < fs_info->max_extent_size)
fs_info->max_extent_size = fs_info->max_zone_append_size;
fs_info->max_extent_size = min_not_zero(fs_info->max_extent_size,
fs_info->max_zone_append_size);
/*
* Check mount options here, because we might change fs_info->zoned

View file

@ -15,6 +15,7 @@
#include <linux/namei.h>
#include <linux/poll.h>
#include <linux/mount.h>
#include <linux/security.h>
#include <linux/statfs.h>
#include <linux/ctype.h>
#include <linux/string.h>
@ -576,7 +577,7 @@ static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args)
*/
static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
{
char *secctx;
int err;
_enter(",%s", args);
@ -585,16 +586,16 @@ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
return -EINVAL;
}
if (cache->secctx) {
if (cache->have_secid) {
pr_err("Second security context specified\n");
return -EINVAL;
}
secctx = kstrdup(args, GFP_KERNEL);
if (!secctx)
return -ENOMEM;
err = security_secctx_to_secid(args, strlen(args), &cache->secid);
if (err)
return err;
cache->secctx = secctx;
cache->have_secid = true;
return 0;
}
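
Instead of keeping the raw LSM context string around, the daemon command now resolves it to a secid immediately and stores only the numeric ID. A userspace analogue of that resolve-once setter, with lookup_id() as a made-up stand-in for security_secctx_to_secid():

#include <assert.h>
#include <stdbool.h>
#include <string.h>

struct cache_cfg {
	unsigned int secid;
	bool have_secid;
};

/* Stand-in for security_secctx_to_secid(): map a label to an ID. */
static int lookup_id(const char *ctx, unsigned int *id)
{
	if (strcmp(ctx, "cachefiles_kernel_t") == 0) {
		*id = 42;
		return 0;
	}
	return -1;	/* unknown context */
}

/* Resolve at config time; reject a second assignment, like the
 * "Second security context specified" check above. */
static int set_secctx(struct cache_cfg *c, const char *args)
{
	int err;

	if (c->have_secid)
		return -1;
	err = lookup_id(args, &c->secid);
	if (err)
		return err;
	c->have_secid = true;
	return 0;
}

int main(void)
{
	struct cache_cfg c = { 0 };

	assert(set_secctx(&c, "cachefiles_kernel_t") == 0 && c.secid == 42);
	assert(set_secctx(&c, "cachefiles_kernel_t") < 0);	/* only once */
	return 0;
}
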
@ -820,7 +821,6 @@ static void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
put_cred(cache->cache_cred);
kfree(cache->rootdirname);
kfree(cache->secctx);
kfree(cache->tag);
_leave("");

View file

@ -122,7 +122,6 @@ struct cachefiles_cache {
#define CACHEFILES_STATE_CHANGED 3 /* T if state changed (poll trigger) */
#define CACHEFILES_ONDEMAND_MODE 4 /* T if in on-demand read mode */
char *rootdirname; /* name of cache root directory */
char *secctx; /* LSM security context */
char *tag; /* cache binding tag */
refcount_t unbind_pincount;/* refcount to do daemon unbind */
struct xarray reqs; /* xarray of pending on-demand requests */
@ -130,6 +129,8 @@ struct cachefiles_cache {
struct xarray ondemand_ids; /* xarray for ondemand_id allocation */
u32 ondemand_id_next;
u32 msg_id_next;
u32 secid; /* LSM security id */
bool have_secid; /* whether "secid" was set */
};
static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)

View file

@ -18,7 +18,7 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
struct cred *new;
int ret;
_enter("{%s}", cache->secctx);
_enter("{%u}", cache->have_secid ? cache->secid : 0);
new = prepare_kernel_cred(current);
if (!new) {
@ -26,8 +26,8 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
goto error;
}
if (cache->secctx) {
ret = set_security_override_from_ctx(new, cache->secctx);
if (cache->have_secid) {
ret = set_security_override(new, cache->secid);
if (ret < 0) {
put_cred(new);
pr_err("Security denies permission to nominate security context: error %d\n",

View file

@ -122,7 +122,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
type = exfat_get_entry_type(ep);
if (type == TYPE_UNUSED) {
brelse(bh);
break;
goto out;
}
if (type != TYPE_FILE && type != TYPE_DIR) {
@ -170,6 +170,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
}
}
out:
dir_entry->namebuf.lfn[0] = '\0';
*cpos = EXFAT_DEN_TO_B(dentry);
return 0;

View file

@ -216,6 +216,16 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
if (err)
goto dec_used_clus;
if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) {
/*
* The cluster chain includes a loop, scan the
* bitmap to get the number of used clusters.
*/
exfat_count_used_clusters(sb, &sbi->used_clusters);
return 0;
}
} while (clu != EXFAT_EOF_CLUSTER);
}
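
The guard works because a loop-free cluster chain can never be longer than the number of clusters in the volume; crossing that bound proves a cycle, at which point the code recounts from the allocation bitmap instead. A self-contained miniature of the same bound:

#include <assert.h>

#define NUM_CLUSTERS	8
#define EOF_CLUSTER	0xFFFFFFFFu

/* Walk a FAT-style chain; return the length, or -1 if the visit count
 * exceeds the number of clusters that exist (i.e. the chain loops). */
static int chain_length(const unsigned int *fat, unsigned int start)
{
	unsigned int clu = start;
	int n = 0;

	while (clu != EOF_CLUSTER) {
		if (++n > NUM_CLUSTERS)
			return -1;	/* impossible length: loop detected */
		clu = fat[clu];
	}
	return n;
}

int main(void)
{
	unsigned int good[NUM_CLUSTERS] = { 1, 2, EOF_CLUSTER };
	unsigned int loop[NUM_CLUSTERS] = { 1, 2, 0 };	/* 0->1->2->0... */

	assert(chain_length(good, 0) == 3);
	assert(chain_length(loop, 0) == -1);
	return 0;
}
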

View file

@ -545,6 +545,7 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
while (pos < new_valid_size) {
u32 len;
struct folio *folio;
unsigned long off;
len = PAGE_SIZE - (pos & (PAGE_SIZE - 1));
if (pos + len > new_valid_size)
@ -554,6 +555,9 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
if (err)
goto out;
off = offset_in_folio(folio, pos);
folio_zero_new_buffers(folio, off, off + len);
err = ops->write_end(file, mapping, pos, len, len, folio, NULL);
if (err < 0)
goto out;
@ -563,6 +567,8 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
cond_resched();
}
return 0;
out:
return err;
}

View file

@ -330,8 +330,8 @@ static int exfat_find_empty_entry(struct inode *inode,
while ((dentry = exfat_search_empty_slot(sb, &hint_femp, p_dir,
num_entries, es)) < 0) {
if (dentry == -EIO)
break;
if (dentry != -ENOSPC)
return dentry;
if (exfat_check_max_dentries(inode))
return -ENOSPC;

View file

@ -22,6 +22,7 @@
#include <linux/close_range.h>
#include <linux/file_ref.h>
#include <net/sock.h>
#include <linux/init_task.h>
#include "internal.h"

View file

@ -1681,6 +1681,8 @@ static int fuse_dir_open(struct inode *inode, struct file *file)
*/
if (ff->open_flags & (FOPEN_STREAM | FOPEN_NONSEEKABLE))
nonseekable_open(inode, file);
if (!(ff->open_flags & FOPEN_KEEP_CACHE))
invalidate_inode_pages2(inode->i_mapping);
}
return err;

View file

@ -349,11 +349,13 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
goto bail_no_root;
res = hfs_cat_find_brec(sb, HFS_ROOT_CNID, &fd);
if (!res) {
if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) {
if (fd.entrylength != sizeof(rec.dir)) {
res = -EIO;
goto bail_hfs_find;
}
hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength);
if (rec.type != HFS_CDR_DIR)
res = -EIO;
}
if (res)
goto bail_hfs_find;

View file

@ -1774,7 +1774,8 @@ static bool iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t pos)
*/
static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
struct writeback_control *wbc, struct folio *folio,
struct inode *inode, loff_t pos, unsigned len)
struct inode *inode, loff_t pos, loff_t end_pos,
unsigned len)
{
struct iomap_folio_state *ifs = folio->private;
size_t poff = offset_in_folio(folio, pos);
@ -1793,15 +1794,60 @@ new_ioend:
if (ifs)
atomic_add(len, &ifs->write_bytes_pending);
/*
* Clamp io_offset and io_size to the incore EOF so that ondisk
* file size updates in the ioend completion are byte-accurate.
* This avoids recovering files with zeroed tail regions when
* writeback races with appending writes:
*
* Thread 1: Thread 2:
* ------------ -----------
* write [A, A+B]
* update inode size to A+B
* submit I/O [A, A+BS]
* write [A+B, A+B+C]
* update inode size to A+B+C
* <I/O completes, updates disk size to min(A+B+C, A+BS)>
* <power failure>
*
* After reboot:
* 1) with A+B+C < A+BS, the file has zero padding in range
* [A+B, A+B+C]
*
* |< Block Size (BS) >|
* |DDDDDDDDDDDD0000000000000|
* ^ ^ ^
* A A+B A+B+C
* (EOF)
*
* 2) with A+B+C > A+BS, the file has zero padding in range
* [A+B, A+BS]
*
* |< Block Size (BS) >|< Block Size (BS) >|
* |DDDDDDDDDDDD0000000000000|00000000000000000000000000|
* ^ ^ ^ ^
* A A+B A+BS A+B+C
* (EOF)
*
* D = Valid Data
* 0 = Zero Padding
*
* Note that this defeats the ability to chain the ioends of
* appending writes.
*/
wpc->ioend->io_size += len;
if (wpc->ioend->io_offset + wpc->ioend->io_size > end_pos)
wpc->ioend->io_size = end_pos - wpc->ioend->io_offset;
wbc_account_cgroup_owner(wbc, folio, len);
return 0;
}
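
A quick arithmetic check of the clamp, reusing the A/B/BS naming from the comment above (the concrete values are illustrative): the size recorded at ioend completion must never pass the in-core EOF.

#include <assert.h>

typedef unsigned long long u64;

/* Clamp an ioend to the in-core EOF, as in iomap_add_to_ioend(). */
static u64 clamped_io_size(u64 io_offset, u64 io_size, u64 end_pos)
{
	if (io_offset + io_size > end_pos)
		io_size = end_pos - io_offset;
	return io_size;
}

int main(void)
{
	u64 A = 0, B = 700, BS = 4096;

	/* Writeback covers a whole block [A, A+BS), but only A+B bytes
	 * were in-core when the I/O was built: record A+B, not A+BS,
	 * so a crash can't expose the zeroed tail as valid data. */
	assert(clamped_io_size(A, BS, A + B) == B);

	/* I/O entirely below EOF is left alone. */
	assert(clamped_io_size(A, BS, A + 2 * BS) == BS);
	return 0;
}
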
static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
struct writeback_control *wbc, struct folio *folio,
struct inode *inode, u64 pos, unsigned dirty_len,
unsigned *count)
struct inode *inode, u64 pos, u64 end_pos,
unsigned dirty_len, unsigned *count)
{
int error;
@ -1826,7 +1872,7 @@ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
break;
default:
error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos,
map_len);
end_pos, map_len);
if (!error)
(*count)++;
break;
@ -1897,11 +1943,11 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
* remaining memory is zeroed when mapped, and writes to that
* region are not written out to the file.
*
* Also adjust the writeback range to skip all blocks entirely
* beyond i_size.
* Also adjust the end_pos to the end of file and skip writeback
* for all blocks entirely beyond i_size.
*/
folio_zero_segment(folio, poff, folio_size(folio));
*end_pos = round_up(isize, i_blocksize(inode));
*end_pos = isize;
}
return true;
@ -1914,6 +1960,7 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
struct inode *inode = folio->mapping->host;
u64 pos = folio_pos(folio);
u64 end_pos = pos + folio_size(folio);
u64 end_aligned = 0;
unsigned count = 0;
int error = 0;
u32 rlen;
@ -1955,9 +2002,10 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
/*
* Walk through the folio to find dirty areas to write back.
*/
while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) {
end_aligned = round_up(end_pos, i_blocksize(inode));
while ((rlen = iomap_find_dirty_range(folio, &pos, end_aligned))) {
error = iomap_writepage_map_blocks(wpc, wbc, folio, inode,
pos, rlen, &count);
pos, end_pos, rlen, &count);
if (error)
break;
pos += rlen;

View file

@ -772,9 +772,9 @@ start_journal_io:
/*
* If the journal is not located on the file system device,
* then we must flush the file system device before we issue
* the commit record
* the commit record and update the journal tail sequence.
*/
if (commit_transaction->t_need_data_flush &&
if ((commit_transaction->t_need_data_flush || update_tail) &&
(journal->j_fs_dev != journal->j_dev) &&
(journal->j_flags & JBD2_BARRIER))
blkdev_issue_flush(journal->j_fs_dev);

View file

@ -654,7 +654,7 @@ static void flush_descriptor(journal_t *journal,
set_buffer_jwrite(descriptor);
BUFFER_TRACE(descriptor, "write");
set_buffer_dirty(descriptor);
write_dirty_buffer(descriptor, REQ_SYNC);
write_dirty_buffer(descriptor, JBD2_JOURNAL_REQ_FLAGS);
}
#endif

View file

@ -2055,9 +2055,15 @@ SYSCALL_DEFINE1(oldumount, char __user *, name)
static bool is_mnt_ns_file(struct dentry *dentry)
{
struct ns_common *ns;
/* Is this a proxy for a mount namespace? */
return dentry->d_op == &ns_dentry_operations &&
dentry->d_fsdata == &mntns_operations;
if (dentry->d_op != &ns_dentry_operations)
return false;
ns = d_inode(dentry)->i_private;
return ns->ops == &mntns_operations;
}
struct ns_common *from_mnt_ns(struct mnt_namespace *mnt)

View file

@ -275,22 +275,14 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
netfs_stat(&netfs_n_rh_download);
if (rreq->netfs_ops->prepare_read) {
ret = rreq->netfs_ops->prepare_read(subreq);
if (ret < 0) {
atomic_dec(&rreq->nr_outstanding);
netfs_put_subrequest(subreq, false,
netfs_sreq_trace_put_cancel);
break;
}
if (ret < 0)
goto prep_failed;
trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
}
slice = netfs_prepare_read_iterator(subreq);
if (slice < 0) {
atomic_dec(&rreq->nr_outstanding);
netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
ret = slice;
break;
}
if (slice < 0)
goto prep_iter_failed;
rreq->netfs_ops->issue_read(subreq);
goto done;
@ -302,6 +294,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
netfs_stat(&netfs_n_rh_zero);
slice = netfs_prepare_read_iterator(subreq);
if (slice < 0)
goto prep_iter_failed;
__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
netfs_read_subreq_terminated(subreq, 0, false);
goto done;
@ -310,6 +304,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
if (source == NETFS_READ_FROM_CACHE) {
trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
slice = netfs_prepare_read_iterator(subreq);
if (slice < 0)
goto prep_iter_failed;
netfs_read_cache_to_pagecache(rreq, subreq);
goto done;
}
@ -318,6 +314,14 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
WARN_ON_ONCE(1);
break;
prep_iter_failed:
ret = slice;
prep_failed:
subreq->error = ret;
atomic_dec(&rreq->nr_outstanding);
netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
break;
done:
size -= slice;
start += slice;

View file

@ -104,7 +104,6 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip);
wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
TASK_UNINTERRUPTIBLE);
smp_rmb(); /* Read error/transferred after RIP flag */
ret = wreq->error;
if (ret == 0) {
ret = wreq->transferred;

View file

@ -62,10 +62,14 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
} else {
trace_netfs_folio(folio, netfs_folio_trace_read_done);
}
folioq_clear(folioq, slot);
} else {
// TODO: Use of PG_private_2 is deprecated.
if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot);
else
folioq_clear(folioq, slot);
}
if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {
@ -77,8 +81,6 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
folio_unlock(folio);
}
}
folioq_clear(folioq, slot);
}
/*
@ -247,16 +249,17 @@ donation_changed:
/* Deal with the trickiest case: that this subreq is in the middle of a
* folio, not touching either edge, but finishes first. In such a
* case, we donate to the previous subreq, if there is one, so that the
* donation is only handled when that completes - and remove this
* subreq from the list.
* case, we donate to the previous subreq, if there is one and if it is
* contiguous, so that the donation is only handled when that completes
* - and remove this subreq from the list.
*
* If the previous subreq finished first, we will have acquired their
* donation and should be able to unlock folios and/or donate nextwards.
*/
if (!subreq->consumed &&
!prev_donated &&
!list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
!list_is_first(&subreq->rreq_link, &rreq->subrequests) &&
subreq->start == prev->start + prev->len) {
prev = list_prev_entry(subreq, rreq_link);
WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
subreq->start += subreq->len;
@ -378,8 +381,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq)
task_io_account_read(rreq->transferred);
trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
trace_netfs_rreq(rreq, netfs_rreq_trace_done);
netfs_clear_subrequests(rreq, false);
@ -438,7 +440,7 @@ void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq,
rreq->origin == NETFS_READPAGE ||
rreq->origin == NETFS_READ_FOR_WRITE)) {
netfs_consume_read_data(subreq, was_async);
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
}
}
EXPORT_SYMBOL(netfs_read_subreq_progress);
@ -497,7 +499,7 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
rreq->origin == NETFS_READPAGE ||
rreq->origin == NETFS_READ_FOR_WRITE)) {
netfs_consume_read_data(subreq, was_async);
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
}
rreq->transferred += subreq->transferred;
}
@ -511,10 +513,13 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq,
} else {
trace_netfs_sreq(subreq, netfs_sreq_trace_short);
if (subreq->transferred > subreq->consumed) {
__set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags);
} else if (!__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) {
/* If we didn't read new data, abandon retry. */
if (subreq->retry_count &&
test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) {
__set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags);
}
} else if (test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) {
__set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags);
} else {

View file

@ -170,6 +170,10 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq)
trace_netfs_write(wreq, netfs_write_trace_copy_to_cache);
netfs_stat(&netfs_n_wh_copy_to_cache);
if (!wreq->io_streams[1].avail) {
netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
goto couldnt_start;
}
for (;;) {
error = netfs_pgpriv2_copy_folio(wreq, folio);

View file

@ -49,13 +49,15 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
* up to the first permanently failed one.
*/
if (!rreq->netfs_ops->prepare_read &&
!test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) {
!rreq->cache_resources.ops) {
struct netfs_io_subrequest *subreq;
list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
break;
if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
subreq->retry_count++;
netfs_reset_iter(subreq);
netfs_reissue_read(rreq, subreq);
}
@ -137,7 +139,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
stream0->sreq_max_len = subreq->len;
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
subreq->retry_count++;
spin_lock_bh(&rreq->lock);
list_add_tail(&subreq->rreq_link, &rreq->subrequests);
@ -213,7 +216,6 @@ abandon:
subreq->error = -ENOMEM;
__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
__clear_bit(NETFS_SREQ_RETRYING, &subreq->flags);
}
spin_lock_bh(&rreq->lock);
list_splice_tail_init(&queue, &rreq->subrequests);

View file

@ -179,7 +179,6 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
struct iov_iter source = subreq->io_iter;
iov_iter_revert(&source, subreq->len - source.count);
__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
netfs_reissue_write(stream, subreq, &source);
}
@ -234,7 +233,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
/* Renegotiate max_len (wsize) */
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
subreq->retry_count++;
stream->prepare_write(subreq);
part = min(len, stream->sreq_max_len);
@ -279,7 +278,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
subreq->start = start;
subreq->debug_index = atomic_inc_return(&wreq->subreq_counter);
subreq->stream_nr = to->stream_nr;
__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
subreq->retry_count = 1;
trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
refcount_read(&subreq->ref),
@ -501,8 +500,7 @@ reassess_streams:
goto need_retry;
if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
trace_netfs_rreq(wreq, netfs_rreq_trace_unpause);
clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags);
wake_up_bit(&wreq->flags, NETFS_RREQ_PAUSE);
clear_and_wake_up_bit(NETFS_RREQ_PAUSE, &wreq->flags);
}
if (notes & NEED_REASSESS) {
@ -605,8 +603,7 @@ void netfs_write_collection_worker(struct work_struct *work)
_debug("finished");
trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS);
clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
if (wreq->iocb) {
size_t written = min(wreq->transferred, wreq->len);
@ -714,8 +711,7 @@ void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
wake_up_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS);
clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
/* If we are at the head of the queue, wake up the collector,
* transferring a ref to it if we were the ones to do so.

View file

@ -244,6 +244,8 @@ void netfs_reissue_write(struct netfs_io_stream *stream,
iov_iter_advance(source, size);
iov_iter_truncate(&subreq->io_iter, size);
subreq->retry_count++;
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
netfs_do_issue_write(stream, subreq);
}

View file

@ -263,6 +263,12 @@ int nfs_netfs_readahead(struct readahead_control *ractl)
static atomic_t nfs_netfs_debug_id;
static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file)
{
if (!file) {
if (WARN_ON_ONCE(rreq->origin != NETFS_PGPRIV2_COPY_TO_CACHE))
return -EIO;
return 0;
}
rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file));
rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id);
/* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */
@ -274,7 +280,8 @@ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *fi
static void nfs_netfs_free_request(struct netfs_io_request *rreq)
{
put_nfs_open_context(rreq->netfs_priv);
if (rreq->netfs_priv)
put_nfs_open_context(rreq->netfs_priv);
}
static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq)

View file

@ -47,10 +47,8 @@ static void show_mark_fhandle(struct seq_file *m, struct inode *inode)
size = f->handle_bytes >> 2;
ret = exportfs_encode_fid(inode, (struct fid *)f->f_handle, &size);
if ((ret == FILEID_INVALID) || (ret < 0)) {
WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret);
if ((ret == FILEID_INVALID) || (ret < 0))
return;
}
f->handle_type = ret;
f->handle_bytes = size * sizeof(u32);

View file

@ -893,7 +893,7 @@ static int ocfs2_get_next_id(struct super_block *sb, struct kqid *qid)
int status = 0;
trace_ocfs2_get_next_id(from_kqid(&init_user_ns, *qid), type);
if (!sb_has_quota_loaded(sb, type)) {
if (!sb_has_quota_active(sb, type)) {
status = -ESRCH;
goto out;
}

View file

@ -867,6 +867,7 @@ out:
brelse(oinfo->dqi_libh);
brelse(oinfo->dqi_lqi_bh);
kfree(oinfo);
info->dqi_priv = NULL;
return status;
}

View file

@ -415,13 +415,13 @@ int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upperdentry,
return err;
}
struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode,
bool is_upper)
{
struct ovl_fh *fh;
int fh_type, dwords;
int buflen = MAX_HANDLE_SZ;
uuid_t *uuid = &real->d_sb->s_uuid;
uuid_t *uuid = &realinode->i_sb->s_uuid;
int err;
/* Make sure the real fid stays 32bit aligned */
@ -438,13 +438,13 @@ struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
* the price of reconnecting the dentry.
*/
dwords = buflen >> 2;
fh_type = exportfs_encode_fh(real, (void *)fh->fb.fid, &dwords, 0);
fh_type = exportfs_encode_inode_fh(realinode, (void *)fh->fb.fid,
&dwords, NULL, 0);
buflen = (dwords << 2);
err = -EIO;
if (WARN_ON(fh_type < 0) ||
WARN_ON(buflen > MAX_HANDLE_SZ) ||
WARN_ON(fh_type == FILEID_INVALID))
if (fh_type < 0 || fh_type == FILEID_INVALID ||
WARN_ON(buflen > MAX_HANDLE_SZ))
goto out_err;
fh->fb.version = OVL_FH_VERSION;
@ -480,7 +480,7 @@ struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin)
if (!ovl_can_decode_fh(origin->d_sb))
return NULL;
return ovl_encode_real_fh(ofs, origin, false);
return ovl_encode_real_fh(ofs, d_inode(origin), false);
}
int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
@ -505,7 +505,7 @@ static int ovl_set_upper_fh(struct ovl_fs *ofs, struct dentry *upper,
const struct ovl_fh *fh;
int err;
fh = ovl_encode_real_fh(ofs, upper, true);
fh = ovl_encode_real_fh(ofs, d_inode(upper), true);
if (IS_ERR(fh))
return PTR_ERR(fh);

View file

@ -176,35 +176,37 @@ static int ovl_connect_layer(struct dentry *dentry)
*
* Return 0 for upper file handle, > 0 for lower file handle or < 0 on error.
*/
static int ovl_check_encode_origin(struct dentry *dentry)
static int ovl_check_encode_origin(struct inode *inode)
{
struct ovl_fs *ofs = OVL_FS(dentry->d_sb);
struct ovl_fs *ofs = OVL_FS(inode->i_sb);
bool decodable = ofs->config.nfs_export;
struct dentry *dentry;
int err;
/* No upper layer? */
if (!ovl_upper_mnt(ofs))
return 1;
/* Lower file handle for non-upper non-decodable */
if (!ovl_dentry_upper(dentry) && !decodable)
if (!ovl_inode_upper(inode) && !decodable)
return 1;
/* Upper file handle for pure upper */
if (!ovl_dentry_lower(dentry))
if (!ovl_inode_lower(inode))
return 0;
/*
* Root is never indexed, so if there's an upper layer, encode upper for
* root.
*/
if (dentry == dentry->d_sb->s_root)
if (inode == d_inode(inode->i_sb->s_root))
return 0;
/*
* Upper decodable file handle for non-indexed upper.
*/
if (ovl_dentry_upper(dentry) && decodable &&
!ovl_test_flag(OVL_INDEX, d_inode(dentry)))
if (ovl_inode_upper(inode) && decodable &&
!ovl_test_flag(OVL_INDEX, inode))
return 0;
/*
@ -213,14 +215,23 @@ static int ovl_check_encode_origin(struct dentry *dentry)
* ovl_connect_layer() will try to make origin's layer "connected" by
* copying up a "connectable" ancestor.
*/
if (d_is_dir(dentry) && decodable)
return ovl_connect_layer(dentry);
if (!decodable || !S_ISDIR(inode->i_mode))
return 1;
dentry = d_find_any_alias(inode);
if (!dentry)
return -ENOENT;
err = ovl_connect_layer(dentry);
dput(dentry);
if (err < 0)
return err;
/* Lower file handle for indexed and non-upper dir/non-dir */
return 1;
}
static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct inode *inode,
u32 *fid, int buflen)
{
struct ovl_fh *fh = NULL;
@ -231,13 +242,13 @@ static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
* Check if we should encode a lower or upper file handle and maybe
* copy up an ancestor to make lower file handle connectable.
*/
err = enc_lower = ovl_check_encode_origin(dentry);
err = enc_lower = ovl_check_encode_origin(inode);
if (enc_lower < 0)
goto fail;
/* Encode an upper or lower file handle */
fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_dentry_lower(dentry) :
ovl_dentry_upper(dentry), !enc_lower);
fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_inode_lower(inode) :
ovl_inode_upper(inode), !enc_lower);
if (IS_ERR(fh))
return PTR_ERR(fh);
@ -251,8 +262,8 @@ out:
return err;
fail:
pr_warn_ratelimited("failed to encode file handle (%pd2, err=%i)\n",
dentry, err);
pr_warn_ratelimited("failed to encode file handle (ino=%lu, err=%i)\n",
inode->i_ino, err);
goto out;
}
@ -260,19 +271,13 @@ static int ovl_encode_fh(struct inode *inode, u32 *fid, int *max_len,
struct inode *parent)
{
struct ovl_fs *ofs = OVL_FS(inode->i_sb);
struct dentry *dentry;
int bytes, buflen = *max_len << 2;
/* TODO: encode connectable file handles */
if (parent)
return FILEID_INVALID;
dentry = d_find_any_alias(inode);
if (!dentry)
return FILEID_INVALID;
bytes = ovl_dentry_to_fid(ofs, dentry, fid, buflen);
dput(dentry);
bytes = ovl_dentry_to_fid(ofs, inode, fid, buflen);
if (bytes <= 0)
return FILEID_INVALID;

View file

@ -542,7 +542,7 @@ int ovl_verify_origin_xattr(struct ovl_fs *ofs, struct dentry *dentry,
struct ovl_fh *fh;
int err;
fh = ovl_encode_real_fh(ofs, real, is_upper);
fh = ovl_encode_real_fh(ofs, d_inode(real), is_upper);
err = PTR_ERR(fh);
if (IS_ERR(fh)) {
fh = NULL;
@ -738,7 +738,7 @@ int ovl_get_index_name(struct ovl_fs *ofs, struct dentry *origin,
struct ovl_fh *fh;
int err;
fh = ovl_encode_real_fh(ofs, origin, false);
fh = ovl_encode_real_fh(ofs, d_inode(origin), false);
if (IS_ERR(fh))
return PTR_ERR(fh);

View file

@ -865,7 +865,7 @@ int ovl_copy_up_with_data(struct dentry *dentry);
int ovl_maybe_copy_up(struct dentry *dentry, int flags);
int ovl_copy_xattr(struct super_block *sb, const struct path *path, struct dentry *new);
int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upper, struct kstat *stat);
struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode,
bool is_upper);
struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin);
int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,

View file

@ -1810,7 +1810,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
}
for (; addr != end; addr += PAGE_SIZE, idx++) {
unsigned long cur_flags = flags;
u64 cur_flags = flags;
pagemap_entry_t pme;
if (folio && (flags & PM_PRESENT) &&

View file

@ -179,8 +179,7 @@ static int qnx6_statfs(struct dentry *dentry, struct kstatfs *buf)
*/
static const char *qnx6_checkroot(struct super_block *s)
{
static char match_root[2][3] = {".\0\0", "..\0"};
int i, error = 0;
int error = 0;
struct qnx6_dir_entry *dir_entry;
struct inode *root = d_inode(s->s_root);
struct address_space *mapping = root->i_mapping;
@ -189,11 +188,9 @@ static const char *qnx6_checkroot(struct super_block *s)
if (IS_ERR(folio))
return "error reading root directory";
dir_entry = kmap_local_folio(folio, 0);
for (i = 0; i < 2; i++) {
/* maximum 3 bytes - due to match_root limitation */
if (strncmp(dir_entry[i].de_fname, match_root[i], 3))
error = 1;
}
if (memcmp(dir_entry[0].de_fname, ".", 2) ||
memcmp(dir_entry[1].de_fname, "..", 3))
error = 1;
folio_release_kmap(folio, dir_entry);
if (error)
return "error reading root directory.";

View file

@ -1319,14 +1319,16 @@ cifs_readv_callback(struct mid_q_entry *mid)
}
if (rdata->result == -ENODATA) {
__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
rdata->result = 0;
__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
} else {
size_t trans = rdata->subreq.transferred + rdata->got_bytes;
if (trans < rdata->subreq.len &&
rdata->subreq.start + trans == ictx->remote_i_size) {
__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
rdata->result = 0;
__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
} else if (rdata->got_bytes > 0) {
__set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags);
}
}
@ -1670,10 +1672,13 @@ cifs_writev_callback(struct mid_q_entry *mid)
if (written > wdata->subreq.len)
written &= 0xFFFF;
if (written < wdata->subreq.len)
if (written < wdata->subreq.len) {
result = -ENOSPC;
else
} else {
result = written;
if (written > 0)
__set_bit(NETFS_SREQ_MADE_PROGRESS, &wdata->subreq.flags);
}
break;
case MID_REQUEST_SUBMITTED:
case MID_RETRY_NEEDED:

View file

@ -4615,6 +4615,7 @@ smb2_readv_callback(struct mid_q_entry *mid)
__set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags);
rdata->result = 0;
}
__set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags);
}
trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, rdata->credits.value,
server->credits, server->in_flight,
@ -4842,10 +4843,12 @@ smb2_writev_callback(struct mid_q_entry *mid)
cifs_stats_bytes_written(tcon, written);
if (written < wdata->subreq.len)
if (written < wdata->subreq.len) {
wdata->result = -ENOSPC;
else
} else if (written > 0) {
wdata->subreq.len = written;
__set_bit(NETFS_SREQ_MADE_PROGRESS, &wdata->subreq.flags);
}
break;
case MID_REQUEST_SUBMITTED:
case MID_RETRY_NEEDED:
@ -5014,7 +5017,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
}
#endif
if (test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags))
if (wdata->subreq.retry_count > 0)
smb2_set_replay(server, &rqst);
cifs_dbg(FYI, "async write at %llu %u bytes iter=%zx\n",

View file

@ -18,6 +18,11 @@ struct io_uring_cmd {
u8 pdu[32]; /* available inline for free use */
};
struct io_uring_cmd_data {
struct io_uring_sqe sqes[2];
void *op_data;
};
static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
{
return sqe->cmd;
@ -113,4 +118,9 @@ static inline struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd
return cmd_to_io_kiocb(cmd)->tctx->task;
}
static inline struct io_uring_cmd_data *io_uring_cmd_get_async_data(struct io_uring_cmd *cmd)
{
return cmd_to_io_kiocb(cmd)->async_data;
}
#endif /* _LINUX_IO_URING_CMD_H */
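
io_uring_cmd_get_async_data() is what lets btrfs_uring_encoded_read() earlier in this diff parse its arguments once, park them in op_data, and find them again when the command is issued a second time. A rough userspace model of that allocate-once pattern (the struct names are illustrative):

#include <assert.h>
#include <stdlib.h>

struct cmd_data { void *op_data; };	/* model of io_uring_cmd_data */
struct cmd { struct cmd_data async; };

struct op_state { int iovcnt; };	/* model of the btrfs op_data */

/* First issue allocates and parses; a reissue finds the cached state. */
static struct op_state *get_op_state(struct cmd *cmd)
{
	struct op_state *s = cmd->async.op_data;

	if (!s) {
		s = calloc(1, sizeof(*s));
		if (!s)
			return NULL;
		s->iovcnt = 4;			/* "parse" once */
		cmd->async.op_data = s;
	}
	return s;
}

int main(void)
{
	struct cmd cmd = { 0 };
	struct op_state *first = get_op_state(&cmd);
	struct op_state *again = get_op_state(&cmd);	/* reissue path */

	assert(first && first == again && again->iovcnt == 4);
	free(first);
	return 0;
}
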

Some files were not shown because too many files have changed in this diff